There are many different kinds of sensitive data that can be exposed, each with its own particular exploits and consequences. This article will focus on what we have categorized as “systems information”: data that describes digital operations, such as systems inventory, configuration details, data center and cloud design, performance metrics and analyses, and application code, along with IT business data, such as equipment spend, vendor discounts, and budgeting. By better understanding what this data is, why it matters, and what the potential consequences of its exposure are, we can act more effectively to control sensitive information and prevent future data breaches.
During the course of UpGuard’s cyber risk research, we uncover many assets that are publicly readable: cloud storage, file synchronization services, code repositories, and more. Most data exposures occur because of publicly readable assets, where sensitive and confidential data is leaked to the internet at large by way of a permissions misconfiguration. This type of exposure is damaging enough for most organizations— it disrupts the trust between customer and company by leaking personally identifiable information (PII), and often reveals trade secrets and other proprietary data that businesses depend on to maintain a competitive edge. But there’s another type of resource we encounter as well: assets that are publicly writable.
Technology conference season is in full swing, with so many events going on that even large ones like PuppetConf and AWS re:Invent have been forced to overlap. While part of the UpGuard team traveled to Las Vegas, two of us stayed in San Francisco for a different style of conference. Far from the madding crowds of general interest vendor-backed extravaganzas, we presented at FinDEVr, a conference with a few hundred people and a sharp focus: improving the technology of financial services.
Introduction
Previously we introduced the concept of cloud leaks, and then examined how they happen. Now we’ll take a look at why they matter. To understand the consequences of cloud leaks for the organizations involved, we should first take a close look at exactly what it is that’s being leaked. Then we can examine some of the traditional ways information has been exploited, as well as some new and future threats such data exposures pose.
No, we aren't talking about your burger-inhaling operator passing out on the job, leaving your precious IT assets unattended. You've probably guessed that we're referring to the latest Wendy's data breach announcement: on June 9th, the international fast food chain disclosed that its January 2016 security compromise was, in fact, a lot worse than originally stated—potentially eclipsing the Home Depot and Target data breaches.
The insurance industry has been consistently targeted for cyber attacks as of late, for good reason: sensitive data is at the heart of every process—from handling health insurance claims to archiving medical histories. And because medical records are worth ten times more than credit card information on the black market, firms handling said data are required to take extra precautions in bolstering information security. However, every once in a while hackers are granted freebies—as was the case recently with Systema Software, a small insurance claims management solution provider.
It's not pleasant to think about, but the fact is that when we go to work we are expected to do things. But what are the things that need doing? If we can answer that question without hours of meetings or dozens of emails we can finish our work and do...other things. UpGuard's new Tasks feature provides a lightweight project management system designed especially to maintain quality in a rapidly changing environment.
When we began building a Cyber Risk Research team at UpGuard, we knew there were unavoidable risks. We would be finding and publishing reports on sensitive, exposed data in order to stanch the flow of such private information onto the public internet. It seemed likely the entities involved would not always be pleased, particularly as the majority of the exposures we discovered would be attributable to human error and/or internal process failures. As a team, however, we believed those were risks worth taking. By performing the public good of securing exposures, UpGuard could help to spur long-term improvements and raise awareness of the growing data security epidemic.
The European Union’s GDPR regulations go into effect in May of this year. In essence, GDPR is a strict data privacy code that holds companies responsible for securing the data they store and process. Although GDPR was approved in April 2016, companies affected by the regulations are still struggling to reach compliance by the May 2018 deadline. A lot of hype has been built up about this systemic unpreparedness, especially in the cybersecurity sector, where GDPR is seen as “the coming storm.” Despite this atmosphere, the main challenge facing GDPR-covered entities remains largely hidden: third-party vendors.
One of the challenges of managing third-party risk is effectively managing large portfolios of vendors. Your business may have hundreds, even thousands of vendors, each used differently and presenting different kinds of information security risks. To help organize and manage your vendors, UpGuard CyberRisk uses a common pattern found in email clients such as Gmail. You can organize your vendors in the way that makes sense to you using labels.
Information technology has changed the way people do business. For better, it has brought speed, scale, and functionality to all aspects of commerce and communication. For worse, it has brought the risks of data exposure, breach, and outage. The damage that can be done to a business through its technology is known as cyber risk, and with the increasing consequences of such incidents, managing cyber risk, especially among third parties, is fast becoming a critical aspect of any organization. The specialized nature of cyber risk requires the translation of technical details into business terms. Security ratings and cyber risk assessments serve this purpose, much like a credit score does for assessing the risk of a loan. But the methodologies employed by solutions in this space vary greatly, as do their results.
Meltdown/Spectre Overview
Meltdown and Spectre are critical vulnerabilities affecting a large swathe of processors: “effectively every [Intel] processor since 1995 (except Intel Itanium and Intel Atom before 2013),” as meltdownattack.com puts it. ARM and AMD processors are susceptible to the Spectre attacks, and some ARM chips to portions of Meltdown, though they are much less at risk than the affected Intel hardware. Exploiting Meltdown allows attackers to read memory belonging to other programs, effectively allowing them to steal whatever data they want.
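Where a quick check is useful, modern Linux kernels report their own assessment of these flaws through sysfs. Below is a minimal sketch that simply reads those files; it assumes a kernel new enough (roughly 4.15 and later) to publish the /sys/devices/system/cpu/vulnerabilities interface.

```python
"""Report the kernel's own Meltdown/Spectre status (Linux 4.15+ only)."""
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report() -> None:
    if not VULN_DIR.is_dir():
        print("This kernel does not report CPU vulnerabilities via sysfs.")
        return
    # Each file is named for a flaw (meltdown, spectre_v1, spectre_v2, ...)
    # and contains "Vulnerable", "Mitigation: ...", or "Not affected".
    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name:20} {entry.read_text().strip()}")

if __name__ == "__main__":
    report()
```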
A Worst Case Scenario
This week it was revealed that a severe vulnerability in a majority of processors has existed for nearly ten years, affecting millions of computers around the world, including all the major cloud providers who rely on Intel chips in their data centers. Essentially, this flaw grants complete access to protected memory, including secrets like passwords, from any program on the exploited computer, even from the web. This flaw is so serious that allegations have already been made that Intel’s CEO sold millions of dollars of stock in the company after the flaw was found but before it was revealed to the public, the idea being that a vulnerability of this magnitude would be enough to substantially hurt Intel on the market, even though some ARM and AMD processors are affected as well.
Microsoft’s enterprise software powers the majority of large environments. Though often hybridized with open source solutions and third party offerings, the core components of Windows Server, Exchange, and SQL Server form the foundation of many organizations’ data centers. Despite their prevalence in the enterprise, Microsoft systems have also carried a perhaps unfair reputation for insecurity compared to Linux and other enterprise options. But the insecurities exploited in Microsoft software are overwhelmingly caused by misconfigurations and process errors, not flaws in the technology: patches are not applied on a quick and regular cadence; settings are not hardened according to best practices; dangerous defaults are left in place in production; unused modules and services are not disabled and removed. Microsoft has come a long way to bring its out-of-the-box security up to snuff with its famous usability, not to mention introducing command-line and programmatic methods by which to manage its systems. But even now, the careful control necessary to run a secure and reliable data center on any platform can be difficult to maintain all of the time at scale.
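As one illustration of checking for a dangerous default, the sketch below uses Python’s standard winreg module to see whether the legacy SMBv1 protocol is still enabled on a Windows server. It must run on the host itself, and while the registry value (SMB1 under the LanmanServer parameters key) follows Microsoft’s documented convention, treat the exact path as an assumption to verify for your Windows version.

```python
"""Check whether SMBv1 is disabled on a Windows host (run locally)."""
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

def smb1_enabled() -> bool:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "SMB1")
        except FileNotFoundError:
            # An absent value means the OS default applies, which on older
            # Windows Server releases leaves SMBv1 enabled.
            return True
        return value != 0

if __name__ == "__main__":
    print("SMBv1 enabled:", smb1_enabled())
```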
Despite spending billions on cybersecurity solutions, private industry, government and enterprises alike are faced with the continued challenge of preventing data breaches. The reason cybersecurity solutions have not mitigated this problem is that the overwhelming majority of data exposure incidents are due to misconfigurations, typically by way of third-party vendors, not cutting-edge cyber attacks. These misconfigurations are the result of process errors during data handling, and often leave massive datasets completely exposed to the internet for anyone to stumble across.
GitHub is a popular online code repository used by over 26 million people across the world for personal and enterprise use. GitHub offers a way for people to collaborate on a distributed code base with powerful versioning, merging, and branching features. GitHub has become a common way to outsource the logistics of managing a code base repository so that teams can focus on the coding itself. But as GitHub has become a de facto standard, even among software companies, it has also become a vector for data breaches: the code stored on GitHub ranges from simple student tests to proprietary corporate software worth millions of dollars. Like any server, network device, database, or other digital surface, GitHub suffers from misconfiguration.
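A first pass at auditing repository exposure can be made with GitHub’s public REST API. Here is a minimal sketch, assuming an unauthenticated client and illustrative owner/repo names: a 200 response means the repository is publicly visible, while a 404 means it is private (or does not exist).

```python
"""Check whether a GitHub repository is publicly visible."""
import json
import urllib.error
import urllib.request

def is_public(owner: str, repo: str) -> bool:
    url = f"https://api.github.com/repos/{owner}/{repo}"
    # GitHub's API rejects requests without a User-Agent header
    req = urllib.request.Request(url, headers={"User-Agent": "repo-visibility-check"})
    try:
        with urllib.request.urlopen(req) as resp:
            return not json.load(resp).get("private", False)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # Unauthenticated requests see private repos as 404
            return False
        raise

if __name__ == "__main__":
    print(is_public("octocat", "Hello-World"))  # a well-known public repo
```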
Introduction
The Internet Footprint
There is much more to a company’s internet presence than just a website. Even a single website has multiple facets that operate under the surface to provide the functionality users have become accustomed to. The internet footprint for every company comprises all of their websites, registered domains, servers, IP addresses, APIs, DNS records, certificates, vendors, and other third parties: anything that is accessible from the internet. The larger the footprint, the more digital surfaces it contains, the more complex its inner workings, and the more resources it requires to maintain. And although having an internet presence is basically a given these days, the risk incurred by that presence is not always acknowledged.
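Mapping even a small slice of that footprint can start with nothing more than DNS. Below is a minimal sketch using only the standard library, with a hypothetical domain and candidate host list standing in for a real inventory.

```python
"""Resolve common hostnames to sketch a domain's internet footprint."""
import socket

DOMAIN = "example.com"  # hypothetical target domain
CANDIDATES = ["www", "mail", "vpn", "api", "dev", "staging"]

def enumerate_footprint(domain, prefixes):
    found = {}
    for prefix in prefixes:
        host = f"{prefix}.{domain}"
        try:
            _, _, addresses = socket.gethostbyname_ex(host)
            found[host] = addresses
        except socket.gaierror:
            pass  # name does not resolve; not part of the visible footprint
    return found

if __name__ == "__main__":
    for host, addrs in enumerate_footprint(DOMAIN, CANDIDATES).items():
        print(host, "->", ", ".join(addrs))
```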
The Problem of Digitization
The digitization of business has increased the speed of commerce, the scope of customers, the understanding of consumer habits, and the efficiency of operations across the board. It has also increased the risk surface of business, creating new dangers and obstacles for the business itself, not just its technology. This risk is compounded by the interrelations of digital businesses as data handling and technological infrastructure are outsourced, because each third party becomes a vector for breach or exposure for the primary company. The technical nature of this risk makes it inaccessible to those without advanced skills and knowledge, leaving organizations without visibility into an extremely valuable and critical part of the business.
Security ratings are like credit ratings, but for the assessment of a company’s web-facing applications. Where a credit rating lets a company determine the risk of lending to a prospective debtor, a security rating lets it decide how risky it will be to deal with another in handling data. The comparison holds up even further when we remember one of the key principles of cyber resilience: that cyber risk “is actually business risk, and always has been.”
In June of 2017 the U.S. Chamber of Commerce posted the “Principles for Fair and Accurate Security Ratings,” a document supported by a number of organizations interested in the emerging market for measuring cyber risk. The principles provide a starting point for understanding the current state of security ratings and for establishing a shared baseline for assessing vendors in that market.
When we think about cyber attacks, we usually think about the malicious actors behind the attacks, the people who profit or gain from exploiting digital vulnerabilities and trafficking sensitive data. In doing so, we can make the mistake of ascribing the same humanity to their methods, thinking of people sitting in front of laptops, typing code into a terminal window. But the reality is both more banal and more dangerous: just like businesses, governments, and other organizations, hackers have begun to index data and automate hacking processes: the work of finding and exploiting internet-connected systems is largely performed by computers. There’s no security in obscurity if there’s no obscurity.
Guest post by UpGuard engineer Nickolas Littau
While running a series of unit tests that make API calls to Amazon Web Services (AWS), I noticed something strange: tests were failing unpredictably. Sometimes all the tests would pass, then on the next run, a few would fail, and the time after that, a different set would fail. The errors I was getting didn’t seem to make any sense:
The way businesses handle the risks posed by their technology is changing. As with anything, adaptability is survivability. When the techniques, methods, and philosophies of the past aren’t working, the time has come to find something better to replace them. Cyber resilience is a set of practices and perspectives that mitigate risk within the processes and workflow of normal operations in order to protect organizations from their own technology and the people who would try to exploit it. This includes all forms of cyber attacks, but also applies to process errors inside the business that put data and assets in danger without outside help.
Technology and Information
How much digital technology is required for your business to operate? Unless this document has traveled back in time, the chances are quite a lot. Now consider how much digital technology your vendors require to operate. The scope of technology grows quickly when you consider how vast the interconnected ecosystem of digital business really is. But digital business isn’t just about technology, it’s about information. For many companies, the information they handle is just as critical as the systems that process it, if not more so.
When we examined the differences between breaches, attacks, hacks, and leaks, it wasn’t just an academic exercise. The way we think about this phenomenon affects the way we react to it. Put plainly: cloud leaks are an operational problem, not a security problem. Cloud leaks are not caused by external actors, but by operational gaps in the day-to-day work of the data handler. The processes by which companies create and maintain cloud storage must account for the risk of public exposure.
Making Copies
In our first article on cloud leaks, we took a look at what they were and why they should be classified separately from other cyber incidents. To understand how cloud leaks happen and why they are so common, we need to step back and look at the way that leaked information is generated, manipulated, and used in the first place. It’s almost taken as a foregone conclusion that these huge sets of sensitive data exist and that companies are doing something with them, but when you examine the practice of information handling, it becomes clear that organizing a resilient process is quite difficult at scale; operational gaps and process errors lead to vulnerable assets, which in turn lead to cloud leaks.
Breaches, Hacks, Leaks, Attacks
It seems like every day there’s a new incident of customer data exposure. Credit card and bank account numbers; medical records; personally identifiable information (PII) such as address, phone number, or SSN: just about every aspect of social interaction has an informational counterpart, and the social access this information provides to third parties gives many people the feeling that their privacy has been severely violated when it’s exposed.
One of the challenges of building and running information technology systems is solving novel problems. That's where frameworks like scrum and agile come in: getting from the unknown to the known with a minimum of frustration and waste. Another challenge is performing known tasks correctly every single time. Here runbooks, checklists, and documentation are your friends. And yet, despite a crowded market for IT process automation offerings, misconfigurations and missed patches are still a problem, and not just a problem, but the root cause of 75-99% of outages and breaches, depending on platform.
Executable Documentation
Nearly all large enterprises use the cloud to host servers, services, or data. Cloud hosted storage, like Amazon's S3, provides operational advantages over traditional computing that allow resources to be automatically distributed across robust and geographically varied servers. However, the cloud is part of the internet, and without proper care, the line separating the two disappears completely in cloud leaks— a major problem when it comes to sensitive information.
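One concrete check along those lines is to ask S3 itself whether a bucket’s ACL grants access to everyone. This is a minimal sketch using boto3 (the AWS SDK for Python, an assumed dependency) and a hypothetical bucket name; it requires credentials permitted to read the bucket’s ACL.

```python
"""Flag an S3 bucket whose ACL grants access to all users."""
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def bucket_is_public(bucket_name: str) -> bool:
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS:
            return True  # READ or worse is granted to the entire internet
    return False

if __name__ == "__main__":
    print(bucket_is_public("example-sensitive-data"))  # hypothetical bucket
```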
Given the complexity of modern information technology, assessing cyber risk can quickly become overwhelming. One of the most pragmatic guides comes from the Center for Internet Security (CIS). While CIS provides a comprehensive list of twenty controls, they also provide guidance on the critical steps that "eliminate the vast majority of your organisation's vulnerabilities." These controls are the foundation of any cyber resilience platform and at the center of UpGuard's capabilities.
Global in scale, with across the board press coverage, the WannaCry ransomware attack has quickly gained a reputation as one of the worst cyber incidents in recent memory. Despite the scale, this attack relied on the same tried and true methods as other successful malware: find exposed ports on the Internet, and then exploit known software vulnerabilities. When put that way, the attack loses its mystique. But there’s still a lot we can learn from this incident, and we’ve summed up the five most important takeaways to keep in mind going forward.
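WannaCry’s first step was simply finding hosts with SMB (TCP port 445) reachable from the internet. Here is a minimal sketch of that check against a hypothetical address; defenders can run the same test against their own ranges to confirm nothing is listening where it shouldn’t be.

```python
"""Test whether TCP port 445 (SMB) is reachable on a host."""
import socket

def smb_exposed(host: str, timeout: float = 2.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 if the TCP handshake succeeds
        return sock.connect_ex((host, 445)) == 0

if __name__ == "__main__":
    # documentation-range address; substitute a host you are authorized to test
    print(smb_exposed("192.0.2.10"))
```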
UpGuard makes a cyber resilience platform designed for exactly the realities that necessitate regulations like New York State Department of Financial Services 23 NYCRR 500. On one hand, businesses need to store, process, and maintain availability for growing stores of valuable data; on the other, the very conditions for market success open them to attacks from increasingly sophisticated and motivated attackers. Balancing these requirements makes a business resilient, and UpGuard provides the visibility, analysis, and automation needed to thrive while satisfying regulations like NYCRR 500.
Why dashboards?
Nobody’s perfect. Success is almost always determined through trial and error, learning from mistakes and course-correcting to avoid them in the future. The length of this cycle (from experiment to result, incorporated into future decisions) determines how quickly a trajectory can be altered, which in turn offers more opportunities to succeed. However, capturing and using hard data to make these adjustments is more difficult than it seems. Dashboards visualize real time data and recent trends, giving people insight into whether their efforts are succeeding, assuming they’re using the right metrics.
So I've finally gotten the go-ahead from higher-ups to join the twenty-first century and use cloud hosting. Now I need to prove that running in AWS is not just easier than maintaining our own farm, but more stable and secure. To do this, I need to be able to monitor each of my instances for configuration drift, ensure that they are properly provisioned, and maintain visibility into dependencies like load balancers and security groups. Fortunately, UpGuard provides all of this information, so even if something were to go wrong I could catch it before someone else does.
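The raw data for that kind of monitoring is available straight from the AWS API. Here is a minimal sketch with boto3 (an assumed dependency, using whatever credentials and region your environment provides) that lists each instance alongside its attached security groups.

```python
"""List EC2 instances and the security groups attached to each."""
import boto3

def instance_security_groups():
    ec2 = boto3.client("ec2")
    # Paginate in case the account has more instances than one API page
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                groups = [g["GroupName"] for g in instance.get("SecurityGroups", [])]
                yield instance["InstanceId"], groups

if __name__ == "__main__":
    for instance_id, groups in instance_security_groups():
        print(instance_id, "->", ", ".join(groups) or "(no groups)")
```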
UpGuard is proud to announce that security expert Chris Vickery is joining our team as a cyber risk analyst, bringing with him a stunning track record of discovering major data breaches and vulnerabilities across the digital landscape. Chris comes to us from his previous role as a digital security researcher, where among other achievements, he discovered a publicly accessible database containing the voter registration records for 93.4 million Mexican citizens, protecting more than seventy percent of the country’s population from the risk of exposure of their personal information.
A funny thing has happened as the digitization of business has sped up over the last ten years: process cadence has not kept up. Regulatory compliance standards often use quarters, or even years, as audit intervals, and in unregulated industries that interval can be longer still. But in the data center, changes happen all the time, and the risk profile of the business changes along with them. Determining which changes are the root cause of a problem can be the difference between fixing it and having it happen again.
Going from nothing to automation using one of the many tools available can be a daunting task. How can you automate systems when you’re not even 100% sure how they’ve been configured? The documentation is months out of date and the last guy to configure anything on that box has since left the company to ply his trade somewhere that will more fully appreciate his Ops cowboy routine.
Few corporate rivalries are as legendary as these two enterprise contenders; admittedly, there have been more than a fair share of comparisons pitting the pair against each other over the last century. So we're offering a twist to the traditional cola challenge: how do Pepsi and Coke stack up in terms of cyber resilience? Read more to find out.
Leading security researchers have confirmed that the U.S. Air Force (USAF) suffered a massive data breach leading to the exposure of sensitive military data and senior staff information. Here's what you need to know about this latest security failure involving the U.S. government.
On February 18th, 2017, Google security researchers discovered a massive leak in Cloudflare's services that resulted in the exposure of sensitive data belonging to thousands of its customers. Here's what you need to know about the Cloudbleed bug and what can be done to protect your data.
Managing complexity in heterogeneous infrastructures is a challenge faced by all enterprise IT departments, even if their environments are limited to *NIX or Windows. In the case of the latter, UpGuard's new RSoP/GPO scanning capability streamlines remediation and compliance efforts by enabling Windows operators to easily scan and monitor the disparate security configurations of their Active Directory (AD) instances and Windows endpoints.
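For a quick manual look at the same kind of data, Windows ships a command-line tool that dumps the Resultant Set of Policy for a machine. Below is a minimal sketch that shells out to gpresult and captures the summary; deeper parsing is left out because the report format varies by Windows version, and computer-scope results require an elevated prompt.

```python
"""Capture a Resultant Set of Policy (RSoP) summary on a Windows host."""
import subprocess

def rsop_summary() -> str:
    # "gpresult /r" prints the RSoP summary for the current user and computer
    result = subprocess.run(
        ["gpresult", "/r"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(rsop_summary())
```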
As the two leading mobile telecom providers in the U.S., AT&T and Verizon are perpetually at war on almost all fronts—pricing, quality of service, network coverage, and more. But with data breaches at an all time high, security fitness may soon become a critical factor for consumers evaluating wireless service providers. Let's find out how the two compare when it comes to measures of enterprise cyber resilience.
Arby's announced last week that its recently disclosed data breach may impact 355,000 credit card holders who dined at its restaurants between October 2016 and January 2017. Are fast food vendors resilient enough to sustain future cyber attacks and, more importantly, protect consumers against online threats?
Booksellers and electronics retailers aren't the only brick-and-mortar businesses challenged by the rise of highly agile, online-only competitors—traditional retail banking institutions also face stiff competition from Internet-based consumer banking upstarts. But are these born-in-the-cloud banks and financial services offerings safer than their traditional counterparts? Let's take a look at the leading online banks to see if they're equipped to handle today's cyber threats.
With all the conveniences of modern air travel—mobile check-ins, e-gates, in-flight wifi, and more—it's easy to assume that the world's leading airlines have addressed the inherent cyber risks of digitization. But the safety of in-air passengers is just one aspect of airline customer security; are these companies doing their best to protect customers against online security compromises? Let's take a look at the world's leading airlines to find out.
UpGuard's Events system provides a communication hub to send the data that UpGuard gathers to external systems. Integration between technologies is critical to high performing digital businesses, and UpGuard's Events system provides a simple way to get the information you need to the places where you need it.
2016 was arguably the year when cybersecurity events entered into the global stream of consciousness, from the sabotage of national banks to the hacking of elections. And though we're barely into 2017, the breach announcements have already begun: on January 3rd, a data breach was discovered involving the sensitive data of health workers employed by the US military's Special Operations Command (SOCOM). An increase in government-related security incidents is one of our top predictions for 2017—here are 11 other cybersecurity predictions for the new year.
AAA predicts that a record number of Americans will be taking to the skies and roads this holiday season: 103 million between Dec. 23 and Jan. 2, a 1.5% increase over 2015. 57% of those travel reservations (roughly 59 million travelers' worth) were booked online. Airfare/hotel/car rental comparison websites are an increasingly popular way to book travel these days, but how good are they at protecting their users' data? Let's take a look at the top 8 online travel aggregators' CSTAR ratings to find out.
At the start of 2015, Gartner predicted that DevOps adoption would evolve from a niche to mainstream enterprise strategy, resulting in 25% of Global 2000 companies drinking its Kool-Aid by 2016. And while the hype—tempered by the realities of implementation—has more or less died down as of late, the methodology's value to enterprises is no longer a debatable matter. Here are some highlights from 2016 detailing how the year panned out for DevOps and its practitioners.
On November 29th, after a high-profile year of published leaks and hacks targeting the Democratic Party, Wikileaks struck once more, albeit against an unexpected target: HBGary Federal, a now-defunct government contracting affiliate of the eponymous cybersecurity firm. It was not a name unfamiliar to online observers; in 2011, HBGary Federal CEO Aaron Barr had boldly claimed to have identified the leading members of internet hacking collective Anonymous, drawing attention from federal investigators eager to identify and arrest the culprits behind DDoS attacks in support of Wikileaks.
Vulnerability assessment is a necessary component of any complete security toolchain, and the most obvious place to start for anyone looking to improve their security. Ironically, starting with vulnerability assessment can actually degrade an organization's overall defense by shifting focus from the cause of most outages and breaches: misconfigurations.
It’s hard to believe Thanksgiving is almost here, and with it, the frenzy of the holiday shopping season fast approaches. Whether you are camping out overnight for “Black Friday” bargains, or waiting for the online deals of “Cyber Monday,” the odds are you are more nervous than ever about the safety and security of your financial information against holiday scammers. At least, so indicate the results of UpGuard’s survey of over 1,200 respondents in November 2016. The survey finds that 95% of consumers are to some degree concerned about the security of their information online, and more than half would break with their favorite brands if they knew their information was at risk; full survey results can be viewed here.
Containers are all the rage these days, and for good reason: technologies such as Docker and CoreOS drastically simplify the packaging and shipping of applications, enabling them to scale without additional hardware or virtual machines. But with these benefits come issues related to management overhead and complexity—namely, how can developers quickly achieve visibility and validate configurations across distributed container clusters? The answer is with UpGuard's new etcd monitoring capabilities.
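Before layering monitoring on top, it helps to know the health interface etcd already exposes. Here is a minimal sketch, assuming an etcd member listening on the conventional client port 2379 on localhost and serving the standard /health endpoint.

```python
"""Poll an etcd member's health endpoint."""
import json
import urllib.request

def etcd_healthy(endpoint: str = "http://127.0.0.1:2379") -> bool:
    with urllib.request.urlopen(f"{endpoint}/health", timeout=3) as resp:
        payload = json.load(resp)
    # etcd reports {"health": "true"} (a string) when the member is healthy
    return payload.get("health") == "true"

if __name__ == "__main__":
    print("etcd healthy:", etcd_healthy())
```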
Several of the world's leading airlines are getting the travel season off to a rocky start: last week, American Airlines and Alaska Airlines resolved a technical glitch causing reservation/check-in failures and delays across 15 flights. With the holidays approaching, can airlines weather mounting losses caused by their aging computer systems and IT infrastructures?
Recently, New York’s Department of Financial Services and Gov. Andrew Cuomo released their long-awaited proposal for cybersecurity regulations regarding banking and financial services companies. The proposal, if implemented, would be the first mandatory state-level regulations on cybersecurity and promises to deliver sweeping protections to consumers and financial institutions alike. In Gov. Cuomo’s words: "This regulation helps guarantee the financial services industry upholds its obligation to protect consumers and ensure that its systems are sufficiently constructed to prevent cyberattacks to the fullest extent possible."
Government, politics, and cybersecurity: these topics may seem plucked from recent U.S. election headlines, but they're actually themes that have persisted over the last decade, reaching a pinnacle with the massive OPM data breach that resulted in the theft of over 22 million records, including fingerprints, social security numbers, personnel information, security-clearance files, and more. Last month, a key government oversight panel issued a scathing 241-page analysis blaming the agency for jeopardizing U.S. national security for generations. The main culprit? Lack of visibility.
This is not an opener for a sex-ed public service announcement, but in fact the million-dollar question for today's enterprise CISOs and CROs: which vendor in the supply chain will prove to be the riskiest bedfellow? With 63% of all data breaches caused directly or indirectly by third party vendors, enterprise measures to bolster cyber resilience must now include the evaluation of partners' security as part of a broader cyber risk management strategy. Easier said than done: most third parties are unlikely to admit to their security shortcomings, and—as it turns out—even if they did, most firms wouldn't believe them anyway.
Last week, leading global ERP vendor SAP was busier than usual in the patch department: it closed a record number of issues for a single month, addressing 48 vulnerabilities, one of them an authentication bypass vulnerability previously left unaddressed for 3 years. Given how mission-critical ERP systems are for centralizing business operations these days, is it safe to assume that ERP vendors are serious about their customers' security? Let's take a look at the leading solution providers in this category to find out.
As enterprises resign themselves to the sobering fact that security compromises are unavoidable, another resulting inevitability is coming into play: ensuing lawsuits and class actions spurred by data breaches and customer data loss. Last week, the Republican presidential nominee's hotel chain and the U.S.' third largest search engine came to terms with this reality. What does the future hold for organizations facing inexorable data breaches coupled with the spectre of resulting litigation?
Does filling out an online survey in exchange for a few bucks sound too good to be true? For ClixSense users, this is turning out to be the case: last week, the leading paid-to-click (PTC) survey firm admitted to a massive data breach involving virtually all of its users' accounts, roughly 6.6 million records in total. With so many giving in to the allure of easy money, PTC firms should be on top of securing the privileged data of the survey takers they're bankrolling. Let's find out how the top 5 compare when it comes to fulfilling this critical responsibility.
For Spotify CEO Daniel Ek, the goal for the rest of 2016 should be simple: don’t rock the boat. The Swedish music streaming service, which is widely expected to go public late next year, is already locked in enough significant conflicts to occupy most of Ek’s waking hours.
Essential to enterprise security, or a waste of time? Security professionals' opinions regarding penetration testing (pen testing) seem to fall squarely on either side of the spectrum, but, as with most IT practices, its efficacy depends on application and scope. And while pen testing alone is never enough to prevent data breaches from occurring, information gleaned from such efforts nonetheless plays a critical role in bolstering a firm's continuous security mechanisms.
Leading cloud storage provider Dropbox is arguably having its worst month since launching back in 2007—but with over half a billion users, it's somewhat surprising that serious issues have only begun to surface between the ubiquitous service and the people trusting it with their files. First, in a recent announcement reminiscent of LinkedIn's latest data breach fiasco, Dropbox announced several weeks ago that over 68 million emails and passwords were compromised in a previously disclosed 2012 data breach. And now, security experts are criticizing the company for misleading OS X users into granting admin password access and root privileges to their systems. What recourse do consumers have when cloud services providers "drop the box" on security, or even worse—when their actions directly jeopardize the users they're supposed to protect?
As election year moves into the final stretch, news coverage wouldn't be complete without another mention of a politically motivated data breach or cybersecurity incident. Of course, several months ago the DNC's emails were compromised by hackers, resulting in the theft and exposure of 19,000 emails and related documents. This pales in comparison, however, to the recent FBI announcement of data breaches involving both Illinois and Arizona's voter registration databases. If the controls critical to securing election systems continue to fail, how can participants in the democratic process be sure that their votes won't be hijacked?
When you use the internet, your computer has a conversation with a web server for every site you visit. Everything you submit in a form, any data you enter, becomes part of that conversation. The purpose of encryption is to ensure that nobody except you and the server you’re talking to can understand that conversation, because often sensitive information such as usernames and passwords, credit card data, and social security numbers are part of that conversation. Eavesdropping on these digital conversations and harvesting the personal information contained therein has become a profitable industry. But encryption isn’t an on/off switch. It requires careful configuration. In other words, the padlock isn’t always enough.
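A small example of what “careful configuration” means in practice: you can ask a server which protocol version and certificate it will actually negotiate. This is a minimal sketch using Python’s standard ssl module against a hypothetical host.

```python
"""Inspect the TLS version and certificate a web server negotiates."""
import socket
import ssl

def inspect_tls(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # modern defaults; verifies the cert
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            subject = dict(pair[0] for pair in cert["subject"])
            print("Negotiated protocol:", tls.version())
            print("Issued to:", subject.get("commonName"))
            print("Certificate expires:", cert["notAfter"])

if __name__ == "__main__":
    inspect_tls("example.com")  # hypothetical host
```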
Our new digital reputation scan provides a fast and easy way to get a risk assessment for your (or any) business. We look at the same stuff that other external risk assessment tools do: SSL configurations, breach history, SPF records and other domain authenticity markers, blacklists, and malware activity. We're happy to offer this service for free, because that information is public and we believe that it's what's inside that really matters. In selecting which checks would go into our risk assessment, we here at UpGuard looked at similar site assessment tools and selected only the checks that we thought were relevant to our goal: risk assessment, which overlaps with, but isn't identical to, website best practices. Plus, there are already fine tools for performing those best-practices functions, so why duplicate them? We also intentionally omitted checks we thought would not be significant for calculating the risk of a data breach and the damage it would cause. Most of the elements we include in our external assessment are not controversial, but one resulted in arguments lasting several days: the CEO approval rating.
If you regularly use a computer, chances are you spend at least part of your time reading internet news. If you have a subscription, you might even log in and enter your payment info. But how secure are news sites? Here at UpGuard, we took a look at six of the top news media sites on the internet to see how their security stacked up. Many big names had low scores, while a few did very well. What does this mean for the average online news reader?
Whether you’re deploying hundreds of Windows servers into the cloud through code, or hand-building physical servers for a small business, having a proper method to ensure a secure, reliable environment is crucial to success. Everyone knows that an out-of-the-box Windows server may not have all the necessary security measures in place to go right into production, although Microsoft has been improving the default configuration in every server version. UpGuard presents this ten-step checklist to ensure that your Windows servers have been sufficiently hardened against most attacks.
For believers in the old adage that “the love of money is the root of all evil,” it comes as no surprise that most data breaches are carried out for financial gain. Verizon's 2016 Data Breach Investigations Report (DBIR) reveals that 75 percent of cyber attacks appear to have been financially motivated; suffice to say, it's not surprising that ATMs are constantly in the crosshairs of cyber attackers.
Facebook's Mark Zuckerberg, Google's Sundar Pichai, Twitter's Jack Dorsey: what do these three high-flying CEOs have in common? Their social media accounts were all hijacked recently due to bad password habits. To be fair, these breaches occurred indirectly as a result of triggering events; for example, a massive LinkedIn data breach led to Zuckerberg's Twitter account getting hijacked. But one thing is for certain: the executive leadership of the world's leading tech companies are as prone to password management mishaps as the rest of us. And, as the latest LastPass vulnerability serves to illustrate, password management solutions may no longer be a safe alternative to memorizing passwords.
Tuesday, July 12th is online retail giant Amazon’s self-styled “Prime Day,” and the potential deals mean a surge in online shopping. Designing systems and applications to handle the amount of traffic a site like Amazon sees day to day, much less during promotions like Prime Day, can be difficult in and of itself. Throw in the complexity of cybersecurity and it becomes clear why so many online retailers have trouble keeping up. Amazon itself has relatively good security, but what exactly does that mean for customers? We’ll look at what measures Amazon has in place, what they mean, and a few simple steps to tighten security even further.
You've seen enough Hollywood blockbusters about casino heists to know that gambling institutions are constantly in the crosshairs of attackers, online and off. In the digital realm, however, better malware tools and access to deep funding make today's cyber criminals a far more serious threat than any bad movie plot, especially when lucrative payloads are there for the taking.
There are really only a few ways to get funding: backing from an individual such as a venture capitalist or billionaire, a partnership or strategic investment by a corporation or state agency, or getting a large number of people to each give you a very small amount of money. Crowdfunding websites claim to offer a platform for the latter, giving inventors, artists, and small businesses a method by which to propel themselves on the merits (or popularity) of their ideas, without needing the inside connections or extensive business acumen the other methods usually require. But because all of the transactions involved in crowdfunding take place on the internet, cybersecurity should be a number one concern for both users and operators of these websites. We used our external risk grader to analyze 7 crowdfunding industry leaders and see how they compare to each other and to other industries.
Glassdoor's 2016 Employees' Choice Awards Highest Rated CEO List includes household names like Marc Benioff, Mark Zuckerberg, and Tim Cook: CEOs of companies that also score high marks for strong security. Is there any correlation between a company's cyber risk profile and its CEO employee approval rating?
The term cyber risk is often used to describe a business’s overall cybersecurity posture, i.e., how much risk a business faces given the measures it has taken to protect itself. It’s often coupled with the idea of cyber insurance, the necessary coverage bridging the gap between what a company can do security-wise and the threats it faces day in and day out. Cybersecurity used to belong exclusively to the realm of Information Technology, one of many business silos that, while important, was only a small piece of the business and as such was often delegated to a C-level manager who interfaced with other executives as necessary. Today’s businesses have outgrown this model, as what used to be considered information technology has grown to encompass business itself, permeating every aspect of it, governing its speed, its range, its possibilities. As a CEO or CFO, the way your business handles information technology and begins to foster cyber resilience reflects the way you think about your company and its place in the contemporary market.
It’s 2016 and you have a cell phone. You also probably pay your cell phone bill online or through an app. Telecom companies handle the world’s communication and part of what that entails is securing that communication to guarantee privacy and integrity to their customers. Here at UpGuard, we scanned ten of the major telecom corporations with our external risk grader to see how their web and email security measured up. These are big money companies with many moving parts, but we’re focusing on the primary web presence a person would consider, for example www.att.com. Turns out there’s some good news and some bad news... depending on which carrier you use.
Yesterday you might have read about Facebook founder and user Mark Zuckerberg’s social media accounts getting “hacked.” Hacked is maybe not the right word here, since many people believe Zuck’s password was among the 117 million leaked LinkedIn passwords recently posted online. If this is true, it means that Zuckerberg used the same password for multiple websites, allowing the damage done by the LinkedIn hack to spread into other areas. If you have or want a job, chances are you also have a LinkedIn account, and if you had one back in 2012, it was probably one of the compromised accounts from that incident. Do you still use that password anywhere? Our 9 step password security checklist will help you secure your accounts, whether you’re a billionaire CEO or just someone who likes to post funny cat videos.
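One way to check whether a password has already appeared in dumps like the LinkedIn set, without sending the password itself anywhere, is the k-anonymity range endpoint of the Pwned Passwords API: you send only the first five characters of the password's SHA-1 hash and compare the returned suffixes locally. A minimal sketch:

```python
"""Check a password against the Pwned Passwords corpus via k-anonymity."""
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each line is "HASH_SUFFIX:COUNT"; only the 5-char prefix left this machine
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(times_pwned("correct horse battery staple"))
```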
The North American Electric Reliability Corporation (NERC) creates regulations for businesses involved in critical power infrastructure under the guidance and approval of the Federal Energy Regulatory Commission (FERC). A few of these, the Critical Infrastructure Protection (CIP) standards, protect the most important links in the chain and are enforced under penalty of heavy fines for non-compliance. Many of the CIP standards cover cybersecurity, as much of the nation’s infrastructure is now digital. To prove compliance with CIP standards, companies must have a system of record that can be shown to auditors to prove they have enacted the required security measures to protect their cyber assets.
The NERC CIP v5 standards will be enforced beginning in July of this year, but version 6 is already on the horizon. Previously, we examined the differences between v3 and v5, and we saw how the CIPs related to cybersecurity were evolving. This pattern continues in v6, with changes coming to some of the cyber CIPs and the addition of standards regarding “transient cyber assets and removable media,” but the major changes in v6 have to do with scope: which facilities are required to comply, and at what level they must comply (low, medium, or high impact). We’ll examine some of the differences coming up in CIPv6 and what they will mean for the industry.
While it’s not certain that society would descend into a zombie apocalypse overnight if the power grids failed, it is hard to imagine how any aspect of everyday life would continue in the event of a vast, extended electrical outage. Part of what makes electrical infrastructure resilient against these types of events are the North American Electric Reliability Corporation (NERC) regulatory standards, especially the Critical Infrastructure Protection (CIP) standards, which provide detailed guidelines for both physical and cyber security. The CIP standards evolve along with the available technology and known threats, so they are versioned to provide structured documentation and protocols for companies to move from one iteration of the standards to the next. But the jump from version 3 to version 5 involves many new requirements, so we'll look at some of the differences between the two and what they mean for businesses in the industry.
Salesforce.com's recent day-long outage, what many tech journalists have been referring to as "Outage #NA14," may actually end up costing the firm $20 million, according to financial services firm D.A. Davidson's estimates. The untimely incident occurred just as the company was gearing up to report its Q1 earnings; luckily, $20 million is a drop in the bucket compared to $1.92 billion, Salesforce.com's best first quarter yet. This may be enough to pacify Wall Street analysts, but can the world's largest business SaaS provider sustain another outage of similar proportions or greater?
Whether you’re running Microsoft’s SQL Server (soon to run on Linux) or the open source MySQL, you need to lock down your databases to keep your data private and secure. These 11 steps will guide you through some of the basic principles of database security and how to implement them. Combined with a hardened web server configuration, a secure database server will keep an application from becoming an entry point into your network and keep your data from ending up dumped on the internet. When provisioning a new SQL server, remember to factor security in from the get-go; it should be a part of your regular process, not something applied retroactively, as some key security measures require fundamental configuration changes for insecurely installed database servers and applications.
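As one concrete example of those principles, a common first check on a fresh MySQL install is whether anonymous accounts exist. Below is a minimal sketch using the mysql-connector-python package (an assumed dependency) with hypothetical connection details; it needs a login that can read the grant tables.

```python
"""Flag anonymous user accounts on a MySQL server."""
import mysql.connector  # assumed dependency: mysql-connector-python

def anonymous_accounts(host: str, user: str, password: str) -> list:
    conn = mysql.connector.connect(host=host, user=user, password=password)
    try:
        cursor = conn.cursor()
        # Anonymous accounts have an empty user name in the grant tables
        cursor.execute("SELECT user, host FROM mysql.user WHERE user = ''")
        return [f"''@'{row[1]}'" for row in cursor.fetchall()]
    finally:
        conn.close()

if __name__ == "__main__":
    # hypothetical credentials; substitute your own administrative login
    print(anonymous_accounts("127.0.0.1", "root", "change-me"))
```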
Arguably (in that people literally argue about it), there are two types of web servers: traditional servers like Apache and IIS, often backhandedly described as “full-featured,” and “lightweight” servers like lighttpd and nginx, stripped down for optimum memory footprint and performance. Lightweight web servers tend to integrate better into the modern, containerized environments designed for scale and automation. Of these, nginx is a frontrunner, serving major websites like Netflix, Hulu, and Pinterest. But just because nginx slams Apache in performance doesn’t mean it’s immune from the same security problems the old heavyweight endures. By following our 15-step checklist, you can take advantage of nginx’s speed and extensibility while still serving websites secured against the most common attacks.
You’ve hardened your servers, locked down your website and are ready to take on the internet. But all your hard work was in vain, because someone fell for a phishing email and wired money to a scammer, while another user inadvertently downloaded and installed malware from an email link that opened a backdoor into the network. Email is as important as the website when it comes to security. As a channel for social engineering, malware delivery and resource exploitation, a combination of best practices and user education should be enacted to reduce the risk of an email-related compromise. By following this 13 step checklist, you can make your email configuration resilient to the most common attacks and make sure it stays that way.
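Two of the most commonly missed items on that list, SPF and DMARC records, can be verified with a simple DNS query. Here is a minimal sketch using the dnspython package (an assumed dependency) against a hypothetical domain.

```python
"""Look up a domain's SPF and DMARC records."""
import dns.resolver  # assumed dependency: dnspython

def txt_records(name: str) -> list:
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    # Each TXT record may be split into multiple strings; rejoin them
    return [b"".join(r.strings).decode() for r in answers]

def check_email_auth(domain: str) -> None:
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print("SPF:  ", spf or "MISSING")
    print("DMARC:", dmarc or "MISSING")

if __name__ == "__main__":
    check_email_auth("example.com")  # hypothetical domain
```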
Putting a website on the internet means exposing that website to hacking attempts, port scans, traffic sniffers and data miners. If you’re lucky, you might get some legitimate traffic as well, but not if someone takes down or defaces your site first. Most of us know to look for the lock icon when we're browsing to make sure a site is secure, but that only scratches the surface of what can be done to protect a web server. Even SSL itself can be done many ways, and some are much better than others. Cookies store sensitive information from websites; securing these can prevent impersonation. Additionally, setting a handful of configuration options can protect your full website presence against both manual and automated cyber attacks, keeping your customers’ data safe from compromise. Here are 13 steps to harden your website and greatly increase the resiliency of your web server.
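Several of those configuration options are visible to anyone as HTTP response headers, so you can audit them from the outside. A minimal sketch using only the standard library against a hypothetical site:

```python
"""Report common security headers returned by a website."""
import urllib.request

HEADERS_TO_CHECK = [
    "Strict-Transport-Security",  # force HTTPS on return visits
    "Content-Security-Policy",    # restrict where scripts can load from
    "X-Frame-Options",            # block clickjacking via framing
    "X-Content-Type-Options",     # stop MIME-type sniffing
]

def audit_headers(url: str) -> None:
    with urllib.request.urlopen(url, timeout=5) as resp:
        for header in HEADERS_TO_CHECK:
            value = resp.headers.get(header)
            print(f"{header}: {value if value else 'MISSING'}")

if __name__ == "__main__":
    audit_headers("https://example.com")  # hypothetical site
```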
That’s a nice new Linux server you got there… it would be a shame if something were to happen to it. It might run okay out of the box, but before you put it in production, there are 10 steps you need to take to make sure it’s configured securely. The details of these steps may vary from distribution to distribution, but conceptually they apply to any flavor of Linux. By checking these steps off on new servers, you can ensure that they have at least basic protection against the most common attacks.
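One step on nearly every Linux hardening list is locking down the SSH daemon. Below is a minimal sketch that parses /etc/ssh/sshd_config for two settings worth verifying; the keyword matching is simplified and effective values can also come from daemon defaults or included files, so treat it as a first pass rather than a full audit.

```python
"""Spot-check risky settings in /etc/ssh/sshd_config."""
from pathlib import Path

WANTED = {
    "permitrootlogin": "no",         # no direct root logins
    "passwordauthentication": "no",  # keys only, no guessable passwords
}

def audit_sshd(path: str = "/etc/ssh/sshd_config") -> None:
    seen = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) == 2:
            seen[parts[0].lower()] = parts[1].strip().lower()
    for key, wanted in WANTED.items():
        actual = seen.get(key, "(unset, daemon default applies)")
        status = "OK" if actual == wanted else "CHECK"
        print(f"{status}: {key} = {actual} (want {wanted})")

if __name__ == "__main__":
    audit_sshd()
```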
There's no arguing that internet retailers have it tough these days: web server vulnerabilities, expiring SSL certificates, PCI DSS compliance, and a host of other issues keep the most vigilant of e-tailers on their toes; all this, mind you, against a harsh backdrop of increasing cyber threats. Even still, a handful manage to slip up when it comes to the most basic security measures, putting both their infrastructures and the data security of customers at risk. The following is a list of 11 online retailers who should know better.
Your medical records live in a database or file system on servers somewhere, on someone’s network, with someone’s security protecting them. A recent PBS article about cyber security in the healthcare industry reports that over 113 million medical records were compromised in 2015. Medical records, perhaps even more than financial data, are the epitome of sensitive, private data, yet the healthcare industry has reported breach after breach, with over a dozen separate breaches already logged in March of this year.
When it comes to Flash, the only thing you hear more about than its ubiquity are its problems. Despite denunciations from some of technology’s biggest names, Adobe’s Flash player still seems to be everywhere. For almost ten years now, people have been dealing with the security warnings, critical updates and browser incompatibilities for which Flash is infamous. Yet even now, 0-day exploits of Flash’s seemingly unending vulnerabilities threaten users as third-party Flash ads on otherwise trusted websites are used to breach security.
In the last few years, sports betting websites like DraftKings and FanDuel have exploded in popularity and controversy. Anyone who watched last year’s NFL season shouldn’t be surprised that those two sites alone spent over $200M on national television advertising in 2015, amounting to around 60,000 commercials. At the same time, betting sites have been in the news due to their questionable legality and the lawsuits being brought against them from various parties. With March Madness in full effect, people are turning to online gambling sites to place their bets. Aside from the increasing legal resistance these companies face, should users be concerned about the security of sharing their information with these sites? As it turns out, it depends on the site.
Cyber attackers are, above all else, opportunists—malware and viruses require time and resources to develop and are therefore created with the greatest returns in mind. In terms of operating systems, Windows typically gets a bad rap for security—the price of popularity, as it were. But as other OS platforms have whittled down Windows' market share in recent years, cyber attackers have had an increasingly broad playing field for exploitation.
If you're one of its 140 million cardholders around the globe, American Express wants you to know that your data is safe. The data breach recently announced by the U.S.' second largest credit card network reportedly involved a partner merchant and not Amex itself. However, if you're one of the customers whose credit card and personal information was stolen, the difference is negligible.
First circulated in 2009, the CIS Critical Controls are used by both the U.S. and U.K. governments as the preeminent framework for securing critical infrastructures. Consisting of 20 security controls that cover areas from malware defense to incident response and management, the CIS Critical Controls offer a prioritized set of security measures for assessing and improving a firm's security posture. Though not a cybersecurity panacea, the controls help to address the vast majority of security issues faced by organizations today.
Amazon.com suffered a glitch today leaving its website inaccessible for approximately 13 minutes. Seem like a paltry number? Only if these lost minutes aren't translated to sales revenue losses. And while outages with the company's AWS cloud computing offering are not uncommon, Amazon's online retail division—as well as all retailers that transact online—have much at stake literally every minute their websites stay up—or go down.
Chances are you’ve browsed to an online IT community looking for information about a technology. But taking full advantage of them means understanding how they work and what they can do for you. Interaction with a tech community usually happens for one of three reasons:
Fortune recently published an article listing the airlines with the best in-flight wifi service. Coming in at the top of the list with the most onboard wifi connections globally were 3 American carriers: Delta, United, and American Airlines, respectively. But what defines best? Security is clearly not part of the equation, as one journalist famously discovered last week on a domestic American Airlines flight. But then again, if we're talking about wifi and commercial aircraft, all airlines get a failing grade.
When we think of protecting our information online, it’s usually in the context of traditionally sensitive data: credit card numbers, addresses, SSNs, and so on. But as anyone who has taken a picture of themselves wearing nothing but a smile can tell you, the information exchanged during online dating can be just as personal. I haven’t done that, though. Ever. I have never done it.
The answer is simple: because it's highly profitable. Credit card numbers are still the best we've got for transacting digitally and health records are 10 times more valuable on the black market. And despite efforts from the infosec community at large, cybercrime continues to increase in frequency and severity. The more important and difficult question is not why, but how—that is, how can companies not just survive, but thrive in a landscape of digital threats?
With the rate of data breaches increasing along with the complexity of modern IT infrastructures, the cyber insurance industry has been experiencing significant growing pains. Cyber risk determination had historically been done with employee surveys or contextual information about industries at large. Without reliable data on an organization’s actual working state, many insurers came to realize there was no way to formulate a fair and accurate cyber insurance policy, especially for more complex and ever-changing IT environments.
From day one at UpGuard, we have been all about visibility. Before you can automate, validate desired changes, or detect unwanted ones, you must first know what your infrastructure looks like; you must have a starting spot. We take the same approach to assessing cyber risk.
For as much as "cyber risk" sounds like a 1990s board game involving robots, cyber risk is actually serious business—in fact, it is continually becoming more important as organizations old and new find themselves relying on a variety of connected technologies and services. And as we enter 2016, the risk of data breaches in particular threatens to hamper business innovation. So what is cyber risk, and what can be done about it?
In what is being described as a landmark case, Nevada-based casino operator Affinity Gaming is suing cybersecurity firm Trustwave for inadequately investigating and containing a 2014 data breach. The lawsuit not only marks the first time a security firm has been sued over post-breach remediation efforts—it also highlights the complexities around managing cyber risk for high-risk organizations in today's threat landscape.
Call it an experiment gone wrong: a bug in a test feature of the OpenSSH client was found to be highly vulnerable to exploitation today, potentially leaking cryptographic keys to malicious attackers. First discovered and announced by the Qualys Security Team, the vulnerability affects OpenSSH versions 5.4 through 7.1. Here's what you need to know about the bug, including remediation tips.
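As a first pass at remediation, a minimal sketch along these lines can flag clients that haven't disabled the roaming feature; the config path is the usual Linux default (an assumption, so adapt for your platform), and the 'UseRoaming no' line is the widely published mitigation:

```python
# Check whether the OpenSSH client config disables the experimental
# roaming code behind CVE-2016-0777/0778. The path is the common Linux
# default (an assumption); 'UseRoaming no' is the published mitigation.
from pathlib import Path

CONFIG = Path("/etc/ssh/ssh_config")  # client config, not sshd_config

def roaming_disabled(path: Path) -> bool:
    if not path.exists():
        return False
    for line in path.read_text().splitlines():
        parts = line.strip().split()
        if len(parts) >= 2 and parts[0].lower() == "useroaming":
            return parts[1].lower() == "no"
    return False

if not roaming_disabled(CONFIG):
    print(f"Add 'UseRoaming no' to {CONFIG} (or to ~/.ssh/config)")
```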
Yes, it's that time of the year again. Time for global electronics vendors and eager enthusiasts from far and wide to converge at the world's largest annual consumer electronics/technology tradeshow. CES 2016 is in full swing, and IoT innovations have unsurprisingly taken center stage once again. Of course, who can forget the debut of Samsung's "Smart" Fridge at last year's show, followed by the publicized hacking of the device soon thereafter. Judging by this year's exhibitor turnout, consumers can expect to see more hacked IoT devices making headlines in 2016. The following are the top 7 hackable IoT devices to watch out for at CES this year.
2015 may have come and gone, but the effects of last year's data breaches are far-reaching, both for millions of consumers and internet users and for the companies and organizations whose systems were breached. Such events are no less devastating in terms of brand damage, and 2016 will undoubtedly bring a heightened collective security awareness among organizations and consumers alike.
It's been barely a month since the VTech data breach resulted in the theft of over 6.4 million children's records, and yet another massive compromise affecting kids' data privacy is upon us—this time involving venerable children's toy and accessory brand Sanrio (of Hello Kitty fame). The data leak resulted in the exposure of details from more than 3 million user accounts: first/last names, birth dates, genders, countries, and email addresses, all openly available to the public. With children becoming prime targets for cyber criminals seeking low hanging fruit, companies that deal with and manage minors' data are increasingly under pressure to bolster their security controls and practices.
Methodologies and frameworks may come and go, but at the end of the day—tools are what make the IT world go 'round. DevOps is no exception: as the term/practice/movement/[insert-your-descriptor-here] rounds its 6th year since entering public IT vernacular, a bounty of so-called DevOps tools have emerged for bridging development and operations, ostensibly to maximize collaborative efficiencies in the IT and service delivery lifecycle. Consequently, a common issue these days is not a dearth of competent tools, but how to integrate available tooling into one cohesive toolchain.
Advertising-based revenue models may be a standard facet of today's internet businesses, but firms peddling free/freemium services are still on the hook for providing strong information security to their user bases. In fact, they arguably have an even greater responsibility to protect user data than paid-for services. So how do events like yesterday's massive data breach involving free web hosting provider 000webhost transpire? In a word, negligence. Gross negligence, to be precise.
The Network Time Protocol (NTP) has been seeing quite a bit of publicity this year, starting with the NTP Leap Second Bug in June promising—but greatly under-delivering—digital calamity of Y2K proportions. Ultimately, the fallout amounted to little more than sporadic Twitter interruptions, but last week newly discovered critical vulnerabilities in the timeworn clock synchronization protocol increased the urgency of recent NTP-hardening projects like NTPSec.
It's practically a national tradition that Americans collectively spend about one year out of every four obsessing over the group of people who are in the running for a job which is undoubtedly awful to actually have. Every part of their campaign is put under heavy scrutiny—their clothes, their hair, their past, their associations—and today, their websites. Let's examine how candidates are faring online using data from tools such as BuiltWith, Alexa, Google and Twitter.
Known vulnerability assessment—evaluating a machine's state for the presence of files, packages, configuration settings, etc. that are known to be exploitable—is a solved problem. There are nationally maintained databases of vulnerabilities and freely available repositories of tests for their presence. Search for "free vulnerability scanner" and you'll see plenty of options. So why are breaches due to known vulnerabilities still so common? Why, according to the Verizon Data Breach Investigations Report, were 99.9% of the vulnerabilities exploited in data breaches last year over a year old?
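To make the "solved problem" concrete: at its core, known-vulnerability matching is just an inventory joined against a feed. The packages, versions, and feed below are hypothetical toy data; real scanners use CVE/NVD data and proper version-range logic:

```python
# Toy illustration of known-vulnerability assessment: compare an
# installed-package inventory against a (hypothetical) feed of
# vulnerable versions.
installed = {"openssl": "1.0.1e", "bash": "4.2.45", "nginx": "1.9.5"}

# Hypothetical feed: package -> versions known to be exploitable
known_vulnerable = {
    "openssl": {"1.0.1e", "1.0.1f"},  # e.g. Heartbleed-era builds
    "bash": {"4.2.45"},               # e.g. Shellshock-era builds
}

for pkg, version in installed.items():
    if version in known_vulnerable.get(pkg, set()):
        print(f"VULNERABLE: {pkg} {version}")
```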
UpGuard's core functionality solves a really basic problem—how is everything configured, and is it all the same across like nodes?—by scanning configuration state and visualizing anomalies. We're pretty happy with how we've solved that problem, so we've started expanding to other fundamental problems that deserve elegant solutions. One of those is vulnerability management. Sure, there are ways to detect vulnerabilities today, but they suck to use and are overpriced. Since we have the core architecture in place to scan and evaluate machine state, testing for vulnerabilities is a natural addition.
Though the widely publicized failure of the ObamaCare website (a.k.a. Healthcare.gov) back in October of 2013 has all but faded from memory, the public sector’s persistent lag in technological innovation coupled with recent calamitous data breaches means there is no shortage of press fodder for critics. What will it take for the U.S. government to transcend its current dearth of agility and innovation?
By now, news of the Experian/T-Mobile hack has traveled far and wide, stirring up public ire and prompting demands for a broader investigation around the data breach. And while the event is just one of many high profile compromises to make headlines lately, it stands out from the rest for a number of reasons. How does the rising tide of cyber threats impact consumers in a world that revolves so heavily around credit?
Though still a relatively new player on the market, group messaging upstart Slack has steadily expanded its footprint into the business and enterprise arena with its polished, streamlined offering for team collaboration. For the uninitiated, Slack is essentially a tool for collaborating amongst teams—chat rooms on steroids, if you will. And as with UpGuard, Slack’s integration capabilities are among its most lauded features. When used in conjunction with each other, the two can give organizations a highly effective feedback loop for staying on top of system/configuration changes and vulnerabilities.
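To illustrate one half of that feedback loop, here's a minimal sketch that posts an alert into a Slack channel through an incoming webhook; the webhook URL is a placeholder you'd generate in Slack's integration settings, and the drift message is invented:

```python
# Post a configuration-change alert to Slack via an incoming webhook.
# The webhook URL is a placeholder; 'requests' is a third-party package
# (pip install requests).
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify(text: str) -> None:
    resp = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()

notify("Drift detected on web-01: ssl_protocols changed since last scan")
```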
Technology professionals walk a perpetual tightrope between innovation and security—new computing paradigms emerge and IT security scrambles to catch up. Nowhere is this more evident than in cloud computing and the rising frequency of data breaches targeting cloud infrastructures. And as computing enters another transitional epoch—namely the age of the Internet of Things (IoT)—similar challenges are emerging, but with much more at stake this time around.
A rising concern amongst IT professionals is the degree to which security vendors and products are themselves susceptible to compromises. This past weekend critical flaws were discovered in the products of not one, but two leading security vendors: FireEye and Kaspersky Labs. Because all systems are exploitable—even security products—a layered approach to security is crucial for maintaining a strong security posture in today’s cyber landscape. Enterprises heavily reliant on a single monolithic solution are best advised to diversify their security strategies to combat ongoing threats.
For those still holding out for a better alternative to SSL, it’s time to give up the ghost. Though implementations like OpenSSL have seen many a vulnerability as of late, the protocol remains the best ubiquitous technology we have for end-to-end encryption. And with Google’s announcement last year regarding SSL’s impact on a website’s search rankings, the question stands: why are so many organizations still holding out on implementing SSL site-wide?
More than ever, UpGuard provides the ability to know how your environments are changing and to identify the deviations that increase your risk for failed change, outages, and security incidents. Here we quickly cover how UpGuard addresses the needs that every IT organization has through visualizations that allow you to start solving your problems today.
In a news flash buried beneath a slew of other notable security news items, UCLA Health revealed last week it was the victim of a massive data breach that left 4.5 million patient records compromised. Like previous attacks on Anthem and Premera Blue Cross, the intrusion gave hackers access to highly sensitive information: patient names, addresses, dates of birth, social security numbers, medical conditions, and more. And while matters around healthcare IT have taken center stage as of late, the ineffective security at leading institutions of higher education and research is equally distressing.
For those of you harboring secrets behind a website paywall, a word of warning: your skeletons are now easy targets for cyber criminals and nefarious 3rd parties around the globe. The recent data breach and compromise of 3.5 million Ashley Madison user accounts may turn out to be the largest case of broad-scale extortion the world has ever seen, but for many—the outcome is hardly surprising.
The OpenSSL Project Team announced a high-severity bug in their open source implementation of SSL today that could allow the bypassing of checks on untrusted certificates (read: man-in-the-middle attacks). Find out which versions of OpenSSL are impacted, and what you need to do to patch this critical vulnerability.
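For a quick, rough check of which OpenSSL build your Python runtime links against, a sketch like this can help; the affected/fixed version list follows the advisory as commonly reported, so verify against the official OpenSSL announcement before acting on it:

```python
# Print the OpenSSL build Python links against and flag releases
# reported as affected by the certificate-verification bypass
# (CVE-2015-1793); fixed builds are 1.0.1p / 1.0.2d. Verify against
# the official advisory.
import ssl

AFFECTED = ("1.0.1n", "1.0.1o", "1.0.2b", "1.0.2c")

version = ssl.OPENSSL_VERSION  # e.g. 'OpenSSL 1.0.2c 12 Jun 2015'
print(version)
if any(v in version for v in AFFECTED):
    print("Potentially vulnerable build; upgrade to 1.0.1p/1.0.2d or later.")
```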
For those of you planning on enjoying the sunset on June 30, 2015—an extra second of bliss awaits, compliments of the Earth’s inconsistent wobble. However, if Y2K sent you running for the hills, start packing again. Analysts predict technological fallout ranging from undeliverable tweets to outright digital armageddon, but for faithful IT folks with more grounded concerns like SLAs and business continuity, keeping critical systems up and running trumps all other concerns. Fortunately, resolving potential issues related to the Leap Second Bug is a fairly straightforward matter—as long as you know what to look for and where to find it.
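One place to look: your upstream time source advertises the pending insertion via the NTP leap-indicator bits. This minimal sketch uses the third-party ntplib package, and the pool server is a placeholder:

```python
# Query an NTP server's leap indicator with the third-party 'ntplib'
# package (pip install ntplib). 0 = no warning; 1 = an extra second
# will be inserted at the end of the current month.
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)  # placeholder server
print("leap indicator:", response.leap)
```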
Full stack development is all the rage these days, and for good reason: developers with both front-end web development skills and back-end/server coding prowess clearly offer substantially more value to their respective organizations. The ability to traverse the entire stack competently also makes interacting and cooperating with operations and security an easier affair—a key tenet of DevOps culture.
The question is indeed a contentious one, never failing to incite heated arguments from all camps. Many ways exist to cut the cake in this regard—WhiteHat Security took a stab at it in a recent edition of its Website Security Statistics Report, where it analyzed statistics around web programming languages and their comparative strengths in security.
When it comes to IT security, how do you roll? Many tools exist, but the fact is that in most cases, to do it right—you have to roll your own. This is especially true in today’s environments, where infrastructures can vary widely in composition from organization to organization. The truth is that factors such as degree of DevOps and Agile adoption, skill set of IT staff, corporate culture, and even line of business come into play when crafting a security solution for an organization. How well these tools align with the organization ultimately dictates the success or failure of a company’s security architecture. And when existing tools don’t fit or don’t work well, sometimes the only option is to build them yourself.
Home Depot. Target. Neiman Marcus. Albertsons. Michaels. Most Americans have shopped at one of these national chains recently. If you’re one of them, your credit card information may already be on the black market. And if you’re a retailer using a POS system, proposed legislation like the Consumer Privacy Protection Act may hold you financially accountable in the event of a data breach. Here’s the skinny on RAM scraping, and what can be done to prevent it.
Every year, Verizon compiles data from a list of prominent contributors for its annual report highlighting trends and statistics around data breaches and intrusions from the past year. The 70-page Data Breach Investigations Report (DBIR) covers a myriad of data points related to victim demographics, breach trends, attack types, and more. Reviewing these shifting security trends can give indications as to how well-postured one’s organization is against future threats. And just in case you’ve got your hands full patching server vulnerabilities, we’ve done the legwork of expanding on a few key points from the report.
The Ponemon Institute just released some unsurprisingly bleak findings in its annual study on healthcare data privacy/security, including data showing deliberate criminal attacks now accounting for most medical data breaches. The report goes on to illustrate how the healthcare industry— sitting on a treasure trove of valuable data— is ill-equipped to counter these attacks. Perhaps forward-thinking enterprise healthcare leaders should start considering DevSecOps as a viable strategy for surviving the perils of the information age.
Yesterday, open source content management system (CMS) WordPress made headlines with the announcement of yet another critical zero-day vulnerability. The newly discovered flaw is markedly different from other WordPress vulnerabilities surfacing as of late—in this case, the problem exists in WordPress’ core engine and codebase, rather than 3rd party plugins and extensions. WordPress.org was quick to release a patch to fix the vulnerability and has since advised users to upgrade to WordPress 4.2.1, the latest version of the CMS.
In a widely publicized report released last week titled "FAA Needs a More Comprehensive Approach to Address Cybersecurity As Agency Transitions to NextGen," the US Government Accountability Office (GAO) details the potential vulnerabilities and dangers of offering in-flight wifi services during air transit. By essentially granting customers IP networking capabilities for their devices, airlines may be opening up their avionics systems for attacks:
The fate of CSO John in The Phoenix Project is a good parable for illustrating the dynamic and often conflicted relationship between Security and IT Operations. Security can either become a separate, obscure, and increasingly irrelevant group that everyone else resents–sounds pretty good, huh?–or it can be integrated into the broader framework of the development cycle. John goes through a mental breakdown before finally understanding how to adapt and survive, but it doesn't have to be that hard.
As a group of concepts, DevOps has converged on several prominent themes including continuous software delivery, automation, and configuration management (CM). These integral pieces often form the pillars of an organization’s DevOps efforts, even as other bigger pieces like overarching best practices and guidelines are still being tried and tested. Being that DevOps is a relatively new paradigm - movement - methodology - [insert your own label here], standards around it have yet to be codified and set in stone. Organizations are left to identify tools and approaches most suitable for their use cases, and will either swear by or disparage them depending on their level of success.
Sarbanes-Oxley (SOX) compliance—it’s like checking for holes in your favorite pair of socks, but with consequences beyond public embarrassment. For publicly traded companies, the ordeal is a bit like income tax preparation for the rest of us: a painful, time-consuming evil that—if not carried out judiciously—may result in penalties and fines. Throw in an additional bonus of prison time for good measure, if you’re a C-level executive and discrepancies are found on your watch. Yes, the SEC is serious about SOX compliance, and you should be, too—especially if you’re in IT.
Audits are one of life’s greatest pleasures, right up there with root canals and childbirth. Firms love them, too: tax audits, financial audits, records audits, and compliance audits all make life splendid for businesses. Unfortunately, compliance is an unwieldy but necessary evil; that is, unless you’re America’s 2nd biggest health insurer.
We rewrote the UpGuard agent as a connection manager to reap the benefits of agentless monitoring. Why get rid of agents? Because agents must be updated. They are like a free puppy–it's easy to take them home but you have to feed them, take them to the vet, and clean up after them for years afterward. The new connection manager allows for an agentless architecture while keeping all SSH/WinRM activity behind your firewall. It's fast, light, easy to maintain, and secure.
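For a sense of what "agentless over WinRM" looks like in practice, here's a minimal hedged sketch using the third-party pywinrm package; it is not UpGuard's connection manager, and the host and credentials are placeholders:

```python
# Run a read-only command on a Windows host over WinRM with the
# third-party 'pywinrm' package (pip install pywinrm). Host and
# credentials are placeholders; this is not UpGuard's implementation.
import winrm

session = winrm.Session("windows-host.example.com", auth=("admin", "password"))
result = session.run_cmd("ipconfig", ["/all"])
print(result.status_code)
print(result.std_out.decode(errors="replace"))
```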
We know you're sick of updating OpenSSL so we'll keep this short. There is a new SSL vulnerability named FREAK with a published proof of concept. FREAK affects a significant portion of websites, including big names like American Express and the NSA. Like POODLE, FREAK takes advantage of support for legacy cryptographic protocols.
In Part 1 of this article, we presented an overview of Amazon AWS and UpGuard, and discussed how the two marry the best in cloud computing and DevOps. We also learned how UpGuard is not just the premier solution for configuration monitoring, control, and automation of AWS offerings like EC2 and S3, but also works with any number of RESTful services. But enough waxing philosophical—time to put theory into action. And what better way than to follow a fictional organization as it sets up UpGuard monitoring for its AWS cloud infrastructure?
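As a taste of the AWS side of that setup, here's a small sketch (not part of UpGuard) that pulls a basic EC2 inventory with boto3; the region is an assumption, and AWS credentials are expected in your environment. This is the kind of REST-backed state a configuration monitoring tool would track:

```python
# Pull a basic EC2 inventory with boto3 (pip install boto3). Region is
# an assumption; AWS credentials are expected in your environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["InstanceType"], inst["State"]["Name"])
```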
Over the years, Amazon has become the poster child for all things cloud-related, and for good reason: as one of the initial vendors to embrace the cloud computing paradigm, Amazon was the first to offer widely accessible commercial cloud infrastructure services when it launched EC2 and S3 as part of AWS back in 2006. And now, almost a decade later, the tech giant continues to dominate with a 27% share of the cloud services market. It's therefore not surprising that for many, Amazon comes to mind first when thinking of cloud computing.
When we set out to create a cloud-based tool for configuration monitoring, we used the tools we knew and wrote UpGuard using JRuby. For our application, JRuby had many good qualities: getting started only required a one line install, the agent only needed to talk out on port 443, and it was platform agnostic. Using JRuby we demonstrated the value of system visibility, attracted our first cohort of customers, and raised the funds to expand UpGuard. Now we're not only scrapping that agent, we're moving away from agent-based architecture altogether. Here's why.
UpGuard was initially designed to solve the problems we faced every day in the world of enterprise IT. Technical debt, documentation rot, and configuration drift consumed untold hours of our lives. UpGuard was designed to make those problems a thing of the past.
Email is a mission-critical application, relied on day to day to power business communication and collaboration. It is a vital component of modern business, and being able to send and receive email securely and reliably is of paramount importance. If you were to make a list of applications whose configuration changes should be tracked and controlled, email would be at the top of that list.
Today we're proud to show off one of UpGuard's newest features: support for your CloudFlare-powered website. As a next-generation CDN (Content Delivery Network), CloudFlare purports to make your site faster to load, optimize your content, provide a swathe of ridiculously powerful and easy-to-understand security mechanisms, offer exclusive analytics insights, and even run an app marketplace. To give you an idea of just how big this Cisco combatant has become: as of 2016, CloudFlare delivers over 1 trillion page views per month, has at least half a million customers, and claims to have protected those customers from hundreds of billions of incidents. Adding your CloudFlare site to UpGuard is easy and enables you to discover, track, and control all of your CloudFlare DNS and Zone configuration settings, including A, CNAME, MX, and SPF records.
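Under the hood, those DNS records are all reachable through CloudFlare's v4 REST API. Here's a hedged sketch of listing a zone's records with legacy API-key auth; the email, key, and zone ID are placeholders:

```python
# List DNS records for a zone via CloudFlare's v4 API using legacy
# API-key auth. Email, key, and zone ID are placeholders; requires the
# third-party 'requests' library (pip install requests).
import requests

API = "https://api.cloudflare.com/client/v4"
HEADERS = {
    "X-Auth-Email": "you@example.com",  # placeholder
    "X-Auth-Key": "YOUR_API_KEY",       # placeholder
}
ZONE_ID = "YOUR_ZONE_ID"                # placeholder

resp = requests.get(f"{API}/zones/{ZONE_ID}/dns_records", headers=HEADERS, timeout=10)
resp.raise_for_status()
for record in resp.json()["result"]:
    print(record["type"], record["name"], record["content"])
```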
Having just started working for UpGuard as a software engineer, my journey understanding UpGuard and its place in the IT automation ecosystem is just beginning. This places me in a unique position to provide a series of blog posts that start from the ground up in getting started with UpGuard. Today we'll work through the steps required to connect to and scan an Ubuntu Linux server using SSH.
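As a preview of the sort of SSH plumbing involved, here's a minimal sketch using the third-party paramiko library; the hostname, username, and key path are placeholders, and a real scan would collect far more state:

```python
# Connect to an Ubuntu host over SSH with the third-party 'paramiko'
# library (pip install paramiko) and collect a couple of configuration
# facts. Hostname, username, and key path are placeholders.
import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "ubuntu-host.example.com",
    username="ubuntu",
    key_filename=os.path.expanduser("~/.ssh/id_rsa"),
)

for cmd in ("lsb_release -d",
            "dpkg-query -W -f='${Package} ${Version}\\n' openssh-server"):
    _, stdout, _ = client.exec_command(cmd)
    print(stdout.read().decode().strip())

client.close()
```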
If you’re working with IIS then you know that preventing configuration drift is as important as it is time consuming. In the best case scenario you’re monitoring configs daily to keep development, testing, and deployment running smoothly. In the worst case—well, all-nighters make good war stories but aren’t much fun. A proactive approach is much better: UpGuard automates configuration testing at scale, to find out whether your IIS servers are hardened and as expected. As an example of what we do, here are the top five critical configuration problems we see on IIS servers and how we fix them.
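To make the flavor of these checks concrete, here's a small hedged sketch (not UpGuard's implementation) that audits one common hardening item, directory browsing, with the appcmd utility that ships with IIS:

```python
# Audit one common IIS hardening item with appcmd (ships with IIS);
# run on the server itself. Checks whether directory browsing is
# enabled in the default configuration.
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

out = subprocess.run(
    [APPCMD, "list", "config", "/section:system.webServer/directoryBrowse"],
    capture_output=True, text=True, check=True,
).stdout

if 'enabled="true"' in out.lower():
    print("Directory browsing is enabled; consider disabling it.")
else:
    print("Directory browsing appears disabled.")
```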
It's a topic that comes up frequently for us here at UpGuard. Our customers are always keen to know how they can take control of and simplify their configuration management processes. We've all, at some time or another, experienced an issue caused by a database migration that didn't complete, a column that mysteriously changed data type, or an old version of a stored proc or view being restored to a new database.
I was perusing Twitter-land recently and ran across a tweet about a DevOps meetup underway in the Los Angeles area. It noted that the first question posed to the entire group was: what are the minimum requirements for DevOps? Huh?!
There's an old idea in Hollywood— if you can't pitch an idea in one sentence, it's too complicated. The term "DevOps" is about 5 years old, and the community still has no consensus on what that word really means, even though it's full of thought leaders who'll claim to be able to tell you.
DevOps is a relatively new concept in comparison to Agile development, so it should come as little surprise that IT enterprises have a myriad of experiences with and instances of Agile approaches. And there is no need to throw everything out and start over - Agile and DevOps are complementary. But what if, after careful deliberation inside your enterprise, you've decided to evolve from Agile to DevOps? How can you ensure that you keep all the good things Agile provided yet apply some of the learnings from the early adopters of DevOps principles? Building a DevOps state of mind requires more than giving developers root, installing a configuration management tool, using a source code repository, and proclaiming ‘yes, we’re a DevOps shop.’ At the end of the day, all aspects of the people, process, and technology continuums are impacted by DevOps. Here are 5 key steps to work through when implementing DevOps in an IT enterprise where Agile rules:
The rise of DevOps teams is upon us. The most recent State of DevOps survey found that 16% of respondents were part of a DevOps department with 55% of respondents self-identifying as DevOps engineers or systems engineers. Interesting. And if you simply Google ‘DevOps jobs’ you get over 4.5 million hits. So like it or not, this DevOps thing is going mainstream.
Most leading IT enterprises have some form of Agile development in place. Accordingly, many organizations, websites, blogs, and companies exist to provide information about and support for Agile development. Here is a list of 10 key online resources to support your Agile journey.
Puppet Labs just released the 2014 State of DevOps Report. The research team surveyed companies of various sizes from multiple industries, from startups to global firms with over 10,000 employees, drawing over 9,200 respondents in all. The report shows us that not only is DevOps working within the enterprise, it is also driving higher employee satisfaction.
It goes without saying that automation in the enterprise is critical to keeping up with today’s dynamic business demands. Unfortunately, automation isn't a set-it-and-forget-it process. You need to carefully monitor the environment to know exactly how much to automate and when to adjust for environment changes. To exacerbate the issue, the concept of DevOps is still confusing to many, and some still inappropriately equate DevOps with automation. But that isn’t stopping leading enterprises from creating automation initiatives, spinning up DevOps skunkworks projects, and naming whole teams DevOps for the sake of it.
For the past 3 months I've been publishing a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable. As much information as is contained here, the reality remains that teamwork ultimately comes down to practicing a small set of principles over a long period of time. DevOps success is not a matter of mastering subtle, sophisticated theory, but rather embracing common sense with uncommon levels of discipline and persistence.
This is the fifth in a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable.
It is almost summertime, so it's time to dust off your reading material and cozy up with a good book. I recently asked the expert panel from our most recent DevOps webcast for the number one resource they would recommend to a friend looking to brush up on the ins and outs of Enterprise DevOps. In truth, they had a hard time narrowing it down to just a few. But if you're looking to stock your bookshelf with all things DevOps, then you can't go wrong with this Top DevOps Reading List.
As has been said many times: DevOps is not a technical problem, it is a business problem. The struggle for large, entrenched Enterprise IT shops can't be underestimated, and the legacy factor has to be dealt with (a.k.a. why fix something that isn't broken). However, there is mounting evidence to suggest that independent, discrete teams are in fact becoming more common in these large Enterprises. While the fully-embedded model (sometimes called NoOps because there is no visible/distinct Ops team) that the unicorns have deployed works for them, a more discrete team to learn how to 'do DevOps' makes a lot of sense for the larger Enterprise.
As an IT manager or engineer, you can sometimes get so deep in the details that it becomes challenging to step back and answer the fundamental questions. Sure, you wrote the scripts that automate your systems. You train users to understand the tools that implement configuration management. Yet you still struggle to answer why your business should have configuration management teams, automation, and tools. Don't worry: if this has ever happened to you, remember that you’re not alone.
We received a lot of positive feedback on our last article on Controlling SQL Configuration Drift, so we thought it might be a good idea to continue along that same theme of analysis and follow it up with an article about DNS configuration and some simple steps you can take to prevent configuration drift.
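As a flavor of the simplest possible check, here's a toy standard-library sketch that compares what a resolver currently returns against a recorded baseline; the hostname and addresses are placeholders:

```python
# Compare the A records a resolver currently returns against an
# expected baseline, standard library only. Hostname and addresses
# are placeholders.
import socket

EXPECTED = {"www.example.com": {"93.184.216.34"}}  # baseline snapshot

for host, want in EXPECTED.items():
    _, _, got = socket.gethostbyname_ex(host)
    if set(got) != want:
        print(f"DRIFT: {host} -> {sorted(got)}, expected {sorted(want)}")
```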
This is the fourth in a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable.
Many large enterprises over the last decade made a deliberate shift to an agile development process as a response to the ever-changing demands of the business. One key tenet of the agile development process is to deliver working software in smaller and more frequent increments, as opposed to the “big bang” approach of the waterfall method. This is most evident in the agile goal of having potentially shippable features at the end of each sprint.
If you do not feel you have a good handle on all the ways DevOps can benefit your enterprise and bring a positive return on investment, you are not alone. While the concept of DevOps dates back to 2009 (prehistoric times in our world!), the procedures and tools that facilitate it are still evolving. As has been discussed countless times - DevOps is not something you buy, it is something you do. And in order to 'do DevOps' you need to connect it to your business in a meaningful way to ensure long-term success. But let's pretend for a moment (shouldn't be hard to imagine) that your non-technical resources / upper-level management is holding out on making any changes that bring you closer to the DevOps principles of collaboration, culture and communication. How do you get them to invest in DevOps in your enterprise?
Trying to explain the concept of Configuration Management to those who do not understand its efficacy is like explaining surfing to an Inuit: it is simply not an inherent part of their culture. Without question, the benefits of Configuration Management can be challenging for the uninformed to grasp. One of the best ways to understand the benefits and use cases is to learn from other enterprises' experiences.
Controlling database configuration drift is a tricky subject, and one that comes up frequently for us here at UpGuard; customers are always keen to know how they can take control of and simplify their configuration management processes. We've all, at some time or another, experienced an issue caused by a database migration that didn't complete, a column that mysteriously changed data type, or an old version of a stored proc or view being restored to a new database.
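To show the general idea rather than any particular product, here's a minimal schema-fingerprinting sketch using the standard library's sqlite3; the database paths are placeholders, and the same approach translates to information_schema queries on server databases:

```python
# Fingerprint each table's DDL and compare snapshots to detect schema
# drift. Uses the standard library's sqlite3; paths are placeholders.
import hashlib
import sqlite3

def schema_fingerprint(db_path: str) -> dict:
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE type='table' ORDER BY name"
    ).fetchall()
    conn.close()
    return {name: hashlib.sha256((sql or "").encode()).hexdigest()
            for name, sql in rows}

baseline = schema_fingerprint("prod_copy.db")  # placeholder paths
current = schema_fingerprint("restored.db")
for table in sorted(set(baseline) | set(current)):
    if baseline.get(table) != current.get(table):
        print(f"DRIFT: {table}")
```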
I have a confession to make. My first job in IT wasn't as a rails developer in a hot startup. It wasn't managing cloud infrastructure. It didn't involve cool open source projects or cutting edge technology. Quite the opposite in fact. My first job was a graduate trainee analyst programmer at an Australian Funds Manager. What was I trained on? ADABAS NATURAL. Yep, I was a mainframe developer.
This is the third in a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable.
Automation. If you're somewhere on the DevOps spectrum then it's surely good for what ails ya. The answer to all your problems. For many it defines their DevOps journey, its destination representing the promised land of stable environments, consistent builds and silent pagers.
Going from nothing to automation using one of the many tools available can be a daunting task. How can you automate systems when you’re not even 100% sure how they’ve been configured? The documentation is months out of date and the last guy to configure anything on that box has since left the company to ply his trade somewhere that will more fully appreciate his Ops cowboy routine.
One of the easiest ways to programmatically build applications into containers with Docker is to use a Dockerfile. Dockerfiles are the Makefiles of the Docker world. A ton of blog posts and tutorials have sprung up over the last few months about how to set up Docker, or how to set up a LAMP stack and even a LEMP stack in Docker.
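For readers who haven't seen one, here's a minimal illustrative Dockerfile for the Apache/PHP tier of a LAMP stack; the base image and package names assume an Ubuntu 14.04-era repository, so adjust for your own stack:

```dockerfile
# Minimal Apache/PHP web tier for a LAMP stack. Package names match
# Ubuntu 14.04-era repositories; adjust for your base image.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2 php5 libapache2-mod-php5
COPY ./site /var/www/html
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]
```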
We've been working with a lot of Windows shops recently and IIS configuration seems to be a big pain point for many enterprises. Other than a brief stint in mainframe purgatory after university, I started life as a .Net developer and these conversations reminded me of my fun with IIS back in the day. In reflecting on this, I realized that the developer/operations interaction around IIS configuration is a near perfect example of the type of conflict that gave birth to the DevOps movement.
DevOps and I sort of have a love/hate relationship. DevOps is near and dear to our heart here at UpGuard and there are plenty of things that I love about it. Love it or hate it, there is little doubt that it is here to stay. I've enjoyed a great deal of success thanks to agile software development and DevOps methods, but here are 10 things I hate about DevOps!
DevOps is a human problem and a leadership problem. Building a DevOps culture requires more than giving developers root, installing a configuration management tool, using a source code repository, and proclaiming ‘yes, we’re a DevOps shop.’ At the end of the day, all aspects of the people, process, and technology continuums are impacted by DevOps. However, there is little doubt that the people aspect has the most to gain (and the biggest challenges) for anyone who is considering, or already on, the journey to becoming a DevOps ninja.
It is no secret that we here at UpGuard love DevOps and we're not ashamed of it. I know that opinions vary as to what exactly DevOps is or isn't, but the more important part of the movement is whether we as individuals want to push the limits of what we thought was impossible only just a few years ago. We've been 'doing DevOps' for some time and have a cautionary tale to tell as well, but we believe that DevOps can be transformational for IT enterprises and advocate for organizations to activate DevOps in their businesses. I know how we all love lists, so here is my Top 10 Things I Love About DevOps:
I always love going into those meeting rooms with different color post-it notes all over the walls, looking like a 3M sales rep threw up everywhere. For the longest time I just considered it one of those strange things R&D did. Then one day I was extremely early for a meeting and actually got to spend some time studying what was cluttered all over the glass wall, and I began to realize there was a definite method to the madness. This Kanban board concept wasn't just for the engineers; it was for everyone to see where work was being performed and its status. I loved the visual nature of it, and the fact that I could get accurate information without reading release notes or technical requirements documents was refreshing.
Gmail is amazing, but it isn't perfect. In both 2014 and 2016, the popular service suffered severe outages.
My mom used to tell me 'quality over quantity' back in high school when I was dating girls. Of course that meant that I completely ignored her and would date a girl if she was breathing. What in the hell would you expect an awkward 17 year old boy to do?! I've heard that same sentence used in lots of other ways too: when writing, when speaking, when eating, when working out, and so on. What does that have to do with DevOps? As I continue on my journey through the DevOps movement, it seems to me that we have a bit of a conflict here - the goal is to release at a higher velocity (quantity) with well tested code (quality). Is this really possible? I know that some of the 'high-performers' like Amazon, Etsy, Flickr and Netflix are proving that it can be accomplished, but I keep wondering if slowing down can actually help us deliver more extraordinary things.
I've been thinking a lot lately about the intersection of DevOps and Information Security. I'm definitely not the first to have considered the implications, but I am undoubtedly a complete cynic when it comes to InfoSec and how it can align itself to the DevOps movement. Why am I a cynic, you may ask? Well, I spent almost 10 years in the security/governance arena interacting with CISOs and their teams, trying to help them 'reduce risk' and 'pass audits', but I've watched countless organizations fail miserably. The main reason? The business fails to see the value of security and doesn't understand it. Better said: the business invests in what the business understands.
We all know that DevOps is the glimmer in our executive's eye, the savior that will solve world hunger, and the most important thing to happen since the wheel was invented. But all joking aside, there is little doubt of the business benefits it can bring to organizations big & small. So now what?! You've decided (or been told) that DevOps is critical to your 2014 success, but where do you start and what are the foundational elements you must work through before claiming victory? Here are 4 prerequisites for DevOps success that you can use as your blueprint to making sure you achieve your business objectives.
Tonight I gave a talk on comparing containers and generating Dockerfiles. Instead of providing the slides, which are pretty lame by themselves, I thought I'd write up the talk in a proper context. UpGuard has a number of use cases; the one highlighted in the talk was migrating the configuration of environments from one location to another. Traditionally we have helped customers scan their configuration state and generate executable tests based on those configuration items, as well as compare scanned configuration across multiple machines.
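One cheap way to compare two containers, in the spirit of the talk, is to diff their installed package lists. This hedged sketch assumes Debian/Ubuntu-based images, and the container names are placeholders:

```python
# Diff the installed package lists of two running containers (assumes
# Debian/Ubuntu-based images and 'docker' on PATH). Container names
# are placeholders.
import subprocess

def packages(container: str) -> set:
    out = subprocess.run(
        ["docker", "exec", container,
         "dpkg-query", "-W", "-f", "${Package} ${Version}\\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.splitlines())

a, b = packages("web-old"), packages("web-new")
print("only in web-old:", sorted(a - b))
print("only in web-new:", sorted(b - a))
```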
At UpGuard we've got many decades of experience in large enterprises and are very familiar with the sorts of problems that arise in those environments. Even for those who have lived through it, though, it can be hard to explain to people who haven't. That's why we require all our new employees to read The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr and George Spafford. It does a great - and surprisingly entertaining - job of describing these issues. It also explains how the lessons learnt from years of Lean Manufacturing apply directly to IT. We know that no tool is a silver bullet, but if the employees at Parts Unlimited had UpGuard then it may have been an entirely different story. I've chosen some key excerpts from the book so that we can see how things might have been different.
What is Quality Assurance? Well, in time-honoured fashion I shall quote directly from Wikipedia: "Quality assurance (QA) refers to the engineering activities implemented in a quality system so that requirements for a product or service will be fulfilled." What does this mean for DevOps, though? The end product is the software or application being provided, so most people focus on its requirements when talking QA and DevOps.
Information Technology Service Management (ITSM) may not have the sex appeal of Agile or the buzz of DevOps, but it lays a crucial foundation for each within the Enterprise today. So, whether you consider it a necessary evil or the only way to run your IT department, here are a few resources that may come in handy.
Most Enterprise CMDB offerings are a joke. They've always been a joke. Just another white elephant system sucking time and money out of IT Budgets. What most, if not all, become are simply inventory systems. They're not even good for that half the time.
As there's a lot of interest out there in the various IT automation tools on offer I thought I'd do a series of blogs covering getting started on each. In particular I wanted to put them to the test regarding how simple it is to go from zero to "Hello World" *. This way I get to play the truly dumb user (not much of a stretch, I know), which is kinda fun too.
You're never safe in Enterprise IT. Just when you feel you've gotten a handle on the last hot topic you're hit with another. SOA, BPM, Agile, ITIL; You feel like screaming "Enough!" but you know resistance is futile. Gartner have said it's important so you know full well that you'll be asked to "do" it by management.
The convergence of IT development and operations into DevOps has come a long way, and yet the two should have grown together like Siamese twins. Developers need sysadmins as much as sysadmins need developers. Collaboration is the way winning software and infrastructure are built. And that's all the market wants: effective systems with which to run businesses. DevOps can claim substantial ground today, thanks to the persistence of players from both sides of the sysadmin-developer divide. While the segment is still evolving, various tools have been developed to help the Devs and the Ops collaborate more effectively.
IT testing automation is an important concern of businesses, and a growing field in which IT professionals are able to make a name for themselves. If you are not already involved in automated IT testing, here are a few of the most important skills to have when holding an automation related position.
It's been really interesting to watch the dramatic uptick in activity around the automation space the last year or two. I don't need to go into too much detail on the benefits that automation offers here; consistency and scalability are two of the more prominent that come to mind. What has struck me, though, is that it feels like the way that companies are going about it is missing a key step.
Today represents the hottest time to be in financial markets - nanosecond response times, the ability to affect global markets in real time, and lucrative spot deals in dark pools being all the rage. For companies who do business in these times, it is a technical arms race, worthy of a Reagan era analogy.
Those who haven't worked in the Enterprise probably don't know a lot about ITIL (Information Technology Infrastructure Library). ITIL may even be a source of amusement for them. C'mon, they would say, how much practical use can you get from a methodology that is defined through a set of books actually referred to as a "library"?
In this blog, we're constantly covering and discussing the concept of DevOps. At this point, most folks in departments related to a company's infrastructure (i.e. Developers, System Administrators) have some understanding of this idea. But where do these people learn about this relatively new and young concept?
DevOps is a concept that has materialized fairly recently, yet is already adored by so many people. Obviously, the fact that it bridges the chasm between software development and operations is pretty exciting, but there seems to be something extra that people love. So without throwing around too many corporate buzzwords (besides “DevOps”, of course), what could that extra something something be?
Software-Defined Networking (SDN) has become a hot topic of late, and with good reason. This technology has the potential to dramatically improve the configuration of networking solutions. Traditionally, network intelligence has been housed in a static fashion, residing in individual routers and switches. This is problematic with today's vast and ever-expanding data pool, with central automation of network management quickly becoming the ideal solution. SDN is an answer to this challenge, and a good one.
Many enterprise network workers are now adopting automation technology as a means of completing operational tasks, and of creating a more efficient environment within an IT enterprise. One of the advantages of adopting IT automation is that it helps to deliver optimal IT management, without the need for any significant capital investment.
One of the best opportunities for networking and keeping up to date on the latest trends in software development, ITIL best practice implementation, innovative methods of handling automation, new methods of tackling ever-expanding configuration drift, and navigating increasingly tricky compliance issues is attending industry conferences. These events are massive productions and are usually broadly focused, so IT professionals focused on automation will need to read between the lines to find a convention well-suited to their interests.
OK, so I probably just closed out 100 games of Bulls**t Bingo in the title of this blog post but I'll stand by it. You want actual agility in what you do? You need a safety net. That safety net is automated testing.
Configuration testing should be an essential step not only in the overall development process, but also in the installation of new apps on web and application servers. Without proper testing, apps can often fail or be open to vulnerabilities. Exposure to attack by hackers or viruses can lead to needless expenses and excessive time spent correcting these problems. It is not unusual for app developers to overlook the need for configuration testing because they think that using automated methods like Chef, Puppet, or other systems to test the deployment of their products will work just fine. They feel that with these fully automated processes they can test consistency, reproduce outputs adequately, and determine whether things are working as predicted. This kind of thinking can delay timely product delivery, produce unnecessary costs, and create additional workloads to address vulnerabilities that surface later in production.
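To show how little ceremony a basic configuration test needs, here's a toy sketch in plain Python that asserts a deployed config file matches expectations; the path, key=value format, and expected settings are all hypothetical:

```python
# Assert a deployed config file matches expected settings before
# promoting a release. Path, key=value format, and expected values
# are all hypothetical.
from pathlib import Path

CONF = Path("/etc/myapp/app.conf")  # hypothetical
EXPECTED = {"max_connections": "200", "debug": "off"}

settings = {}
for line in CONF.read_text().splitlines():
    if "=" in line and not line.lstrip().startswith("#"):
        key, value = line.split("=", 1)
        settings[key.strip()] = value.strip()

for key, want in EXPECTED.items():
    got = settings.get(key)
    assert got == want, f"{key}: expected {want!r}, got {got!r}"
print("configuration OK")
```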
There is no disputing the fact that cloud computing has led to a number of remarkable changes in the way many companies do business. Cloud-based solutions have been instrumental in streamlining IT functions and other business processes, resulting in a considerable savings in terms of time and monetary output.
The Sinkhole That is Manual Configuration Testing: testing is a crucial part of software development. It involves the execution of a program with the goal of locating errors; successful tests uncover errors that can then be corrected before the software is released.
Manually testing environment configurations in the enterprise with scripts is difficult simply because there are so many factors involved: applications, hardware, and device compatibility issues can arise at any point within the implementation, in areas that may be difficult to identify in the pre-implementation stage. Worse yet, the larger the network infrastructure, the more time-consuming and complicated the testing and implementation processes become. This is where Environment Drift and Stateless Systems come into play.
Before delivery to the intended party, a system should be tested to determine whether the requirements set forth in the contract have been met. Configuration acceptance testing is the fundamental means of assuaging doubts that the system will fall short of its intended purposes. It is an essential part of the testing phase of the Software Development Life-Cycle (SDLC), and perhaps the most vital in its category. The way the components of the system interact is the surest means of determining the system's susceptibility to frequent errors and, ultimately, its resilience once implemented. Configuration acceptance testing is pivotal to the SDLC, and as such will be an integral part of any firm's Application Life-cycle Management (ALM) policy. It reveals bugs and inadequacies in the system, aiding error correction and the formulation of a suitable plan of action in the event undiscovered errors manifest after the system has been implemented.
This is a pretty common response we get from people we're explaining our product to. There is logic to it but we don't believe it's necessarily reasonable. To illustrate our viewpoint on this we thought we'd paraphrase a conversation we had with a prospective client recently.
You've used Chef/Puppet to automate your infrastructure; you can provision your virtual environment from scratch and deploy all your applications in minutes. It’s magical. You've achieved Configuration Management Nirvana. What you've built is repeatable, saves time, increases efficiency, and removes human error.
Managing a data center migration? Last week we met with a large cloud provider to discuss how their enterprise customers could use UpGuard Core to accelerate the migration of in-house systems over to their platform.