When we began building a Cyber Risk Research team at UpGuard, we knew there were unavoidable risks. We would be finding and publishing reports on sensitive, exposed data in order to stanch the flow of such private information onto the public internet. It seemed likely the entities involved would not always be pleased, particularly as the majority of the exposures we discovered would be attributable to human error and/or internal process failures. As a team, however, we believed those were risks worth taking. By performing the public good of securing exposures and raising awareness of this problem, UpGuard could help spur long-term improvements in the face of a growing data security epidemic.
The European Union’s GDPR regulations go into effect in May of this year. In essence, GDPR is a strict data privacy code that holds companies responsible for securing the data they store and process. Although GDPR was approved in April 2016, companies affected by the regulations are still struggling to reach compliance by the May 2018 deadline. A lot of hype has built up around this systemic unpreparedness, especially in the cybersecurity sector, where GDPR is seen as “the coming storm.” Despite this atmosphere, the main challenge facing GDPR-covered entities remains largely hidden: third-party vendors.
One of the challenges in mitigating third-party risk is effectively managing large portfolios of vendors. Businesses often have hundreds or thousands of suppliers, each used differently and each presenting different kinds of information security risks. To solve this problem, CyberRisk uses a common pattern found in email clients and productivity software, like Gmail and JIRA, to label vendors in the way that makes sense to you.
Information technology has changed the way people do business. For better, it has brought speed, scale, and functionality to all aspects of commerce and communication. For worse, it has brought the risks of data exposure, breach, and outage. The damage that can be done to a business through its technology is known as cyber risk, and with the increasing consequences of such incidents, managing cyber risk, especially among third parties, is fast becoming a critical aspect of any organization. The specialized nature of cyber risk requires the translation of technical details into business terms. Security ratings and cyber risk assessments serve this purpose, much like a credit score does for assessing the risk of a loan. But the methodologies employed by solutions in this space vary greatly, as do their results.
Meltdown/Spectre Overview

Meltdown and Spectre are critical vulnerabilities affecting a large swathe of processors: “effectively every [Intel] processor since 1995 (except Intel Itanium and Intel Atom before 2013),” as meltdownattack.com puts it. Some ARM and AMD processors are susceptible to related attacks, primarily Spectre, though they are much less at risk than the affected Intel hardware. Exploiting Meltdown allows attackers to read memory belonging to other programs and the operating system, including secrets like passwords and encryption keys.
A Worst Case Scenario

This week it was revealed that a severe vulnerability in a majority of processors has existed for nearly ten years, affecting millions of computers around the world, including all the major cloud providers that rely on Intel chips in their data centers. Essentially, this flaw grants complete access to protected memory, including secrets like passwords, from any program on the exploited computer, even from the web. The flaw is so serious that allegations have already been made that Intel’s CEO sold millions of dollars of stock in the company after the flaw was found, but before it was revealed to the public; the idea is that a vulnerability of this magnitude would be enough to substantially hurt Intel on the market, even though it affects some ARM and AMD processors as well.
Microsoft’s enterprise software powers the majority of large environments. Though often hybridized with open source solutions and third-party offerings, the core components of Windows Server, Exchange, and SQL Server form the foundation of many organizations’ data centers. Despite their prevalence in the enterprise, Microsoft systems have also carried a perhaps unfair reputation for insecurity compared to Linux and other enterprise options. But the insecurities exploited in Microsoft software are overwhelmingly caused by misconfigurations and process errors, not flaws in the technology: patches are not applied on a quick and regular cadence; settings are not hardened according to best practices; dangerous defaults are left in place in production; unused modules and services are not disabled and removed. Microsoft has come a long way to bring its out-of-the-box security up to snuff with its famous usability, not to mention introducing command-line and programmatic methods by which to manage its systems. But even now, the careful control necessary to run a secure and reliable data center on any platform can be difficult to maintain at scale, all of the time.
The government of the United States of America is perhaps the largest target on Earth for cyber attacks. The US has plenty of enemies, a track record of perpetrating cyber warfare and espionage (even upon its allies), numerous recent instances of susceptibility to such attacks, countless official documents attesting to its weakness against cyber attacks, and, of course, the US government leads the wealthiest nation with the most powerful military. These facts are not lost on the good people responsible for the well-being of American citizens and people all over the world.
Data is being mishandled. Despite spending billions on cybersecurity solutions, data breaches continue to plague private companies, government organizations, and their vendors. The reason cybersecurity solutions have not mitigated this problem is that the overwhelming majority of data exposure incidents are due to misconfigurations, not cutting edge cyber attacks. These misconfigurations are the result of process errors during data handling, and often leave massive datasets completely exposed to the internet for anyone to stumble across.
GitHub is a popular online code repository used by over 26 million people across the world for personal and enterprise uses. GitHub offers a way for people to collaborate on a distributed code base with powerful versioning, merging, and branching features. GitHub has become a common way to outsource the logistics of managing a code base repository so that teams can focus on the coding itself. But as GitHub has become a de facto standard, even among software companies, it has also become a vector for data breaches— the code stored on GitHub ranges from simple student tests to proprietary corporate software worth millions of dollars. Like any server, network device, database, or other digital surface, GitHub suffers from misconfiguration.
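One common form of GitHub misconfiguration is credentials committed to repositories that later become public. A minimal sketch of a secret scanner is shown below; the pattern list is illustrative, not exhaustive (the `AKIA` prefix for AWS access key IDs and the PEM private-key header are both well-documented formats).

```python
import re

# Illustrative patterns for credentials that commonly end up in public repos.
# AWS access key IDs are "AKIA" followed by 16 uppercase alphanumerics; the
# private-key header is a standard PEM marker.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text):
    """Return a list of (pattern_name, match) tuples found in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Run against each file in a repository (or each commit in its history), a scanner like this catches the most obvious leaks before they reach a public branch; dedicated tools apply many more patterns plus entropy checks.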
Introduction

The Internet Footprint

There is much more to a company’s internet presence than just a website. Even a single website has multiple facets that operate under the surface to provide the functionality users have become accustomed to. The internet footprint for every company comprises all of their websites, registered domains, servers, IP addresses, APIs, DNS records, certificates, vendors, and other third parties: anything that is accessible from the internet. The larger the footprint, the more digital surfaces it contains, the more complex its inner workings, and the more resources it requires to maintain. And although having an internet presence is practically a given these days, the risk incurred by that presence is not always acknowledged.
The Problem of Digitization

The digitization of business has increased the speed of commerce, the scope of customers, the understanding of consumer habits, and the efficiency of operations across the board. It has also increased the risk surface of business, creating new dangers and obstacles for the business itself, not just its technology. This risk is compounded by the interrelations of digital businesses as data handling and technological infrastructure are outsourced, and each third party becomes a vector for breach or exposure for the primary company. The technical nature of this risk makes it inaccessible to those without advanced skills and knowledge, leaving organizations without visibility into an extremely valuable and critical part of the business.
When we think about cyber attacks, we usually think about the malicious actors behind the attacks, the people who profit or gain from exploiting digital vulnerabilities and trafficking sensitive data. In doing so, we can make the mistake of ascribing the same humanity to their methods, thinking of people sitting in front of laptops, typing code into a terminal window. But the reality is both more banal and more dangerous: just like businesses, governments, and other organizations, attackers have begun to index data and automate their processes, and the work of finding and exploiting internet-connected systems is largely performed by computers. There’s no security in obscurity if there’s no obscurity.
Security ratings are like credit ratings, but for the assessment of a company’s web-facing applications. Where a credit rating lets a company determine the risk of lending to a prospective debtor, a security rating lets it determine how risky it would be to entrust another company with its data. The comparison deepens when we remember one of the key principles of cyber resilience: that cyber risk “is actually business risk, and always has been.”
In June of 2017 the U.S. Chamber of Commerce posted the “Principles for Fair and Accurate Security Ratings,” a document supported by a number of organizations interested in the emerging market for measuring cyber risk. The principles provide a starting point for understanding the current state of security ratings and for establishing a shared baseline for assessing vendors in that market.
Guest post by UpGuard engineer Nickolas Littau While running a series of unit tests that make API calls to Amazon Web Services (AWS), I noticed something strange: tests were failing unpredictably. Sometimes all the tests would pass, then on the next run, a few would fail, and the time after that, a different set would fail. The errors I was getting didn’t seem to make any sense:
The way businesses handle the risks posed by their technology is changing. As with anything, adaptability is survivability. When the techniques, methods, and philosophies of the past aren’t working, the time has come to find something better to replace them. Cyber resilience is a set of practices and perspectives that mitigate risk within the processes and workflow of normal operations in order to protect organizations from their own technology and the people who would try to exploit it. This includes all forms of cyber attacks, but also applies to process errors inside the business that put data and assets in danger without outside help.
Technology and Information

How much digital technology is required for your business to operate? Unless this document has traveled back in time, the chances are quite a lot. Now consider how much digital technology your vendors require to operate. The scope of technology grows quickly when you consider how vast the interconnected ecosystem of digital business really is. But digital business isn’t just about technology, it’s about information. For many companies, the information they handle is just as critical as the systems that process it, if not more so.
When we examined the differences between breaches, attacks, hacks, and leaks, it wasn’t just an academic exercise. The way we think about this phenomenon affects the way we react to it. Put plainly: cloud leaks are an operational problem, not a security problem. Cloud leaks are not caused by external actors, but by operational gaps in the day-to-day work of the data handler. The processes by which companies create and maintain cloud storage must account for the risk of public exposure.
Introduction

Previously we introduced the concept of cloud leaks, and then examined how they happen. Now we’ll take a look at why they matter. To understand the consequences of cloud leaks for the organizations involved, we should first take a close look at exactly what it is that’s being leaked. Then we can examine some of the traditional ways information has been exploited, as well as some new and future threats such data exposures pose.
Making Copies

In our first article on cloud leaks, we took a look at what they were and why they should be classified separately from other cyber incidents. To understand how cloud leaks happen and why they are so common, we need to step back and first look at the way the leaked information is generated, manipulated, and used. It’s almost taken as a foregone conclusion that these huge sets of sensitive data exist and that companies are doing something with them, but when you examine the practice of information handling, it becomes clear that organizing a resilient process is quite difficult at scale: operational gaps and process errors lead to vulnerable assets, which in turn lead to cloud leaks.
Breaches, Hacks, Leaks, Attacks

It seems like every day there’s a new incident of customer data exposure. Credit card and bank account numbers; medical records; personally identifiable information (PII) such as address, phone number, or SSN— just about every aspect of social interaction has an informational counterpart, and the social access this information provides to third parties gives many people the feeling that their privacy has been severely violated when it’s exposed.
One of the challenges of building and running information technology systems is solving novel problems. That's where frameworks like scrum and agile come in– getting from the unknown to the known with a minimum of frustration and waste. Another challenge is performing known tasks correctly every single time. Here runbooks, checklists, and documentation are your friends. And yet, despite a crowded market for IT process automation offerings, misconfigurations and missed patches are still a problem– and not just a problem, but the root cause of 75-99% of outages and breaches, depending on the platform.

Executable Documentation
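The idea behind executable documentation can be sketched simply: each runbook item becomes a check function, so the document that describes the desired state can also verify it. The example below is a hypothetical illustration, not UpGuard's implementation; the `sshd_config` check is one arbitrary runbook item.

```python
# A minimal sketch of "executable documentation": a runbook item written as a
# function that checks the state it describes.

def check_root_login_disabled(sshd_config: str) -> bool:
    """Runbook item: 'PermitRootLogin must be set to no in sshd_config'."""
    for line in sshd_config.splitlines():
        stripped = line.strip()
        if stripped.startswith("PermitRootLogin"):
            return stripped.split()[-1].lower() == "no"
    return False  # a missing directive counts as a failed check

def run_checks(checks, context):
    """Run every named check and report pass/fail: a checklist that grades itself."""
    return {name: fn(context) for name, fn in checks.items()}
```

Unlike a wiki page, a checklist in this form cannot silently drift out of date: when the environment stops matching the documentation, the checks fail.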
Nearly all large enterprises use the cloud to host servers, services, or data. Cloud hosted storage, like Amazon's S3, provides operational advantages over traditional computing that allow resources to be automatically distributed across robust and geographically varied servers. However, the cloud is part of the internet, and without proper care, the line separating the two disappears completely in cloud leaks— a major problem when it comes to sensitive information.
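For S3 specifically, the line between "cloud" and "internet" often comes down to a bucket's ACL. A minimal sketch of an exposure check is below: the two grantee group URIs are AWS's documented "global" groups, and the helper inspects the `Grants` list in the shape returned by boto3's `get_bucket_acl` (the bucket name in the usage note is hypothetical).

```python
# The two AWS "global" grantee groups that expose an S3 bucket to everyone on
# the internet (AllUsers) or to any AWS account holder (AuthenticatedUsers).
PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the subset of ACL grants that expose the bucket publicly.

    `grants` is the 'Grants' list as returned by boto3's get_bucket_acl().
    """
    return [
        g for g in grants
        if g.get("Grantee", {}).get("Type") == "Group"
        and g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEE_URIS
    ]
```

With boto3 this would be called as something like `public_grants(s3.get_bucket_acl(Bucket="my-bucket")["Grants"])`; a nonempty result for a bucket holding sensitive data is exactly the kind of operational gap that turns into a cloud leak.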
Given the complexity of modern information technology, assessing cyber risk can quickly become overwhelming. One of the most pragmatic guides comes from the Center for Internet Security (CIS). While CIS provides a comprehensive list of twenty controls, they also provide guidance on the critical steps that "eliminate the vast majority of your organisation's vulnerabilities." These controls are the foundation of any cyber resilience platform and at the center of UpGuard's capabilities.
Global in scale, with across the board press coverage, the WannaCry ransomware attack has quickly gained a reputation as one of the worst cyber incidents in recent memory. Despite the scale, this attack relied on the same tried and true methods as other successful malware: find exposed ports on the Internet, and then exploit known software vulnerabilities. When put that way, the attack loses its mystique. But there’s still a lot we can learn from this incident, and we’ve summed up the five most important takeaways to keep in mind going forward.
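Because WannaCry's first step was simply finding exposed SMB services (TCP port 445) on the internet, the corresponding first-pass audit is equally simple: confirm those ports are not reachable from outside. A minimal sketch (the IP in the usage note is a documentation example, not a real target):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True means the port accepted a connection.

    WannaCry spread through exposed SMB (TCP 445), so checking that ports
    like 445 are unreachable from the internet is a reasonable first audit.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_is_open("203.0.113.10", 445)` returning True from an external network would mean the host is advertising SMB to the same internet-wide scans the attackers used.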
UpGuard makes a cyber resilience platform designed for exactly the realities that necessitate regulations like New York State Department of Financial Services 23 NYCRR 500. On one hand, businesses need to store, process, and maintain availability for growing stores of valuable data; on the other, the very conditions for market success open them to attacks from increasingly sophisticated and motivated attackers. Balancing these requirements makes a business resilient, and UpGuard provides the visibility, analysis, and automation needed to thrive while satisfying regulations like NYCRR 500.
Why dashboards?

Nobody’s perfect. Success is almost always determined through trial and error, learning from mistakes and course-correcting to avoid them in the future. The length of this cycle— from experiment to result, incorporated into future decisions— determines how quickly a trajectory can be altered, which in turn offers more opportunities to succeed. However, capturing and using hard data to make these adjustments is more difficult than it seems. Dashboards visualize real time data and recent trends, giving people insight into whether their efforts are succeeding— assuming they’re using the right metrics.
So I've finally gotten the go-ahead from higher-ups to join the twenty-first century and use cloud hosting. Now I need to prove that running in AWS is not just easier than maintaining our own farm, but more stable and secure. To do this, I need to be able to monitor each of my instances for configuration drift, ensure that they are properly provisioned, and maintain visibility into dependencies like load balancers and security groups. Fortunately, UpGuard provides all of this information, so even if something were to go wrong I could catch it before someone else does.
UpGuard is proud to announce that security expert Chris Vickery is joining our team as a cyber risk analyst, bringing with him a stunning track record of discovering major data breaches and vulnerabilities across the digital landscape. Chris comes to us from his previous role as a digital security researcher, where among other achievements, he discovered a publicly accessible database containing the voter registration records for 93.4 million Mexican citizens, protecting more than seventy percent of the country’s population from the risk of exposure of their personal information.
A funny thing has happened as the digitization of business has sped up over the last ten years: process cadence has not kept up. Regulatory compliance standards often use quarters, or even years, as audit intervals, and in unregulated industries that interval can be longer still. But in the data center, changes happen all the time, changing the risk profile of the business along with them. Determining which changes are the root cause of a problem can be the difference between fixing it and having it happen again.
Going from nothing to automation using one of the many tools available can be a daunting task. How can you automate systems when you’re not even 100% sure how they’ve been configured? The documentation is months out of date and the last guy to configure anything on that box has since left the company to ply his trade somewhere that will more fully appreciate his Ops cowboy routine.
When it comes to measuring success for your team, finding a reliable and accurate means for doing so can be more difficult than it might appear. UpGuard's VP of Product, Greg Pollock, wrote about his insights into instituting such metrics and understanding the difference between "behavior" and "results."
Few corporate rivalries are as legendary as these two enterprise contenders; admittedly, there have been more than a fair share of comparisons pitting the pair against each other over the last century. So we're offering a twist to the traditional cola challenge: how do Pepsi and Coke stack up in terms of cyber resilience? Read more to find out.
Leading security researchers have confirmed that the U.S. Air Force (USAF) suffered a massive data breach leading to the exposure of sensitive military data and senior staff information. Here's what you need to know about this latest security failure involving the U.S. government.
On February 18th, 2017, Google security researchers discovered a massive leak in Cloudflare's services that resulted in the exposure of sensitive data belonging to thousands of its customers. Here's what you need to know about the Cloudbleed bug and what can be done to protect your data.
Managing complexity in heterogeneous infrastructures is a challenge faced by all enterprise IT departments, even if their environments are relegated to *NIX or Windows. In the case of the latter, UpGuard's new RSoP/GPO scanning capability streamlines remediation and compliance efforts by enabling Windows operators to easily scan and monitor the disparate security configurations of their Active Directory (AD) instances and Windows endpoints.
As the two leading mobile telecom providers in the U.S., AT&T and Verizon are perpetually at war on almost all fronts—pricing, quality of service, network coverage, and more. But with data breaches at an all time high, security fitness may soon become a critical factor for consumers evaluating wireless service providers. Let's find out how the two compare when it comes to measures of enterprise cyber resilience.
Arby's announced last week that its recently disclosed data breach may impact 355,000 credit card holders that dined at its restaurants between October 2016 and January 2017. Are fast food vendors resilient enough to sustain future cyber attacks and—more importantly—protect consumers against online threats?
Booksellers and electronics retailers aren't the only brick-and-mortar businesses challenged by the rise of highly agile, online-only competitors—traditional retail banking institutions also face stiff competition from Internet-based consumer banking upstarts. But are these born-in-the-cloud banks and financial services offerings safer than their traditional counterparts? Let's take a look at the leading online banks to see if they're equipped to handle today's cyber threats.
On October 21st, 2016, DNS provider Dyn suffered the largest DDoS attack in history, leaving much of the Internet inaccessible to Europe and North America. The unprecedented event saw cyber attackers orchestrating swathes of Mirai malware-infected IoT and connected devices to perform DNS lookup requests from tens of millions of IP addresses—impressive automated hacking, but hardly sophisticated: the malware gained privileged access by using public, default passwords. Are IoT companies doing enough to secure their "things" against nefarious actors?
With all the conveniences of modern air travel—mobile check-ins, e-gates, in-flight wifi, and more—it's easy to assume that the world's leading airlines have addressed the inherent cyber risks of digitization. But the safety of in-air passengers is just one aspect of airline customer security; are these companies doing their best to protect customers against online security compromises? Let's take a look at the world's leading airlines to find out.
UpGuard's Events system provides a communication hub for sending the data that UpGuard gathers to external systems. Integration between technologies is critical to high-performing digital businesses, and UpGuard's Events system provides a simple way to get the information you need to the places where you need it.
Every year, leading tech/gadget vendors descend upon the world's largest consumer electronics show in an exuberant display of product design wizardry, cutting edge innovation, and of course—a requisite dose of ridiculousness. This year's focus was on connected cars and VR, with IoT device and wearable tech manufacturers out in full force, per the usual. Let's see how good the best of CES 2017 are at protecting customers against cyber attacks.
2016 was arguably the year when cybersecurity events entered into the global stream of consciousness, from the sabotage of national banks to the hacking of elections. And though we're barely into 2017, the breach announcements have already begun: on January 3rd, a data breach was discovered involving the sensitive data of health workers employed by the US military's Special Operations Command (SOCOM). An increase in government-related security incidents is one of our top predictions for 2017—here are 11 other cybersecurity predictions for the new year.
Retailers aren’t the only ones benefiting from increased sales around the holidays — scammers and hackers are seeing their own bump in business.
Last week, leading online education provider Lynda.com announced that its database of over 9.5 million accounts was compromised in a recent data breach. With the education space increasingly moving to the internet, are underlying technology providers doing their best to provide a safe learning environment to customers?
AAA predicts that a record number of Americans will be taking to the skies and roads this holiday season—103 million between Dec. 23-Jan. 2, a 1.5% increase over 2015. 57% of these travel reservations—that's 148 million travellers—booked online. Airfare/hotel/car rental comparison websites are an increasingly popular way to book travel these days, but how good are they at protecting their users' data? Let's take a look at the top 8 online travel aggregators' CSTAR ratings to find out.
As the holiday season approaches, the world’s fraudsters, scammers, and blackhats can take no small measure of yuletide cheer from their work in 2016 - a banner year for hacking. Call it the dark side of technological innovation, an equal and opposite reaction to the increasing breadth and efficiency of the internet. 2016 was a record-breaking year for data breaches, affecting spheres of life like never before - from a presidential election rife with electronic intrigue, to a business landscape increasingly shaped by hacking. But if there is a silver lining to be found in the most damaging data breaches of 2016, it is that some of the worst hacks exploited well-known vulnerabilities which could’ve been easily prevented.
At the start of 2015, Gartner predicted that DevOps adoption would evolve from a niche to mainstream enterprise strategy, resulting in 25% of Global 2000 companies drinking its Kool-Aid by 2016. And while the hype—tempered by the realities of implementation—has more or less died down as of late, the methodology's value to enterprises is no longer a debatable matter. Here are some highlights from 2016 detailing how the year panned out for DevOps and its practitioners.
On November 29th, after a high-profile year of published leaks and hacks targeting the Democratic Party, Wikileaks struck once more, albeit against an unexpected target: HBGary Federal, a now-defunct government contracting affiliate of the eponymous cybersecurity firm. It was not a name unfamiliar to online observers; in 2011, HBGary Federal CEO Aaron Barr had boldly claimed to have identified the leading members of internet hacking collective Anonymous, drawing attention from federal investigators eager to identify and arrest the culprits behind DDoS attacks in support of Wikileaks.
Vulnerability assessment is a necessary component of any complete security toolchain, and the most obvious place to start for anyone looking to improve their security. Ironically, starting with vulnerability assessment can actually degrade an organization's overall defense by shifting focus from the cause of most outages and breaches: misconfigurations.
Once upon a time, video gaming was strictly an offline, console-based affair. Even PC-based titles were relegated to the safe confines of the player's local desktop machine. The arrival of affordable and ubiquitous high-speed internet transformed gaming into a highly interactive online activity; these days, the online component is an integral part of gameplay. But are gaming vendors doing enough to protect users against today's cyber threats?
It’s hard to believe Thanksgiving is almost here, and with it, the frenzy of the holiday shopping season fast approaches. Whether you are camping out overnight for “Black Friday” bargains, or waiting for the online deals of “Cyber Monday,” the odds are you are more nervous than ever about the safety and security of your financial information against holiday scammers. At least, so indicate the results of UpGuard’s survey of over 1,200 respondents in November 2016. The survey finds that 95% of consumers are to some degree concerned about the security of their information online, and more than half would break with their favorite brands if they knew their information was at risk; full survey results can be viewed here.
Containers are all the rage these days, and for good reason: technologies such as Docker and CoreOS drastically simplify the packaging and shipping of applications, enabling them to scale without additional hardware or virtual machines. But with these benefits come issues related to management overhead and complexity—namely, how can developers quickly achieve visibility and validate configurations across distributed container clusters? The answer is with UpGuard's new etcd monitoring capabilities.
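To make the visibility problem concrete: etcd's v2 keys API (`GET /v2/keys/?recursive=true`) returns cluster configuration as a nested tree of directory and leaf nodes. A sketch of a helper that flattens that response into a plain key-value map, the shape you would diff against an expected baseline, is below (the fixture in the usage note is illustrative).

```python
def flatten_etcd_nodes(node, out=None):
    """Flatten the nested 'node' structure returned by etcd's v2 keys API
    (GET /v2/keys/?recursive=true) into a flat {key: value} dict.

    Directory nodes carry "dir": true and a "nodes" list; leaf nodes carry
    a "key" and a "value".
    """
    if out is None:
        out = {}
    if node.get("dir"):
        for child in node.get("nodes", []):
            flatten_etcd_nodes(child, out)
    else:
        out[node["key"]] = node.get("value")
    return out
```

Given the parsed JSON response, `flatten_etcd_nodes(response["node"])` yields a dict that can be compared across cluster members or over time to spot configuration drift.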
Policies are an important part of how UpGuard works, but in large implementations, policy bloat can make managing different groups of devices unwieldy. To combat this, UpGuard has implemented policy variables and variable override options in version 2.29 to allow people to better use a single policy across multiple groups. Out-of-the-box policies don’t always offer the necessary flexibility to adjust to real environments, but with UpGuard’s policy variables and overrides, administrators can adjust their expected configurations to apply to multiple systems or environments, taking into account their differences, and allowing them to focus on maintaining the configurations they care about.
Several of the world's leading airlines are getting the travel season off to a rocky start: last week, American Airlines and Alaska Airlines resolved a technical glitch that caused reservation/check-in outages and delays across 15 flights. With the holidays approaching, can airlines weather mounting losses caused by their aging computer systems and IT infrastructures?
Your website's perimeter security couldn't be any better: sitewide SSL and DMARC/DNSSEC are enabled, software versions aren't being leaked in your headers, and all other resilience checks are green. But how secure is your mobile app? Unfortunately, like most companies, you've outsourced mobile app development to a third-party agency and have little visibility into their security practices. And if your app supports Facebook and Google sign-ons, you may be in trouble: a security team recently discovered an OAuth 2.0 flaw that's already left over a billion apps exposed.
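The class of OAuth flaw described here generally comes down to a backend trusting client-supplied identity without validating the token itself. A sketch of the standard claim checks a backend should make on a decoded OpenID Connect ID-token payload is below; `aud`, `iss`, and `exp` are standard claims, the issuer values are Google's documented ones, and the client ID in the test is hypothetical. (Verifying the token's signature, or fetching it from the provider's tokeninfo endpoint, must happen before these checks; this covers the claims only.)

```python
import time

GOOGLE_ISSUERS = {"accounts.google.com", "https://accounts.google.com"}

def id_token_claims_valid(claims, expected_client_id, now=None):
    """Minimal claim validation for a decoded Google ID-token payload."""
    now = time.time() if now is None else now
    if claims.get("aud") != expected_client_id:   # token minted for another app
        return False
    if claims.get("iss") not in GOOGLE_ISSUERS:   # wrong identity provider
        return False
    if float(claims.get("exp", 0)) <= now:        # expired token
        return False
    return True
```

The `aud` check is the one most often skipped, and skipping it is precisely what lets a token issued to a malicious app be replayed against your backend.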
Last month, around 1.3 million records belonging to over half a million blood donor applicants were breached when the Australian Red Cross' web development agency Precedent left a database backup exposed on a public website. The venerable non-profit has since taken responsibility and apologized for the incident, despite being the fault of a third party agency. If anything, the mishap serves to illustrate that resilience—not stronger cybersecurity—is the key enabler of safe healthcare digitization.
Recently, New York’s Department of Financial Services and Gov. Andrew Cuomo released their long-awaited proposal for cybersecurity regulations regarding banking and financial services companies. The proposal, if implemented, would be the first mandatory state-level regulations on cybersecurity and promises to deliver sweeping protections to consumers and financial institutions alike. In Gov. Cuomo’s words: "This regulation helps guarantee the financial services industry upholds its obligation to protect consumers and ensure that its systems are sufficiently constructed to prevent cyberattacks to the fullest extent possible."
Government, politics, and cybersecurity—these topics may seem plucked from recent U.S. election headlines, but they're actually themes that have persisted over the last decade, reaching a pinnacle with the massive OPM data breach that resulted in the theft of over 22 million records—fingerprints, social security numbers, personnel information, security-clearance files, and more. Last month, a key government oversight panel issued a scathing 241-page analysis blaming the agency for jeopardizing U.S. national security for generations. The main culprit? Lack of visibility.
This is not an opener for a sex-ed public service announcement, but in fact the million-dollar question for today's enterprise CISOs and CROs: which vendor in the supply chain will prove to be the riskiest bedfellow? With 63% of all data breaches caused directly or indirectly by third-party vendors, enterprise measures to bolster cyber resilience must now include the evaluation of partners' security as part of a broader cyber risk management strategy. Easier said than done: most third parties are unlikely to admit to their security shortcomings, and—as it turns out—even if they did, most firms wouldn't believe them anyway.
Last week, leading global ERP vendor SAP was busier than usual in the patch department: it released a record number of fixes in a single month, addressing 48 vulnerabilities—one of them an authentication bypass vulnerability previously left unaddressed for three years. Given how mission-critical ERP systems are for centralizing business operations these days, is it safe to assume that ERP vendors are serious about their customers' security? Let's take a look at the leading solution providers in this category to find out.
DevOps has proven to be more than just an industry buzzword, but as the term starts to gain widespread use in modern software development parlance, an emerging successor has begun to take hold: Rugged DevOps, also known as SecDevOps/DevSecOps. RSA Conference (RSAC) 2016 dedicated a track to the emerging practice earlier this year, so it's likely to become as prevalent as its predecessor by next year's end—especially since RSAC plans to highlight the methodology again in 2017.
As enterprises resign themselves to the sobering fact that security compromises are unavoidable, another resulting inevitability is coming into play: ensuing lawsuits and class actions spurred by data breaches and customer data loss. Last week, the Republican presidential nominee's hotel chain and the U.S.' third largest search engine came to terms with this reality. What does the future hold for organizations facing inexorable data breaches coupled with the spectre of resulting litigation?
Does filling out an online survey in exchange for a few bucks sound too good to be true? For ClixSense users, this is turning out to be the case: last week, the leading paid-to-click (PTC) survey firm admitted to a massive data breach involving virtually all of its users' accounts—roughly 6.6 million records in total. With so many giving in to the allure of easy money, PTC firms should be on top of securing the privileged data of the survey takers they're bankrolling. Let's find out how the top 5 compare when it comes to fulfilling this critical responsibility.
For Spotify CEO Daniel Ek, the goal for the rest of 2016 should be simple: don’t rock the boat. The Swedish music streaming service, which is widely expected to go public late next year, is already locked in enough significant conflicts to occupy most of Ek’s waking hours.
Essential to enterprise security, or a waste of time? Security professionals' opinions regarding penetration testing (pen testing) seem to fall squarely on either side of the spectrum, but—as with most IT practices—its efficacy depends on application and scope. And while pen testing alone is never enough to prevent data breaches from occurring, information gleaned from such efforts nonetheless plays a critical role in bolstering a firm's continuous security mechanisms.
Leading cloud storage provider Dropbox is arguably having its worst month since launching back in 2007—but with over half a billion users, it's somewhat surprising that serious issues have only begun to surface between the ubiquitous service and the people trusting it with their files. First, in a recent announcement reminiscent of LinkedIn's latest data breach fiasco, Dropbox announced several weeks ago that over 68 million emails and passwords were compromised in a previously disclosed 2012 data breach. And now, security experts are criticizing the company for misleading OS X users into granting admin password access and root privileges to their systems. What recourse do consumers have when cloud services providers "drop the box" on security, or even worse—when their actions directly jeopardize the users they're supposed to protect?
As election year moves into the final stretch, news coverage wouldn't be complete without another mention of a politically motivated data breach or cybersecurity incident. Of course, several months ago the DNC's emails were compromised by hackers, resulting in the theft and exposure of 19,000 emails and related documents. This pales in comparison, however, to the recent FBI announcement of data breaches involving both Illinois and Arizona's voter registration databases. If the controls critical to securing election systems continue to fail, how can participants in the democratic process be sure that their votes won't be hijacked?
When you use the internet, your computer has a conversation with a web server for every site you visit. Everything you submit in a form, any data you enter, becomes part of that conversation. The purpose of encryption is to ensure that nobody except you and the server you’re talking to can understand that conversation, because it often includes sensitive information such as usernames and passwords, credit card data, and Social Security numbers. Eavesdropping on these digital conversations and harvesting the personal information contained therein has become a profitable industry. But encryption isn’t an on/off switch. It requires careful configuration. In other words, the padlock isn’t always enough.
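One way to see what "careful configuration" means in practice is Python's `ssl` module. This sketch (ours, not from the original post) builds a client context that refuses legacy protocol versions and keeps certificate verification on—the two settings a careless configuration most often gets wrong:

```python
import ssl

# Start from Python's hardened default client context, then tighten it
# further by refusing anything older than TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3 / TLS 1.0 / 1.1

# The default context already performs the checks that make the padlock
# meaningful: certificate validation and hostname verification.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

Disabling either of those checks (as plenty of copy-pasted snippets do with `CERT_NONE`) leaves the connection encrypted but trivially interceptable—the padlock without the lock.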
Organizations often regard cybersecurity as a series of barricades protecting the inner workings of the data center from attacks. These barricades can be hardware or software and take actions such as blocking ports, watching traffic patterns for possible intrusions, encrypting communications and so forth. In practice, these measures are only part of a comprehensive cybersecurity strategy, and by themselves will do little to bolster the overall resilience of an organization. But thoroughly tested and streamlined procedures within IT operations can prevent the most common attack point on the internet: misconfigurations.
Our new digital reputation scan provides a fast and easy way to get a risk assessment for your (or any) business. We look at the same stuff that other external risk assessment tools do: SSL configurations, breach history, SPF records and other domain authenticity markers, blacklists, and malware activity. We're happy to offer this service for free, because that information is public and we believe that it's what's inside that really matters. Most of the elements we include in our external assessment are not controversial, but one resulted in arguments lasting several days: the CEO approval rating. In selecting which checks would go into our risk assessment, we here at UpGuard looked at similar site assessment tools and selected only the checks that we thought were relevant to our goal: risk assessment, which overlaps with, but isn't identical to, website best practices. Plus, there are already fine tools for performing those best practices functions, so why duplicate them? We also intentionally omitted checks we thought would not be significant for calculating the risk of data breach and the damage it would cause.
If you regularly use a computer, chances are you spend at least part of your time reading internet news. If you have a subscription, you might even log in and enter your payment info. But how secure are news sites? Here at UpGuard, we took a look at six of the top news media sites on the internet to see how their security stacked up. Many big names had low scores, while a few did very well. What does this mean for the average online news reader?
Years ago, our company set out with a mission to solve a problem of trust between software developers and admins. We knew the problem existed firsthand—at our old jobs in a large Australian bank, one of us had been developing software and the other managing operations. We had a disagreement about how to proceed with a deployment. Dev insisted everything was ready but Ops pushed back, saying there was not enough information to trust the changes about to take place. We each saw merit in the other's argument and knew this had to be happening everywhere, so we left our 9-to-5s to build a solution.
Whether you’re deploying hundreds of Windows servers into the cloud through code, or hand-building physical servers for a small business, having a proper method to ensure a secure, reliable environment is crucial to success. Everyone knows that an out-of-the-box Windows server may not have all the necessary security measures in place to go right into production, although Microsoft has been improving the default configuration in every server version. UpGuard presents this ten-step checklist to ensure that your Windows servers have been sufficiently hardened against most attacks.
Online business has made traveling for events like the Olympics easier and faster by putting everything from airlines to hotel rooms at the fingertips of anyone with a smartphone and an internet connection. But transferring your personal and financial data across the internet is only as secure as the companies on the other end make it, and from site to site there can be a vast difference of risk. The differences don't necessarily come where you'd expect either, with many popular organizations having middling to low security practices. How can you know who to trust?
For believers in the old adage that the love of money is the root of all evil, it comes as no surprise that most data breaches are carried out for financial gain. Verizon's 2016 Data Breach Investigations Report (DBIR) reveals that 75 percent of cyber attacks appear to have been financially motivated; suffice it to say, ATMs are constantly in the crosshairs of cyber attackers.
Facebook's Mark Zuckerberg, Google's Sundar Pichai, Twitter's Jack Dorsey—what do these three high-flying CEOs have in common? Their social media accounts were all hijacked recently due to bad password habits. To be fair, these breaches occurred indirectly as a result of triggering events—for example, the massive LinkedIn data breach led to Zuckerberg's Twitter account getting hijacked—but one thing is for certain: the executive leadership of the world's leading tech companies are as prone to password management mishaps as the rest of us. And—as the latest LastPass vulnerability serves to illustrate—password management solutions may no longer be a safe alternative for memorizing passwords.
In 2015, organizations spent over $75 billion on cybersecurity. That’s a lot of money. But 2015 also saw a rise in successful cyber attacks, costing companies hundreds of billions of dollars in damages, loss and other related expenditures. Did all of the security software and hardware purchased with that $75B fail to do its job? Today's landscape requires more than just a collection of isolated products handling specific tasks—it needs an integrated ecosystem dedicated to overall resilience.
Tuesday July 12th is online retail giant Amazon’s self-styled “Prime Day,” and the potential deals mean a surge in online shopping. Designing systems and applications to handle the amount of traffic a site like Amazon sees day to day, much less during promotions like Prime Day, can be difficult in and of itself. Throw in the complexity of cybersecurity and it becomes clear why so many online retailers have trouble keeping up. Amazon itself has relatively good security, but what exactly does that mean for customers? We’ll look at what measures Amazon has in place, what they mean, and a few simple steps to tighten security even further.
You've seen enough Hollywood blockbusters about casino heists to know that gambling institutions are constantly in the crosshairs of attackers—online and off. In the digital realm, however, better malware tools and access to deep funding make today's cyber criminals a threat far beyond any movie plot, especially when lucrative payloads are there for the taking.
Since 2000, the nonprofit Center for Internet Security (CIS) has provided the public service of creating and distributing hardening guidelines for common operating systems and applications. Alongside documents describing which configurations to check, how they should be set, and how to fix them, CIS also offers a software solution that can analyze a system for compliance with the CIS benchmarks. Despite those resources, and their criticality for information security, the fact remains that becoming and staying secure is a persistent problem. Why is system hardening so hard?
There are really only a few ways to get funding: an individual backer such as a venture capitalist or billionaire; a partnership or strategic investment by a corporation or state agency; or getting a large number of people to each give you a very small amount of money. Crowdfunding websites claim to offer a platform for the latter, giving inventors, artists, and small businesses a method by which to propel themselves on the merits (or popularity) of their ideas, without needing the inside connections or extensive business acumen the other methods usually require. But because all of the transactions involved in crowdfunding take place on the internet, cybersecurity should be a number one concern for both users and operators of these websites. We used our external risk grader to analyze 7 crowdfunding industry leaders and see how they compare to each other and other industries.
Cybersecurity news items are usually one of two things: your "run-of-the-mill" data breach announcement or vulnerability alert, usually software-related. This week's Symantec fiasco falls into the latter bucket, but it isn't your average vulnerability alert. In fact, this is the one that most enterprise security professionals have been dreading: that your security defenses are not only ineffective—they can be used against you by attackers.
No, we aren't talking about your burger-inhaling operator passing out on the job, leaving your precious IT assets unattended. You've probably guessed that we're referring to the latest Wendy's data breach announcement: on June 9th, the international fast food chain disclosed that its January 2016 security compromise was, in fact, a lot worse than originally stated—potentially eclipsing the Home Depot and Target data breaches.
A few days ago, Taiwanese computer manufacturer Acer disclosed that "a flaw" in their online store allowed hackers to retrieve almost 35,000 credit card numbers, including security codes, and other personal information. Most of the major personal computer retailers have online stores like Acer's, allowing people to buy directly from the manufacturer, rather than through a reseller like Amazon. But how secure are these digital outlet stores, and what are the chances that if you use them you'll end up like Acer's customers? We examined seven industry leaders with our external risk grader to see how they stacked up, and unfortunately, Acer wasn't alone in its security practices.
Glassdoor's 2016 Employees' Choice Awards Highest Rated CEO List includes household names like Marc Benioff, Mark Zuckerberg, and Tim Cook—CEOs of companies that also score high marks for strong security. Is there any correlation between a company's cyber risk profile and its CEO employee approval rating?
The term cyber risk is often used to describe a business’ overall cybersecurity posture, i.e., how much risk a business faces given the measures it has taken to protect itself. It’s often coupled with the idea of cyber insurance, the coverage that bridges the gap between what a company can do security-wise and the threats it faces day in and day out. Cybersecurity used to belong exclusively in the realm of Information Technology, one of many business silos that, while important, was only a small piece of the business and as such was often delegated to a C-level manager who interfaced with other executives as necessary. Today’s businesses have outgrown this model, as what used to be considered information technology has grown to encompass business itself, permeating every aspect of it, governing its speed, its range, its possibilities. As a CEO or CFO, the way your business handles information technology and begins to foster cyber resilience reflects the way you think about your company and its place in the contemporary market.
A routine fill-up at the local gas station or ATM withdrawal might cost you dearly these days. With the recent surge in ATM and gas pump skimming attacks, you certainly wouldn't be alone—in fact, the odds are one in three that you'll fall victim to identity theft once your financial data is swiped. Is there any hope in an increasingly hostile landscape rife with external threats?
It’s 2016 and you have a cell phone. You also probably pay your cell phone bill online or through an app. Telecom companies handle the world’s communication and part of what that entails is securing that communication to guarantee privacy and integrity to their customers. Here at UpGuard, we scanned ten of the major telecom corporations with our external risk grader to see how their web and email security measured up. These are big money companies with many moving parts, but we’re focusing on the primary web presence a person would consider, for example www.att.com. Turns out there’s some good news and some bad news... depending on which carrier you use.
Yesterday you might have read about Facebook founder and user Mark Zuckerberg’s social media accounts getting “hacked.” Hacked is maybe not the right word here, since many people believe Zuck’s password was among the 117 million leaked LinkedIn passwords recently posted online. If this is true, it means that Zuckerberg used the same password for multiple websites, allowing the damage done by the LinkedIn hack to spread into other areas. If you have or want a job, chances are you also have a LinkedIn account, and if you had one back in 2012, it was probably one of the compromised accounts from that incident. Do you still use that password anywhere? Our 9-step password security checklist will help you secure your accounts, whether you’re a billionaire CEO or just someone who likes to post funny cat videos.
The North American Electric Reliability Corporation (NERC) creates regulations for businesses involved in critical power infrastructure under the guidance and approval of the Federal Energy Regulatory Commission (FERC). A few of these, the Critical Infrastructure Protection (CIP) standards, protect the most important links in the chain and are enforced under penalty of heavy fines for non-compliance. Many of the CIP standards cover cybersecurity, as much of the nation’s infrastructure is now digital. To prove compliance with CIP standards, companies must have a system of record that can be shown to auditors to prove they have enacted the required security measures to protect their cyber assets.
Chances are, if you've any semblance of a professional life, you probably have a corresponding LinkedIn account to show for it. And if that's the case, your data was likely stolen in the massive 2012 data breach, now thought to be more expansive than originally posited. Last week, the world's largest professional social network sent out a notice stating that its initial announcement of 6.5 million stolen passwords turns out to be quite off—by about 110.5 million.
The NERC CIP v5 standards will be enforced beginning in July of this year, but version 6 is already on the horizon. Previously, we examined the differences between v3 and v5, and we saw how the CIPs related to cybersecurity were evolving. This pattern continues in v6, with changes coming to some of the cyber CIPs and the addition of standards regarding “transient cyber assets and removable media,” but the major changes in v6 have to do with scope: which facilities are required to comply, and whether they must comply at the low, medium, or high impact level. We’ll examine some of the differences coming up in CIPv6 and what they will mean for the industry.
While it’s not certain that society would descend into a zombie apocalypse overnight if the power grids failed, it is hard to imagine how any aspect of everyday life would continue in the event of a vast, extended electrical outage. Part of what makes electrical infrastructure resilient against these types of events are the North American Electric Reliability Corporation (NERC) regulatory standards, especially the Critical Infrastructure Protection (CIP) standards, which provide detailed guidelines for both physical and cyber security. The CIP standards evolve along with the available technology and known threats, so they are versioned to provide structured documentation and protocols for companies to move from one iteration of the standards to the next. But the jump from version 3 to version 5 involves many new requirements, so we'll look at some of the differences between the two and what they mean for businesses in the industry.
Salesforce.com's recent day-long outage—what many tech journalists have been referring to as "Outage #NA14"—may actually end up costing the firm $20 million, according to financial services firm D.A. Davidson's estimates. The untimely incident occurred just as the company was gearing up to report its Q1 earnings; luckily, $20 million is a drop in the bucket compared to $1.92 billion, Salesforce.com's best first quarter yet. This may be enough to pacify Wall Street analysts, but can the world's largest business SaaS provider sustain another outage of similar proportions or greater?
Whether you’re running Microsoft’s SQL Server (soon to run on Linux) or the open source MySQL, you need to lock down your databases to keep your data private and secure. These 11 steps will guide you through some of the basic principles of database security and how to implement them. Combined with a hardened web server configuration, a secure database server will keep an application from becoming an entry point into your network and keep your data from ending up dumped on the internet. When provisioning a new SQL server, remember to factor security in from the get-go; it should be a part of your regular process, not something applied retroactively, as some key security measures require fundamental configuration changes for insecurely installed database servers and applications.
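One principle underlying any database lockdown is keeping the application from turning user input into SQL. As a minimal sketch (using SQLite purely for illustration—the same parameterized style applies to the SQL Server and MySQL drivers), compare what an injection attempt does against a parameterized query:

```python
import sqlite3

# Throwaway in-memory database with one "secret" row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "alice' OR '1'='1"  # a classic injection attempt

# Parameterized query: the driver treats the whole string as data,
# not SQL, so the injection attempt matches no rows.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attack,)
).fetchall()
print(rows)  # []
```

Had the query been built by string concatenation, the `OR '1'='1'` clause would have matched every row—exactly the kind of entry point a hardened database configuration is meant to contain.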
The Mac is undeniably the platform of choice for designers and artists, and for good reason. Apple's designers—and Steve Jobs in particular, according to legend—took special care to make even the first Macs superior to PCs in ways that would matter to those in visual fields. Font selections and type rendering on computers, as one example, were decidedly crude prior to the Macintosh. It's a minor detail for the number cruncher or spreadsheet user, but can mean everything to those in the arts. For that reason and others like it, Apple has enjoyed the unflinching endearment of a certain subset of users.
Arguably—in that people literally argue about it—there are two types of web servers: traditional servers like Apache and IIS, often backhandedly described as “full-featured,” and “lightweight” servers like lighttpd and nginx, stripped down for optimum memory footprint and performance. Lightweight web servers tend to integrate better into the modern, containerized environments designed for scale and automation. Of these, nginx is a frontrunner, serving major websites like Netflix, Hulu, and Pinterest. But just because nginx beats Apache in performance doesn’t mean it’s immune from the same security problems the old heavyweight endures. By following our 15-step checklist, you can take advantage of nginx’s speed and extensibility while still serving websites secured against the most common attacks.
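To make the idea concrete, here is a toy check (ours, not part of the checklist itself) for two hardening directives of the kind such a checklist covers: hiding the server version banner and restricting SSL protocols. The directive names are real nginx settings; the config text and helper function are illustrative:

```python
# Directives a hardened nginx config should contain (illustrative subset).
REQUIRED = ("server_tokens off;", "ssl_protocols TLSv1.2")

def missing_directives(config_text: str):
    # Naive substring scan; a real audit would parse the config properly.
    return [d for d in REQUIRED if d not in config_text]

sample = """
server {
    server_tokens off;
    ssl_protocols TLSv1.2;
}
"""
print(missing_directives(sample))  # []
print(missing_directives("server { }"))  # both directives missing
```

Checks like this are easy to wire into a deployment pipeline so a config regression is caught before it ever serves traffic.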
A new high severity vulnerability in the OpenSSL library was announced today that could allow an attacker to cause memory corruption in devices handling SSL certificates. The vulnerability was caused by a combination of bugs, one a mishandling of negative zero integers, and the other a mishandling of large universal tags. When both bugs are present, an attacker can trigger corruption by causing an out-of-bounds memory write.
Last week the Australian government announced a new cybersecurity initiative that will cost upwards of AU$240 million and create 100 “highly specialized” jobs. This comes on the heels of Obama’s February announcement of the Cybersecurity National Action Plan, which hopes to establish a cybersecurity committee and create a 3.1 billion dollar “modernization fund.” With business and communications now done almost entirely online, it makes sense that governments are taking cybersecurity seriously, but what does it mean for the state to establish a cybersecurity presence and how will these initiatives ultimately play out? We’ll look at the details of both plans and how they align with their respective governments’ cybersecurity actions, as well as their potential impact on citizens.
You’ve hardened your servers, locked down your website and are ready to take on the internet. But all your hard work was in vain, because someone fell for a phishing email and wired money to a scammer, while another user inadvertently downloaded and installed malware from an email link that opened a backdoor into the network. Email is as important as the website when it comes to security. As a channel for social engineering, malware delivery and resource exploitation, a combination of best practices and user education should be enacted to reduce the risk of an email-related compromise. By following this 13 step checklist, you can make your email configuration resilient to the most common attacks and make sure it stays that way.
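One of the best-practice controls such a checklist covers is publishing an SPF policy so receivers can reject mail forged from your domain. As a toy sanity check (the helper is ours; real validation should use a proper SPF library and a live DNS lookup), this only inspects the shape of the TXT record:

```python
def looks_like_spf(txt_record: str) -> bool:
    """Rough shape check for an SPF policy published as a DNS TXT record."""
    parts = txt_record.strip().split()
    return (
        bool(parts)
        and parts[0] == "v=spf1"  # SPF records must start with the version tag
        # The final mechanism states what to do with non-matching senders;
        # "-all" (hard fail) is the strict choice, "~all" the soft one.
        and parts[-1].lstrip("+-~?").endswith("all")
    )

print(looks_like_spf("v=spf1 include:_spf.google.com -all"))  # True
print(looks_like_spf("just some TXT record"))                 # False
```

A record that passes this check at least declares a complete policy; one that ends without an `all` mechanism leaves forged senders in a gray area many receivers simply accept.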
Putting a website on the internet means exposing that website to hacking attempts, port scans, traffic sniffers and data miners. If you’re lucky, you might get some legitimate traffic as well, but not if someone takes down or defaces your site first. Most of us know to look for the lock icon when we're browsing to make sure a site is secure, but that only scratches the surface of what can be done to protect a web server. Even SSL itself can be done many ways, and some are much better than others. Cookies store sensitive information from websites; securing these can prevent impersonation. Additionally, setting a handful of configuration options can protect your full website presence against both manual and automated cyber attacks, keeping your customers’ data safe from compromise. Here are 13 steps to harden your website and greatly increase the resiliency of your web server.
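That "handful of configuration options" mostly surfaces as HTTP response headers and cookie flags. As a sketch (the helper function is ours; the header names and values are standard, commonly recommended settings, not a complete policy):

```python
def security_headers() -> dict:
    """Commonly recommended hardening headers for a web server to emit."""
    return {
        # Force browsers onto HTTPS for a year, subdomains included.
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
        # Stop MIME-type sniffing.
        "X-Content-Type-Options": "nosniff",
        # Refuse to be framed, blocking clickjacking.
        "X-Frame-Options": "DENY",
        # Only load scripts/styles/etc. from this origin by default.
        "Content-Security-Policy": "default-src 'self'",
    }

# Cookies get their own flags: Secure (sent over HTTPS only) and
# HttpOnly (invisible to JavaScript) help prevent session impersonation.
cookie = "session=abc123; Secure; HttpOnly; SameSite=Strict"
```

Every mainstream web server (nginx, Apache, IIS) can add these headers in a few lines of configuration, so they're among the cheapest of the 13 steps to implement.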
You’ve spent months with your team designing your company’s security strategy: you’ve demoed and chosen vendors, spent money, and assured your users that this investment will pay off by keeping their business safe. The next thing you know, the very software you’ve put in place to protect your data is exposing it instead. This nightmare scenario has turned into reality for some companies when major security software was compromised or had fatal flaws that exposed sensitive information to unknown third parties. Just because you sell security doesn’t mean you always practice it.
That’s a nice new Linux server you got there… it would be a shame if something were to happen to it. It might run okay out of the box, but before you put it in production, there are 10 steps you need to take to make sure it’s configured securely. The details of these steps may vary from distribution to distribution, but conceptually they apply to any flavor of Linux. By checking these steps off on new servers, you can ensure that they have at least basic protection against the most common attacks.
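One concrete check from the file-permissions side of such a hardening pass is flagging files that any user on the system can modify. A minimal sketch (the function name is ours, not from the checklist):

```python
import os
import stat
import tempfile

def world_writable(path: str) -> bool:
    """True if every user on the system may write to this file."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

# Demonstrate on a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o666)
print(world_writable(path))  # True: others have write access
os.chmod(path, 0o644)
print(world_writable(path))  # False: owner-writable only
os.unlink(path)
```

Run across `/etc` or a service's data directory, a check like this catches the quiet permission drift that turns a hardened server back into an easy target.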
Are you filing your taxes online this year? As e-filing and internet connected tax software becomes more and more standard, the security of the sites accepting your sensitive information becomes more and more important. You've probably heard about some of the various data breaches facing the tax industry, including one of the IRS in May of 2015, potentially exposing hundreds of thousands of tax records. UpGuard's external risk grader measures the security of a company's internet presence. We ran ten tax-related websites through to see how they stacked up and the results are interesting. Perhaps most interesting of all, IRS.gov received a rare perfect score of 950 out of 950. Tax software websites such as TaxSlayer fared well too. But as we'll see, the external information is just the tip of the iceberg.
There's no arguing that internet retailers have it tough these days: web server vulnerabilities, expiring SSL certificates, PCI DSS compliance, and a host of other issues keep the most vigilant of etailers on their toes—all this, mind you, against a harsh backdrop of increasing cyber threats. Even still, a handful manage to slip up when it comes to the most basic security measures, putting both their infrastructures and the data security of customers at risk. The following is a list of 11 online retailers who should know better.
People commonly use the phrase “security through obscurity” to refer to the idea that if something is “hidden” or difficult to find, it becomes more secure by virtue of other people not knowing it’s even there to be exploited. But in reality, security through obscurity usually means that the only people who find obscure resources are the people looking to exploit them for a way in. This is why visibility, rather than obscurity, increases security. Our website risk grader provides people with an easy way to view a website's security rating by offering visibility into their internet-facing footprint. This also allows businesses to monitor their own improvement over time.
Another regular season is underway as teams—fresh from spring training—dive head first into a sea of possibilities: will the Cubs win a World Series this year? How about those Mariners? Who will be this year's Hall of Famers? For fans, another question is increasingly becoming the subject of bar room chatter: which team will be hacked this season?
Your medical records live in a database or file system on servers somewhere, on someone’s network, with someone’s security protecting them. A recent PBS article about cybersecurity in the healthcare industry reports that over 113 million medical records were compromised in 2015. Medical records, perhaps even more than financial data, are the epitome of sensitive, private data, yet the healthcare industry has reported breach after breach, with over a dozen separate breaches already logged in March of this year.
When it comes to Flash, the only thing you hear more about than its ubiquity are its problems. Despite denunciations from some of technology’s biggest names, Adobe’s Flash player still seems to be everywhere. For almost ten years now, people have been dealing with the security warnings, critical updates and browser incompatibilities for which Flash is infamous. Yet even now, 0-day exploits of Flash’s seemingly unending vulnerabilities threaten users as third-party Flash ads on otherwise trusted websites are used to breach security.
In the last few years, sports betting websites like DraftKings and FanDuel have exploded in popularity and controversy. Anyone who watched last year’s NFL season shouldn’t be surprised that those two sites alone spent over $200M on national television advertising in 2015, amounting to around 60,000 commercials. At the same time, betting sites have been in the news due to their questionable legality and the lawsuits being brought against them from various parties. With March Madness in full effect, people are turning to online gambling sites to place their bets. Aside from the increasing legal resistance these companies face, should users be concerned about the security of sharing their information with these sites? As it turns out, it depends on the site.
Cyber attackers are, above all else, opportunists—malware and viruses require time and resources to develop and are therefore created with the greatest returns in mind. In terms of operating systems, Windows typically gets a bad rap for security—the price of popularity, as it were. But as other OS platforms have whittled down Windows' market share in recent years, cyber attackers have had an increasingly broad playing field for exploitation.
If you're one of its 140 million cardholders around the globe, American Express wants you to know that your data is safe. The data breach recently announced by the U.S.' second largest credit card network reportedly involved a partner merchant and not Amex itself. However, if you're one of the customers whose credit card and personal information was stolen, the difference is negligible.
The usability of software is usually defined in relation to the efficiency with which people can manipulate it. Is it time-saving, intuitive, likable? But often overlooked is how usability indirectly affects security, especially when dealing with enterprise software. The basic thesis is this: an application that's easier to use, easier to configure and manage both initially and over time, will also be more resilient than an application that's difficult or frustrating, even if the two have identical feature sets. This is because in practice, software is rarely, if ever, used in an ideal fashion.
First circulated in 2009, the CIS Critical Controls are used by both the U.S. and U.K. governments as the preeminent framework for securing critical infrastructure. Consisting of 20 security controls covering areas from malware defense to incident response and management, the CIS Critical Controls offer a prioritized set of security measures for assessing and improving a firm's security posture. Though not a cybersecurity panacea, the controls help to address the vast majority of security issues faced by organizations today.
Amazon.com suffered a glitch today leaving its website inaccessible for approximately 13 minutes. Seem like a paltry number? Only if these lost minutes aren't translated to sales revenue losses. And while outages with the company's AWS cloud computing offering are not uncommon, Amazon's online retail division—as well as all retailers that transact online—have much at stake literally every minute their websites stay up—or go down.
According to the recently released 2016 Data Breach Investigations Report (DBIR) digest, produced annually by Verizon to help educate the industry, companies spent hundreds of billions of dollars last year as a result of cybersecurity incidents.
Chances are you’ve browsed to an online IT community looking for information about a technology. But taking full advantage of them means understanding how they work and what they can do for you. Interaction with a tech community usually happens for one of three reasons:
We are students from the National University of Singapore on a one-year entrepreneurship program that brought us to Silicon Valley, where we have the opportunity to intern at a startup while taking courses at Stanford. Our primary reason for choosing UpGuard was the excitement of working in a fast-paced DevOps environment with experts and solving challenging, large-scale enterprise problems. The product enables complete visibility into IT infrastructure, tracks and manages change, and ultimately helps prevent downtime and breaches. Our time at UpGuard has not only contributed to our education, it has been nothing short of amazing.
The high likelihood of falling victim to security compromises has led firms to adopt more digitally resilient strategies. Unfortunately, these measures do not address the ominous threat of natural disasters looming on the horizon. A myriad of business continuity solutions exist to mitigate the effects of natural disaster-induced downtime, but there's no telling at the end of the day how digitally-dependent organizations will fare when catastrophic events of unprecedented proportions occur.
RSA 2016 is underway with the tagline "Where The World Talks Security," but for the most part it’s just that—a lot of talk. Attendees, speakers and vendors have come from all over the world to share insight and new products with their security-minded peers, and there will certainly be a few novel takeaways as in years past, but who is serious about security and who is just putting on a show for potential clients and investors?
On February 28th 2016, “grey-hat security research group” TeaMp0isoN breached Time Warner Cable’s Business Class customer support portal with a SQL injection attack, defacing the site and snatching a database dump with more than 4,000 records including usernames, email addresses and (encrypted) passwords.
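For context on the attack class: SQL injection works when user input is spliced directly into query text, and parameterized queries are the standard defense. Here is a minimal, self-contained sketch using Python's sqlite3; the table and data are hypothetical stand-ins, not Time Warner's actual schema.

```python
import sqlite3

# Hypothetical schema standing in for a customer portal's user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# A classic injection payload: closes the string literal, then appends
# a tautology so the WHERE clause matches every row.
user_input = "' OR '1'='1"

# Vulnerable pattern: user input spliced directly into the SQL text.
vulnerable_sql = f"SELECT email FROM users WHERE username = '{user_input}'"
leaked = conn.execute(vulnerable_sql).fetchall()
print(leaked)  # the whole table dumps, despite the bogus username

# Safe pattern: the driver binds the input as data, never as SQL syntax.
safe = conn.execute(
    "SELECT email FROM users WHERE username = ?", (user_input,)
).fetchall()
print(safe)  # [] -- no user is literally named "' OR '1'='1"
```

The same binding discipline applies to any database driver, which is why injection flaws in 2016 read less like sophistication and more like neglect.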
The Sony Pictures hack is turning out to be quite an intricate saga of misdeeds. From the tools and methods used to the ever-expanding sphere of destruction attributed to the Lazarus Group, ongoing forensics are shedding light on strikingly similar advanced persistent threat (APT) campaigns targeted at various other media, finance, and manufacturing firms around the globe. And while the sophistication of the attackers' tooling and methods is certainly to be reckoned with, the apparent emergence of DevOps-like enablement in the digital underworld is arguably greater cause for concern.
Fortune recently published an article listing the airlines with the best in-flight wifi service. Coming in at the top of the list with the most onboard wifi connections globally were three American carriers: Delta, United, and American Airlines, respectively. But what defines best? Security is clearly not part of the equation, as one journalist famously discovered last week on a domestic American Airlines flight. But then again, if we're talking about wifi and commercial aircraft, all airlines get a failing grade.
We've all heard the saying: hindsight is 20/20. This applies to many scenarios but is seldom the case when it comes to IT security: most organizations develop shortsightedness when it comes to data breaches—even those that may be happening right under their noses. Like a vehicle's side and rearview mirrors, retrospective security improves visibility by eliminating blind spots using past trends and historical data.
A buffer overflow—writing more data into a block of memory than was allocated—has been one of the most commonly exploited security vulnerabilities in recent years. Last week Google and Red Hat security researchers discovered a particularly distressing buffer overflow vulnerability in one of the key underpinnings of the internet: the glibc DNS bug. And while the glibc team has provided a fix for most Linux distros, it's questionable whether the flaw can be eradicated any time soon, especially given the ubiquity of Linux systems and the GNU Project's implementation of the C standard library.
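For triage of flaws like this, a practical first step is checking whether the local library version falls inside the affected range. A minimal sketch follows, assuming the commonly reported range for the glibc DNS bug (CVE-2015-7547: introduced in glibc 2.9, fixed in 2.23); verify against your distro's own advisory before relying on it, since vendors backport fixes to older version numbers.

```python
def parse_version(version):
    """Turn a version string like '2.22' into a comparable tuple (2, 22)."""
    return tuple(int(part) for part in version.split("."))

def glibc_vulnerable(version, first_bad="2.9", first_fixed="2.23"):
    """Return True if this upstream glibc version falls in the affected
    range for CVE-2015-7547 (introduced in 2.9, patched in 2.23).
    Distro backports can make a 'vulnerable' number actually safe."""
    v = parse_version(version)
    return parse_version(first_bad) <= v < parse_version(first_fixed)

print(glibc_vulnerable("2.22"))  # True: inside the affected range
print(glibc_vulnerable("2.23"))  # False: carries the upstream fix
```

On a live Linux system, the running version can typically be read with `os.confstr("CS_GNU_LIBC_VERSION")` and fed into the same check.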
When we think of protecting our information online, it’s usually in the context of traditionally sensitive data: credit card numbers, addresses, SSNs, and so on. But as anyone who has taken a picture of themselves wearing nothing but a smile can tell you, the information exchanged during online dating can be just as personal. I haven’t done that, though. Ever. I have never done it.
As the digital economy has matured, so has the recognition that cyber risk cannot be eliminated; it must be managed. Insurance is the mechanism by which we distribute risk so that rare but catastrophic events don't ruin the unfortunate person (or company) that they happen to. Accurately pricing cyber insurance, however, is still in its infancy. Comparing the methods for assessing cyber risk to those used in property and casualty insurance points the way forward for better methodologies.
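The pricing logic borrowed from property and casualty insurance can be sketched in actuarial terms: a premium starts from expected loss (breach frequency times severity), plus a loading for expenses and uncertainty. A toy illustration follows; every number and the 25% loading are hypothetical, not an actual cyber insurance pricing model.

```python
def pure_premium(annual_breach_probability, expected_loss):
    """Expected annual loss: breach frequency times severity."""
    return annual_breach_probability * expected_loss

def gross_premium(annual_breach_probability, expected_loss, loading=0.25):
    """Add a loading factor covering expenses, profit margin, and
    uncertainty in the underlying cyber risk model."""
    return pure_premium(annual_breach_probability, expected_loss) * (1 + loading)

# Hypothetical insured: 2% annual breach probability, $5M expected loss.
print(pure_premium(0.02, 5_000_000))   # 100000.0
print(gross_premium(0.02, 5_000_000))  # 125000.0
```

The hard part, of course, is the 2%: unlike fire or flood, cyber insurers lack decades of actuarial tables, which is why better risk assessment data matters so much.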
The answer is simple: because it's highly profitable. Credit card numbers are still the best we've got for transacting digitally and health records are 10 times more valuable on the black market. And despite efforts from the infosec community at large, cybercrime continues to increase in frequency and severity. The more important and difficult question is not why, but how—that is, how can companies not just survive, but thrive in a landscape of digital threats?
With the rate of data breaches increasing along with the complexity of modern IT infrastructures, the cyber insurance industry has been experiencing significant growing pains. Cyber risk determination had historically been done with employee surveys or contextual information about industries at large. Without reliable data on an organization’s actual working state, many insurers came to realize there was no way to formulate a fair and accurate cyber insurance policy, especially for more complex and ever-changing IT environments.
From day one at UpGuard, we have been all about visibility. Before you can automate, validate desired changes, or detect unwanted ones, you must first know what your infrastructure looks like; you must have a starting point. We take the same approach to assessing cyber risk.
For as much as "cyber risk" sounds like a 1990s board game involving robots, cyber risk is actually serious business—in fact, it is continually becoming more important as organizations old and new find themselves relying on a variety of connected technologies and services. And as we enter 2016, the risk of data breaches in particular threatens to hamper business innovation. So what is cyber risk, and what can be done about it?
In what is being described as a landmark case, Nevada-based casino operator Affinity Gaming is suing cybersecurity firm Trustwave for inadequately investigating and containing a 2014 data breach. The lawsuit not only marks the first time a security firm is sued over post-breach remediation efforts—it also highlights the complexities around managing cyber risk for high risk organizations in today's threat landscape.
As the saying goes, there are two certainties in life: death and taxes. As we all look ahead to 2016, it’s clear that a third certainty has entered the mix: breaches.
Call it an experiment gone wrong: a bug in a test feature of the OpenSSH client was found to be highly vulnerable to exploitation today, potentially leaking cryptographic keys to malicious attackers. First discovered and announced by the Qualys Security Team, the vulnerability affects OpenSSH versions 5.4 through 7.1. Here's what you need to know about the bug, including remediation tips.
One of our main objectives is to explain the costs of unplanned outages and help you prevent them from ever occurring in the first place. It's never merely time and money lost—customer trust and your reputation take hits, too. We've written many articles about it and work with companies on improving their service reliability every day.
Yes, it's that time of the year again. Time for global electronics vendors and eager enthusiasts from far and wide to converge at the world's largest annual consumer electronics/technology tradeshow. CES 2016 is in full swing, and IoT innovations have unsurprisingly taken center stage once again. Of course, who can forget the debut of Samsung's "Smart" Fridge at last year's show, followed by the publicized hacking of the device soon thereafter. Judging by this year's exhibitor turnout, consumers can expect to see more hacked IoT devices making headlines in 2016. The following are the top 7 hackable IoT devices to watch out for at CES this year.
The election year is officially underway, but for non-voters and the apathetic—another reason not to register to vote has surfaced: on December 20th, 2015, a security researcher discovered a publicly exposed database of 191 million voter registrant records—names, addresses, dates of birth, phone numbers, party affiliations, state voter IDs, and more—posted online and freely accessible.
2015 may have come and gone, but the effects of last year's data breaches are far-reaching, for millions of consumers and internet users as well as for the companies and organizations whose systems were breached. Such events are no less devastating in terms of brand damage, and 2016 will undoubtedly bring a heightened collective security awareness among organizations and consumers alike.
The figures are staggering: 21.5 million records containing social security numbers, names, places of birth, addresses, fingerprints, and other highly sensitive personal data—stolen by cyber attackers.
It's been barely a month since the VTech data breach resulted in the theft of over 6.4 million children's records, and yet another massive compromise affecting kids' data privacy is upon us—this time involving venerable children's toy and accessory brand Sanrio (of Hello Kitty fame). The data leak resulted in the exposure of details from more than 3 million user accounts: first/last names, birth dates, genders, countries, and email addresses, all openly available to the public. With children becoming prime targets for cyber criminals seeking low hanging fruit, companies that deal with and manage minors' data are increasingly under pressure to bolster their security controls and practices.
Last week was a busy one for leading network and security appliance manufacturers FireEye and Juniper Networks. Critical flaws were discovered in hardware products from both vendors, bringing the distressing but unavoidable question to the forefront once again: what recourse is there when the very security mechanisms in place to protect our data assets are themselves highly flawed?
As you may recall, early last month HP completed its division into two parts: an enterprise focused products/services entity—HP Enterprise (HPE)—and a personal computing/printing firm known as HP, Inc. CEO Meg Whitman gave a nod to DevOps-enabled organizations such as Vimeo and Uber at the initial announcement of the split half a year ago at HP’s Discover conference, presumably setting the course for a newly DevOps-focused HPE in helping companies scale ideas to valuation. How does an IT giant go about transforming itself from an aged enterprise monolith to an agile, open, service-oriented solutions provider for today's business IT environments?
There can be absolutely no question anymore that DevOps isn't just a fad—it's here to stay, it's a big deal, and it's coming to the enterprise. Speakers from relatively new companies like SurveyMonkey and Docker took the stage at the 2015 DevOps Enterprise Summit in San Francisco alongside old standards like IBM and General Electric to prove that the transition to a DevOps culture in established enterprises is not only possible, but probably inevitable.
What's the difference? The former offers no legal recourse, at least for now. Just in case you've been de-sensitized by the recent ongoing barrage of security compromises, the latest data breach involving electronics and educational toy manufacturer VTech is sure to instill new fear in the hearts of parental consumers, putting at stake the one thing they arguably hold nearest and dearest: the safety of their children.
Methodologies and frameworks may come and go, but at the end of the day—tools are what make the IT world go 'round. DevOps is no exception: as the term/practice/movement/[insert-your-descriptor-here] rounds its 6th year since entering public IT vernacular, a bounty of so-called DevOps tools have emerged for bridging development and operations, ostensibly to maximize collaborative efficiencies in the IT and service delivery lifecycle. Consequently, a common issue these days is not a dearth of competent tools, but how to integrate available tooling into one cohesive toolchain.
Polylithic, vendor-neutral, platform agnostic. Microsoft may not exactly come to mind when hearing these descriptors, but it will soon enough—if recent developments are any indication. And despite the software behemoth's DevOps zeitgeist purveyance as of late, open source initiatives have always been alive and well inside Redmond’s hallowed walls.
At the start of the year, the FBI issued an alert warning internet users about the rising threat of ransomware, detailing its dramatic increase in both frequency and sophistication. Looks like the feds were on point: as it stands, 2015 has turned out to be a record year for data hostage-taking. So what can be done to defend oneself against this new insidious threat to data sovereignty?
There's a classic line (one out of many) in the movie Casino by DeNiro's character Ace Rothstein: "Since the players are looking to beat the casino, the dealers are watching the players. The box men are watching the dealers. The floor men are watching the box men. The pit bosses are watching the floor men. The shift bosses are watching the pit bosses. The casino manager is watching the shift bosses. I'm watching the casino manager. And the eye-in-the-sky is watching us all.”
By now, you've probably heard of software-defined networking (SDN): the emerging IT paradigm that abstracts networking hardware into programmable components for unprecedented data center agility and flexibility. In the same vein, parallel infosec developments currently underway are transforming rigid and complex physical security architectures into highly-adaptable, easily-managed, and ubiquitous mechanisms for IT security. This is software-defined security (SDSec)—a new model of infosec that just might save us from digital armageddon.
Advertising-based revenue models may be a standard facet of today's internet businesses, but firms peddling free/freemium services are still on the hook for providing strong information security to their user bases. In fact, they arguably have an even greater responsibility to protect user data than paid-for services. So how do events like yesterday's massive data breach involving free web hosting provider 000webhost transpire? In a word, negligence. Gross negligence, to be precise.
UpGuard's platform for integrity monitoring can exorcise your vulnerability demons automatically and painlessly. Try it on us this Halloween—no money, crucifixes, holy water, wooden stakes or garlic cloves required.
The Network Time Protocol (NTP) has been seeing quite a bit of publicity this year, starting with the NTP Leap Second Bug in June promising—but greatly underdelivering—digital calamity of Y2K proportions. Ultimately, the fallout resulted in little more than sporadic Twitter interruptions, but last week newly discovered critical vulnerabilities in the timeworn clock synchronization protocol have increased the urgency of recent NTP-hardening projects like NTPSec.
It's practically a national tradition that Americans collectively spend about one year out of every four obsessing over the group of people who are in the running for a job which is undoubtedly awful to actually have. Every part of their campaign is put under heavy scrutiny—their clothes, their hair, their past, their associations—and today, their websites. Let's examine how candidates are faring online using data from tools such as BuiltWith, Alexa, Google and Twitter.
Known vulnerability assessment—evaluating a machine's state for the presence of files, packages, configuration settings, and other items known to be exploitable—is a solved problem. There are nationally maintained databases of vulnerabilities and freely available repositories of tests for their presence. Search for "free vulnerability scanner" and you'll see plenty of options. So why are breaches due to known vulnerabilities still so common? Why, according to the Verizon Data Breach Investigations Report, were 99.9% of the vulnerabilities exploited in data breaches last year over a year old?
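The mechanics described above—matching what's installed against a feed of known-bad versions—can be sketched in a few lines. All package names, version numbers, and the advisory ID below are made up purely for illustration:

```python
# Hypothetical inventory of installed packages and their versions.
installed = {"examplelib": "1.2.0", "othertool": "4.1.3"}

# Hypothetical vulnerability feed: package -> (first fixed version, advisory).
advisories = {"examplelib": ("1.2.5", "CVE-0000-0001")}

def parse(version):
    """Turn '1.2.0' into a comparable tuple (1, 2, 0)."""
    return tuple(int(part) for part in version.split("."))

# Flag any installed package older than the first fixed version.
findings = [
    (name, advisory_id)
    for name, (fixed_in, advisory_id) in advisories.items()
    if name in installed and parse(installed[name]) < parse(fixed_in)
]
print(findings)  # [('examplelib', 'CVE-0000-0001')]
```

The detection itself is this simple; the unsolved part is organizational—actually inventorying every machine and acting on the findings before an attacker does.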
Technology conference season is in full swing, with so many events going on that even large ones like PuppetConf and Amazon Re:Invent have been forced to overlap. While part of the UpGuard team traveled to Las Vegas, two of us stayed in San Francisco for a different style of conference. Far from the madding crowds of general interest vendor-backed extravaganzas, we presented at FinDEVr, a conference with a few hundred people and a sharp focus: improving the technology of financial services.
UpGuard's core functionality solves a really basic problem—how is everything configured, and is it all the same across like nodes?—by scanning configuration state and visualizing anomalies. We're pretty happy with how we've solved that problem, so we've started expanding to other fundamental problems that deserve elegant solutions. One of those is vulnerability management. Sure, there are ways to detect vulnerabilities today, but they suck to use and are overpriced. Since we have the core architecture in place to scan and evaluate machine state, testing for vulnerabilities is a natural addition.
Though the widely publicized failure of the ObamaCare website (a.k.a Healthcare.gov) back in October of 2013 has all but faded from memory, the public sector’s persistent lag in technological innovation coupled with recent calamitous data breaches means there is no shortage of press fodder for critics. What will it take for the U.S. government to transcend its current dearth of agility and innovation?
The banking and finance sector has been hit particularly hard by cyber attackers this year—the month so far has seen disclosures from Scottrade, E-Trade, and Dow Jones regarding customer data breaches. It’s become readily apparent that industries dealing in the world’s most sensitive and critical data are poorly poised to defend against the rising threat of cyber crime.
Researchers at Trend Micro have discovered a new zero-day vulnerability in the much-maligned Adobe Flash Player that leaves users vulnerable to remote attacks. The exploit code is being used by the politically-motivated cyberespionage group Pawn Storm in a widespread spear phishing campaign targeted at various government entities. Adobe has yet to patch this vulnerability and will likely issue an emergency fix in the next couple of days. Here's what can be done in the interim to protect yourself.
By now, news of the Experian/T-Mobile hack has traveled far and wide, stirring up public ire and prompting demands for a broader investigation around the data breach. And while the event is just one of many high profile compromises to make headlines lately, it stands out from the rest for a number of reasons. How does the rising tide of cyber threats impact consumers in a world that revolves so heavily around credit?
Microsoft announced on Tuesday that a serious remote code execution flaw in Internet Explorer could allow remote attackers to gain access to Windows systems. Unfortunately, no versions of Windows are spared from this critical flaw, and users are highly recommended to patch their systems immediately to avoid being exploited.
Frequent fliers and international travelers are well familiar with these seatback devices (i.e., in-flight entertainment consoles) that serve as the only connection to the outside world while cruising at 30,000 feet. Soon, however, wifi on commercial flights will be generally available, rendering these devices obsolete—at least to the average laptop-toting flyer. This raises a series of concerns around their future obsolescence and resulting security gaps, as well as the potentially grave consequences of compromised wifi networks on planes.
The insurance industry has been consistently targeted for cyber attacks as of late, for good reason: sensitive data is at the heart of every process—from handling health insurance claims to archiving medical histories. And because medical records are worth ten times more than credit card information on the black market, firms handling said data are required to take extra precautions in bolstering information security. However, every once in a while hackers are granted freebies—as was the case recently with Systema Software, a small insurance claims management solution provider.
We've just updated the architecture of our Policies feature to optimize it for scale and usability. Once you've scanned your first node, creating policies to validate desired state is the next step.
UpGuard's "three waves" methodology helps businesses achieve digital maturity through a three-step process: gain visibility, establish test-driven infrastructure, and then automate what you can also validate. In our last release we focused on improving visibility with an improved data visualization, a search engine, and group differencing. Now we've revisited our testing platform to make both incremental improvements and fundamental changes.
Done wrong, as they often are, company values are bullshit. They are bullshit in the sense Harry Frankfurt defines in On Bullshit: empty assertions designed only to satisfy some tactical need, worse even than lies in their distance from the truth. "When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says."
Integration capabilities these days serve as a litmus test for a software solution’s longevity: the degree to which it can play well with others ultimately determines how much long-term value can be realized from the platform. Monolithic solutions are falling by the wayside as enterprise complexity—both from a business and IT infrastructure perspective—requires an ecosystem of complementary tools to effectively manage today’s environments.
Though still a relatively new player on the market, group messaging upstart Slack has steadily expanded its footprint into the business and enterprise arena with its polished, streamlined offering for team collaboration. For the uninitiated, Slack is essentially a tool for collaborating amongst teams—chat rooms on steroids, if you will. And like UpGuard, Slack’s integration capabilities are among its most lauded features. When used in conjunction with each other, the two together can give organizations a highly effective feedback loop for staying on top of system/configuration changes and vulnerabilities.
Technology professionals walk a perpetual tight rope between innovation and security—new computing paradigms emerge and IT security scrambles behind to catch up. Nowhere is this more evident than in cloud computing and the rising frequency of data breaches targeting cloud infrastructures. And as computing enters another transitional epoch—namely the age of the Internet of Things (IoT)—similar challenges are emerging, but with much more at stake this time around.
A rising concern amongst IT professionals is the degree to which security vendors and products are themselves susceptible to compromises. This past weekend critical flaws were discovered in the products of not one, but two leading security vendors: FireEye and Kaspersky Labs. Because all systems are exploitable—even security products—a layered approach to security is crucial for maintaining a strong security posture in today’s cyber landscape. Enterprises heavily reliant on a single monolithic solution are best advised to diversify their security strategies to combat ongoing threats.
UpGuard is built to answer the fundamental questions of configuration management: how are my systems configured, are they configured correctly, what's changed since yesterday, what's for lunch—the stuff you absolutely need to know. In its first release, UpGuard satisfied the first three by scanning and recording configuration state, continuously testing with policies, and giving users the ability to difference configuration state over time or between nodes. But one thing was missing: the ability to difference a group of nodes all at one time.
For those still holding out for a better alternative to SSL, it’s time to give up the ghost. Though implementations like OpenSSL have seen many a vulnerability as of late, the protocol remains the best ubiquitous technology we have for end-to-end encryption. And with Google’s announcement last year regarding SSL’s impact on a website’s search rankings, the question stands: why are so many organizations still holding out on implementing SSL site-wide?
From rudimentary topologies to multi-cloud deployments, UpGuard was designed to provide end-to-end visibility for all types of infrastructures. Our platform gives organizations unprecedented macro and micro-level visibility in even the most complex and heterogeneous IT environments. And now—with UpGuard’s powerful new Search feature—identifying and locating items of interest or concern is as easy as typing text into a search box.
More than ever, UpGuard provides the ability to know how your environments are changing and to identify the deviations that increase your risk for failed change, outages, and security incidents. Here we quickly cover how UpGuard addresses the needs that every IT organization has through visualizations that allow you to start solving your problems today.
In a news flash buried beneath a slew of other notable security news items, UCLA Health revealed last week it was the victim of a massive data breach that left 4.5 million patient records compromised. Like previous attacks on Anthem and Premera Blue Cross, the intrusion gave hackers access to highly sensitive information: patient names, addresses, dates of birth, social security numbers, medical conditions, and more. And while matters around healthcare IT have taken center stage as of late, the ineffective security at leading institutions of higher education and research is equally distressing.
For those of you harboring secrets behind a website paywall, a word of warning: your skeletons are now easy targets for cyber criminals and nefarious third parties around the globe. The recent data breach and compromise of 3.5 million Ashley Madison user accounts may turn out to be the largest case of broad-scale extortion the world has ever seen, but for many, the outcome is hardly surprising.
Oracle released a critical patch on Tuesday to fix a whopping 193 new security vulnerabilities across its line of database solutions and products. Included in the update are fixes to 25 vulnerabilities in the Java platform alone, including a new high-risk, zero-day vulnerability already used in several high-profile, yet-to-be publicized attacks.
The OpenSSL Project Team announced a high severity bug in their open source implementation of SSL today that could allow the bypassing of checks on untrusted certificates (read: man-in-the-middle attacks). Find out which versions of OpenSSL are impacted, and what you need to patch this critical vulnerability.
For those of you planning on enjoying the sunset on June 30, 2015—an extra second of bliss awaits, compliments of the Earth’s inconsistent wobble. However, if Y2K sent you running for the hills, start packing again. Analysts predict technological fallout ranging from undeliverable tweets to outright digital armageddon, but for faithful IT folks with more grounded concerns like SLAs and business continuity, keeping critical systems up and running trumps all else. Fortunately, resolving potential issues related to the Leap Second Bug is a fairly straightforward matter—as long as you know what to look for and where to find it.
Full stack development is all the rage these days, and for good reason: developers with both front-end web development skills and back-end/server coding prowess clearly offer substantially more value to their respective organizations. The ability to traverse the entire stack competently also makes interacting and cooperating with operations and security an easier affair—a key tenet of DevOps culture.
Networking giant Cisco recently released its Annual Security Report highlighting trends in data breaches and threats from the previous year, and its findings—while similar to other recent reports (e.g., Verizon DBIR, Trend Micro Security Roundup)—offer some unique insights regarding the current threat landscape. No stranger to IT security, Cisco details in its report shifting patterns in cyberattack methods, emerging vulnerabilities, and best practices on how to mitigate future threats.
Sports is big business, and where money and competition collide—laws will be broken. This aptly describes the latest hack involving the St. Louis Cardinals and Houston Astros, though admittedly—it sounds more like a teaser for a Hollywood blockbuster. Corporate espionage in sports has thus far been a nascent phenomenon but will soon become commonplace as intrusion methods grow in sophistication and data moves into the cloud.
The short answer: it’s not. This was certainly the case for Kaspersky Lab, which announced yesterday that its corporate networks were hacked using a sophisticated advanced persistent threat (APT) dubbed Duqu 2.0. Though the word “sophisticated” is used rather liberally these days when describing data breaches, this new threat is by all accounts the most advanced of its kind.
The question is indeed a contentious one, never failing to incite heated arguments from all camps. Many ways exist to cut the cake in this regard—WhiteHat Security took a stab at it in a recent edition of its Website Security Statistics Report, where it analyzed statistics around web programming languages and their comparative strengths in security.
When it comes to IT security, how do you roll? Many tools exist, but the fact is that in most cases, to do it right, you have to roll your own. This is especially true in today's environments, where infrastructures can vary widely in composition from organization to organization. The truth is that factors such as degree of DevOps and Agile adoption, skill set of IT staff, corporate culture, and even line of business come into play when crafting a security solution for an organization. How well these tools align with the organization ultimately dictates the success or failure of a company’s security architecture. And when existing tools don’t fit or don’t work well, sometimes the only option is to build them yourself.
Databases—like all IT assets—are subject to drift that can wreak serious havoc across an organization’s infrastructure. Furthermore, the usual suspects are in play when it comes to database drift: manual ad-hoc changes, frequent software updates/patches, and general entropy, among others. Undetected malicious activity and attempts to compromise database security are also growing causes of database configuration drift. Monitoring for these unexpected changes should therefore be a critical component of any information-driven organization’s configuration management (CM) activities. To this end, UpGuard is happy to announce that support for database node types is now available.
Home Depot. Target. Neiman Marcus. Albertsons. Michaels. Most Americans have shopped at one of these national chains recently. If you’re one of them, your credit card information may already be on the black market. And if you’re a retailer using a POS system, proposed legislation like the Consumer Privacy Protection Act may hold you financially accountable in the event of a data breach. Here’s the skinny on RAM scraping, and what can be done to prevent it.
On March 18, 2015, system administrators and developers received ominous news: two high severity vulnerabilities in OpenSSL would be announced the next day. Since Heartbleed, OpenSSL had been on a bad streak, and it looked like things were only going to get worse. Operations, development, and security teams braced for impact... and then it wasn't really that bad.
Every year, Verizon compiles data from a list of prominent contributors for its annual report highlighting trends and statistics around data breaches and intrusions from the past year. The 70-page Data Breach Investigations Report (DBIR) covers a myriad of data points related to victim demographics, breach trends, attack types, and more. Reviewing these shifting security trends can give indications as to how well-postured one’s organization is against future threats. And just in case you’ve got your hands full patching server vulnerabilities, we’ve done the legwork of expanding on a few critical key points from the report.
Today, a new vulnerability called VENOM was announced in CVE-2015-3456. It stands for “Virtualized Environment Neglected Operations Manipulation” which sounds, frankly, like an indictment of anyone aloof enough to let it sneak up on them. And wading through other blog posts on the subject—with their snake-related clipart and all—is like looking through the first few pages of the book when you visit a tattoo shop. Here’s the gist from its discoverers at CrowdStrike:
The Ponemon Institute just released some unsurprisingly bleak findings in its annual study on healthcare data privacy/security, including data showing deliberate criminal attacks now accounting for most medical data breaches. The report goes on to illustrate how the healthcare industry, sitting on a treasure trove of valuable data, is ill-equipped to counter these attacks. Perhaps forward-thinking enterprise healthcare leaders should start considering DevSecOps as a viable strategy for surviving the perils of the information age.
Technology giant Lenovo has come under heavy criticism again for subjecting users to undue security risks, this time in the form of three vulnerabilities discovered by researchers at security firm IOActive. Flaws in Lenovo's System Update service, a feature that enables users to download updated drivers, software, and security patches from Lenovo, enable hackers to surreptitiously slip malware onto users' laptops and systems through a man-in-the-middle attack. Lenovo has since issued a patch for these vulnerabilities, but it’s doubtful the PC giant will regain consumer credibility any time soon.
Yesterday, open source content management system (CMS) WordPress made headlines with the announcement of yet another critical zero day vulnerability. The newly discovered flaw is markedly different from other WordPress vulnerabilities surfacing as of late: in this case, the problem exists in WordPress’ core engine and codebase, rather than 3rd party plugins and extensions. WordPress.org was quick to release a patch to fix the vulnerability and has since advised users to upgrade to WordPress 4.2.1, the latest version of the CMS.
In a widely publicized report released last week titled "FAA Needs a More Comprehensive Approach to Address Cybersecurity As Agency Transitions to NextGen," the US Government Accountability Office (GAO) details the potential vulnerabilities and dangers of offering in-flight wifi services during air transit. By essentially granting customers IP networking capabilities for their devices, airlines may be opening up their avionics systems for attacks:
Whenever there's a lot to lose, UpGuard is the solution to ensure correct configuration state. Often this means working with enterprises in banking, transportation, and ecommerce, but the Internet of Things introduces risks to the most mission-critical system of them all: your home.
The fate of CSO John in The Phoenix Project is a good parable for illustrating the dynamic and often conflicted relationship between Security and IT Operations. Security can either become a separate, obscure, and increasingly irrelevant group that everyone else resents–sounds pretty good, huh?–or it can be integrated into the broader framework of the development cycle. John goes through a mental breakdown before finally understanding how to adapt and survive, but it doesn't have to be that hard.
As a group of concepts, DevOps has converged on several prominent themes including continuous software delivery, automation, and configuration management (CM). These integral pieces often form the pillars of an organization’s DevOps efforts, even as other bigger pieces like overarching best practices and guidelines are still being tried and tested. Being that DevOps is a relatively new paradigm - movement - methodology - [insert your own label here], standards around it have yet to be codified and set in stone. Organizations are left to identify tools and approaches most suitable for their use cases, and will either swear by or disparage them depending on their level of success.
If you're one of the unfortunate ones who woke up to a frantic text from their boss this morning, there's some small consolation: today's OpenSSL vulnerabilities probably aren't as horrific as Heartbleed! Hooray, great job everyone! The bad news is that you still have to patch your environment, and before you can even do that—do you even know what you've got? There's a kind of configuration "fog of war" over IT that's been a fact of life for as long as IT has been around, especially in established environments. Sure, you could manually dig into each machine and run openssl version, or spend the afternoon scripting a solution if you're fancy, but that amount of work will only get you through today. You need to make room in your tool chest for a universal configuration scanner and system of record.
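If you do go the scripting route, the version-triage logic is simple enough to sketch. Here's a minimal Python example that parses `openssl version` output collected from each host and flags anything older than a given fix release; the version numbers used below are illustrative placeholders, not the actual advisory's.

```python
import re

def parse_openssl_version(output):
    """Extract the version token from `openssl version` output,
    e.g. 'OpenSSL 1.0.1k 8 Jan 2015' -> '1.0.1k'."""
    m = re.search(r"OpenSSL\s+(\d+\.\d+\.\d+[a-z]*)", output)
    return m.group(1) if m else None

def version_key(v):
    """Turn '1.0.1k' into a sortable tuple: (1, 0, 1, 11)."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)([a-z]*)", v)
    major, minor, fix, letter = m.groups()
    # Treat the letter suffix as a patch counter ('' = 0, 'a' = 1, ...).
    patch = ord(letter) - ord("a") + 1 if letter else 0
    return (int(major), int(minor), int(fix), patch)

def needs_patch(installed, fixed):
    """True if the installed build predates the fixed release."""
    return version_key(installed) < version_key(fixed)
```

Plain string comparison fails on OpenSSL's pre-3.0 letter suffixes ("1.0.1k" vs "1.0.2a"), which is why the sketch converts each version into a numeric tuple first.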
Sarbanes-Oxley (SOX) compliance—it’s like checking for holes in your favorite pair of socks, but with consequences beyond public embarrassment. For publicly traded companies, the ordeal is a bit like income tax preparation for the rest of us: a painful, time-consuming evil that—if not carried out judiciously—may result in penalties and fines. Throw in an additional bonus of prison time for good measure, if you’re a C-level executive and discrepancies are found on your watch. Yes, the SEC is serious about SOX compliance, and you should be, too—especially if you’re in IT.
This week, Apple’s App Store and iTunes Store suffered a downtime of about 10 hours. For the better part of the day, customers were unable to access the stores, purchase music or apps, or make payments using the Apple Pay payment system. The problem has been attributed to “a configuration blunder” of its DNS setup.
Audits are one of life’s greatest pleasures, right up there with root canals and childbirth. Firms love them, too; alongside tax audits, financial audits, records audits, and compliance audits make life splendid for businesses. Unfortunately, compliance is an unwieldy but necessary evil. That is, unless you’re America’s 2nd biggest health insurer.
We rewrote the UpGuard agent as a connection manager to reap the benefits of agentless monitoring. Why get rid of agents? Because agents must be updated. They are like a free puppy: it's easy to take one home, but you have to feed it, take it to the vet, and clean up after it for years afterward. The new connection manager allows for an agentless architecture while keeping all SSH/WinRM activity behind your firewall. It's fast, light, easy to maintain, and secure.
The Samba team has announced a vulnerability in Samba, the widely used open source implementation of the SMB/CIFS protocol for Windows/*nix interoperability. The vulnerability exists in versions 3.5.0 to 4.2.0rc4 and allows malicious clients to execute code on the host via a netlogon packet.
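As a rough triage aid, the affected range can be checked with a few lines of Python. This is a sketch only: distribution packages frequently backport fixes, so a version string inside the range doesn't prove a host is exposed.

```python
import re

def samba_version_key(v):
    """Sort key for Samba version strings; release candidates ('rc')
    sort before the corresponding final release."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)(?:rc(\d+))?$", v)
    major, minor, patch, rc = m.groups()
    # A missing rc component means a final release, which sorts after any rc.
    rc_rank = int(rc) if rc is not None else float("inf")
    return (int(major), int(minor), int(patch), rc_rank)

def is_vulnerable(version, low="3.5.0", high="4.2.0rc4"):
    """True if `version` falls inside the reported affected range."""
    return samba_version_key(low) <= samba_version_key(version) <= samba_version_key(high)
```

Release candidates complicate the comparison: 4.2.0rc4 precedes the fixed 4.2.0 final, which is why the sketch ranks a missing rc component after any rc number.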
We know you're sick of updating OpenSSL so we'll keep this short. There is a new SSL vulnerability named FREAK with a published proof of concept. FREAK affects a significant portion of websites, including big names like American Express and the NSA. Like POODLE, FREAK takes advantage of support for legacy cryptographic protocols.
In Part 1 of this article, we presented an overview of Amazon AWS and UpGuard, and discussed how the two marry the best in cloud computing and DevOps. We also learned how UpGuard is not just the premier solution for configuration monitoring, control and automation of AWS offerings like EC2 and S3, but can also work with any number of RESTful services. But enough waxing philosophical—time to put theory into action. And what better way than to follow a fictional organization as it sets up UpGuard monitoring for its AWS cloud infrastructure?
It's not pleasant to think about, but the fact is that when we go to work we are expected to do things. But what are the things that need doing? If we can answer that question without hours of meetings or dozens of emails we can finish our work and do...other things. UpGuard's new Tasks feature provides a lightweight project management system designed especially to maintain quality in a rapidly changing environment.
In July of 2014 Jon Hendren, also known as @fart, began a journey to become a DevOps thought leader. Using his audience of 70k+ followers on Twitter, he spread a simple message: Jon Hendren is a DevOps thought leader.
Over the years, Amazon has become the poster child for all things cloud-related, and for good reason: as one of the first vendors to embrace the cloud computing paradigm, it was the first to offer widely accessible commercial cloud infrastructure services when it launched EC2 and S3 as part of AWS back in 2006. And now, almost a decade later, the tech giant continues to dominate with a 27% share of the cloud services market. It's therefore not surprising that for many, Amazon comes to mind first when thinking of cloud computing.
When we set out to create a cloud-based tool for configuration monitoring, we used the tools we knew and wrote UpGuard using JRuby. For our application, JRuby had many good qualities: getting started only required a one line install, the agent only needed to talk out on port 443, and it was platform agnostic. Using JRuby we demonstrated the value of system visibility, attracted our first cohort of customers, and raised the funds to expand UpGuard. Now we're not only scrapping that agent, we're moving away from agent-based architecture altogether. Here's why.
In a recent episode of the Enterprise Initiatives podcast, our own debonair cofounder Alan Sharp-Paul sat down with host Mike Kavis to talk DevOps, and especially one particularly memorable blog post in which Alan advised that it's wise to look before you leap and "don't automate what you don't understand." That point has been known to cause some contention among certain DevOpserati, who often maintain the movement is primarily based on a cultural shift coupled with automation.
This week Qualys announced a vulnerability in certain versions of glibc that is now being called GHOST. The vulnerability allows remote execution of code by calling gethostbyname() and is considered critical. We won't cover what others have already said: you can read the original Qualys post here, a summary from ZDNet here, and advice on updating your OS version here. If you aren't sure what version of glibc is used on every one of your Linux machines, read on. We have created a one-click solution for validating the security of all your nodes.
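For a one-off check on a single Linux box, the logic looks something like the following Python sketch. GHOST was introduced in glibc 2.2 and fixed upstream in 2.18, but keep in mind that enterprise distros backport patches, so a version number in the affected range is a prompt to check your vendor's advisory rather than proof of exposure.

```python
import ctypes
import re

def glibc_version():
    """Ask the running C library for its version (Linux/glibc only)."""
    libc = ctypes.CDLL("libc.so.6")
    libc.gnu_get_libc_version.restype = ctypes.c_char_p
    return libc.gnu_get_libc_version().decode()

def ghost_vulnerable(version):
    """GHOST affected glibc from 2.2 up to (but not including) 2.18,
    where the upstream fix landed; earlier 2.x releases predate the bug."""
    major, minor = (int(x) for x in re.match(r"(\d+)\.(\d+)", version).groups())
    return (2, 2) <= (major, minor) < (2, 18)
```

On a Linux host you'd call `ghost_vulnerable(glibc_version())`; splitting the two functions keeps the comparison logic testable anywhere.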
UpGuard was initially designed to solve the problems we faced every day in the world of enterprise IT. Technical debt, documentation rot, and configuration drift consumed untold hours of our lives. UpGuard was designed to make those problems a thing of the past.
As a trusted partner of financial institutions, healthcare providers, retailers, and businesses of all kinds, we take seriously our responsibility to handle your data securely. UpGuard's container-based architecture, compliance testing, encryption at rest and in transit, and bug bounty program bolster our code-level security. But there is always the human factor, and for that we have two-factor authentication.
There's no doubt that in 2015 DevOps is real, and strong, and it is your friend. If you aren't investing in DevOps now, you should be. Ask anyone, or just be quiet while they yell at you, and you'll hear that you need DevOps. We can get behind that to a certain extent. We love the principles of DevOps, we take it seriously in our own development practices at UpGuard, and we design our software to be equally usable by Devs and Ops to solve their shared problems. We've been listening and contributing to the DevOps conversation for a few years. Here's the problem: almost nothing has changed in that time.
Email is a mission-critical application that powers day-to-day business communication and collaboration. It is a vital component of modern business, and being able to send and receive email securely and reliably is of paramount importance. If you were to list the applications whose configuration changes you should track and control, email would be at the top.
We've seen a landslide of vulnerabilities announced in the last few months, from Shellshock to POODLE, and it looks like that trend will only continue. The discovery of a critical vulnerability in Windows SChannel, and the even worse problems introduced with a hasty patch, has added a heap of unplanned work for Windows IT pros. UpGuard provides an easy way to validate that the update has been successfully applied and the registry keys deleted. In addition to giving you validation that patches have been applied now, our SChannel check can be run automatically to protect against regressions.
UpGuard attended the DevOps Enterprise Summit recently, and we had a blast. We talked to people non-stop for three days, gave countless UpGuard demonstrations, caught a few talks, made some new friends, and learned a lot from attendees about the kinds of challenges they face implementing DevOps. (And hey, did you guys try those breakfast burritos they had on day 2? Delicious.)
A vulnerability was recently announced by Google, named POODLE, which targets SSLv3 connections. SSLv3 is an older encryption protocol in the SSL/TLS family. Most modern browsers default to newer versions of TLS instead of SSL, e.g., TLSv1.2.
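On the client side, it's worth confirming that your own tooling refuses SSLv3 outright. In modern Python, for example, a default-configured context already carries the OP_NO_SSLv3 option, which you can verify directly:

```python
import ssl

def sslv3_disabled(context):
    """True if the context's option flags forbid SSLv3 handshakes."""
    return bool(context.options & ssl.OP_NO_SSLv3)

# Default contexts in Python 3.6+ disable SSLv2 and SSLv3 out of the box,
# which takes the protocol POODLE exploits off the table.
ctx = ssl.create_default_context()
```

The same check is useful as a regression guard: if someone loosens a context's options to accommodate a legacy endpoint, an assertion like `sslv3_disabled(ctx)` in your test suite will catch it.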
News about the major bash vulnerability dubbed Shell Shock is reaching far and wide at the moment, and for good reason — its effects have the potential to reach even further than its distant cousin Heartbleed had previously. IT departments have been scrambling not only to patch machines, but to even find affected machines on their own networks. As config monitoring becomes commonplace, however, today's headache will probably be remembered as something that could've been just a simple nuisance. While both OpenSSL (responsible for Heartbleed) and the bash shell (where Shell Shock gets its name) are found in datacenters and businesses in every corner of the world, that's where the similarities end. The mechanisms exploiting the two vulnerabilities are entirely different, despite the tech media continuing to compare the two.
Some people, we won't say who, have taken to poking fun at the idea of thought leadership in DevOps. We'd like to set the record straight: here at UpGuard, the only problem we have with thought leaders is that there aren't enough. Since we believe in continuous improvement, we've taken the first step to addressing this issue. With our elegant "DevOps Thought Leader" shirt anyone can be part of the DevOps intellectual elite.
When you want to win, you don't attack where your opponent is strongest; you hit them where they're weakest. Quarterbacks throw to the receiver covered by an injured corner, bike thieves look for the bike with the weakest chain, and lions drag down the wildebeest at the back of the pack. The larger the surface area, the more likely there is to be variation in the strength of defense, and the larger the difference between the strongest and weakest points.
In theory, DevOps is good for every business. But if there's one thing I've learned from talking to people in the DevOps community, it's that theory doesn't always translate perfectly to reality. Theory is an advertisement; reality is a data set. That's why UpGuard partnered with Microsoft to sponsor a DevOps study from Saugatuck Technology.
There’s no right place to start with DevOps, but there are reasons that different people choose to start. There are also ways of communicating that make it more likely to succeed in your organization. Being aware of the people you are talking to and the processes they work within can make your DevOps experiments more likely to grow into a business-wide culture.
Imagine this — you're rolling out a new version of your web app. Works great in the dev environment, and it's been signed off on in staging, so it gets rolled out to production. Things seem fine, so you call it a night. Then the support requests begin flooding in. Something's broken somewhere, and it's not immediately obvious how. Performance monitor shows the machines are running well, so it can't be that. Ah well, better crack one of those neon-colored energy drinks, it's time to roll back and log into these machines to look through logs and config files for a potential cause. "How could this be happening," you ask, "I mean... these machines are all configured the same, right?"
Today we're proud to show off one of our newest additions to UpGuard: support for your CloudFlare-powered website. As a next-generation CDN (Content Delivery Network), CloudFlare purports to make your site load faster, optimize your content, provide a swathe of ridiculously powerful and easy-to-understand security mechanisms, offer exclusive analytics insights, and even run an app marketplace. To give you an idea of just how big this Cisco competitor has become: as of 2016, CloudFlare delivers over 1 trillion page views per month, has at least half a million customers, and claims to have protected those customers from hundreds of billions of incidents. Adding your CloudFlare site to UpGuard is easy and enables you to discover, track, and control all of your CloudFlare DNS and zone configuration settings, including A, CNAME, MX, and SPF records.
Having just started working for UpGuard as a software engineer, my journey understanding UpGuard and its place in the IT automation ecosystem is just beginning. This puts me in a unique position to write a series of blog posts on getting started with UpGuard from the ground up. Today we'll work through the steps required to connect and scan an Ubuntu Linux server using SSH.
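To give a flavor of what such a scan involves under the hood, here's a hypothetical Python sketch; the username, host, and remote commands are illustrative stand-ins, not UpGuard's actual mechanism. It shells out to `ssh` to gather a few basic facts from an Ubuntu node:

```python
import subprocess

def build_scan_command(user, host, port=22):
    """Assemble a non-interactive ssh invocation that gathers basic
    facts (distro release, kernel, installed package count) from a node."""
    remote = "lsb_release -ds && uname -r && dpkg -l | wc -l"
    return ["ssh", "-p", str(port), f"{user}@{host}", remote]

def scan(user, host, port=22):
    """Run the scan and return its stdout; assumes key-based SSH auth
    is already set up for the target."""
    cmd = build_scan_command(user, host, port)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
```

Splitting `build_scan_command` out of `scan` means the command assembly can be unit-tested without a live host to connect to.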
"Did you really just say 'thought leader'?" Everyone laughed. The open space topic we'd gathered to discuss was "DevOps as an Echo Chamber." The room was full of people who wanted faster, more stable deployments, and none of them were getting help from the DevOps blog-industrial complex.
So a cat walks into a bar. No, that’s not right. He walks into a box. The cat gets bombarded with radiation. It used to be a bar but a lot of people died from the radiation so they turned it into a box. Is the cat dead or alive?
If you’re working with IIS, then you know that preventing configuration drift is as important as it is time-consuming. In the best-case scenario you’re monitoring configs daily to keep development, testing, and deployment running smoothly. In the worst case, well, all-nighters make good war stories but aren’t much fun. A proactive approach is much better: UpGuard automates configuration testing at scale to find out whether your IIS servers are hardened and as expected. As an example of what we do, here are the top five critical configuration problems we see on IIS servers and how we fix them.
It's a topic that comes up frequently for us here at UpGuard. Our customers are always keen to know how they can take control and simplify their configuration management processes. We've all, at some time or another, run into the issue caused by a database migration that didn't complete, a column that mysteriously changed data type, or an old version of a stored proc or view being restored to a new database.
I was perusing through Twitter-land recently and ran across a tweet about a DevOps meetup underway in the Los Angeles area. It went on to note that the first opening question posed to the entire group was: What are the minimum requirements for DevOps? Huh?!
DevOps still lacks a commonly accepted definition, as discussed recently in the blog post The Problem with Defining DevOps. Kind of obvious in retrospect, but this really complicates meaningful usage of the word, particularly when used outside of the DevOps community.
You know what, I am starting to despair of the IT industry, just a little bit. I’ve been working in IT for just over 20 years - I was very lucky to ride the greatest wave we’ve seen, the dawn of the Internet (I worked for the largest corporate ISP in the world, UUNET, from early 1996 to 2001), and I’ve worked in the slowest, most immobile companies you can imagine (Investment Banks). And in the last 5–10 years I’ve seen less and less common sense be applied. And now we have this word that nobody can truly define. And it’s creating even more silliness. People are spending energy, lots of energy, debating this phrase and its definition (the irony of this post is not lost on me).
There's an old idea in Hollywood: if you can't pitch an idea in one sentence, it's too complicated. The term "DevOps" is about 5 years old, and the community still has no consensus on what that word really means, even though it's full of thought leaders who'll claim to be able to tell you.
DevOps is a relatively new concept in comparison to Agile development, so it should come as little surprise that IT enterprises have a myriad of experiences and instances of Agile approaches. And there is no need to throw everything out and start over - both Agile and DevOps are complementary. But what if after careful deliberation inside of your enterprise you've decided to evolve from Agile to DevOps? How can you ensure that you keep all the good things that Agile provided yet apply some of the learnings from the early adopters of DevOps principles? Building a DevOps state of mind requires more than giving developers root, installing a configuration management tool, using a source code repository, and proclaiming “yes, we’re a DevOps shop.” At the end of the day all aspects of the people, process, technology continuums get impacted by DevOps. Here are 5 key steps to work through when implementing DevOps in an IT enterprise where Agile rules:
The rise of DevOps teams is upon us. The most recent State of DevOps survey found that 16% of respondents were part of a DevOps department with 55% of respondents self-identifying as DevOps engineers or systems engineers. Interesting. And if you simply Google ‘DevOps jobs’ you get over 4.5 million hits. So like it or not, this DevOps thing is going mainstream.
Most leading IT enterprises have some form of Agile development in place in their organization. As a result, many organizations, websites, blogs, and companies exist to provide information about and support for Agile development. Here is a list of 10 key online resources to support your Agile journey.
Just as we were in the early days of the cloud a few years ago, we’re now in the early days of DevOps. The DevOps Summit had, by my estimation, 50-100 people talking DevOps this year, and I would imagine this will increase exponentially over the next few years as this topic continues to turn IT on its head. It is the shiny new toy for Enterprise IT! I was thankful to have the opportunity to be there and hear some of the best and brightest. Here are a few of the things I heard.
Puppet Labs just released the 2014 State of DevOps Report. The research team surveyed companies of various sizes from multiple industries, from startups to global firms with over 10,000 employees, with over 9,200 respondents in all. The report shows us that not only is DevOps working within the enterprise, but it is also driving higher employee satisfaction.
Here at UpGuard, we like to look at DevOps through the lenses of Collaboration and Automation. Almost all vendors in the DevOps space focus on the latter. We've written about how this creates Zero Sum DevOps and how these 'pockets of automation' can lead to silos of expertise around specific tools, which is counter to DevOps principles to begin with. Furthermore, we've talked about how there are 3 Reasons IT automation tools suck at collaboration.
DevOps is awesome. Or at least the promises being made with DevOps are: faster deployments….more stability, resiliency & availability….save tons of money through automation….and make IT more relevant to the business. I definitely can understand why there is such tremendous interest in DevOps. Break me off a piece of that!
It goes without saying that automation in the enterprise is critical to keeping up with today’s dynamic business demands. Unfortunately, automation isn't a set-it-and-forget-it process. You need to carefully monitor the environment to know exactly how much to automate and when to adjust for environment changes. To exacerbate the issue, the concept of DevOps is still confusing to many, and some still inappropriately equate DevOps to automation. But that isn’t stopping leading enterprises from creating automation initiatives, spinning up DevOps skunkworks projects, and naming whole teams DevOps for the sake of it.
"The new phone book’s here! The new phone book’s here! Boy, I wish I could get that excited about nothing. Nothing? Are you kidding? Page 73 – Johnson, Navin R.! I’m somebody now! Millions of people look at this book everyday! This is the kind of spontaneous publicity – your name in print – that makes people. I’m in print! Things are going to start happening to me now." - The Jerk
For the past 3 months I've been publishing a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable. As much information as is contained here, the reality remains that teamwork ultimately comes down to practicing a small set of principles over a long period of time. DevOps success is not a matter of mastering subtle, sophisticated theory, but rather embracing common sense with uncommon levels of discipline and persistence.
This is the fifth in a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable.
It is almost summertime, so time to dust off your reading material and cozy up with a good book. Recently I asked the expert panel from our most recent DevOps webcast what number one resource they would recommend to a friend who wanted to brush up on the ins and outs of Enterprise DevOps. And in truth, they had a hard time narrowing it down to just a few. But if you're looking to stock up your bookshelf on all things DevOps, then you can't go wrong with this list of the Top DevOps Reading List.
As it has been said many times: DevOps is not a technical problem, it is a business problem. The struggle for large, entrenched enterprise IT shops shouldn't be underestimated, and the legacy factor has to be dealt with (a.k.a. why fix something that isn't broken?). However, there is mounting evidence to suggest that independent, discrete teams are in fact becoming more common in these large enterprises. While the fully-embedded model (sometimes called NoOps because there is no visible/distinct Ops team) that the unicorns have deployed works for them, a more discrete team to learn how to 'do DevOps' makes a lot of sense for the larger enterprise.
As IT managers and engineers, we can sometimes get so deep in the details of what we do that we struggle to answer the simple questions for our user base and the higher ups. Sure, we can write scripts to automate builds and we can train users on the tools to implement configuration management, but we can also freeze when asked why organizations should have configuration management teams, processes and tools. If this has ever happened to you, remember that you’re not alone.
For those of you who follow us more regularly here at UpGuard, you're preconditioned to know that we have a wicked sense of humor and love to have a good time. This week, ChefConf becomes the center of the DevOps universe, bringing together the who's who of thought leaders and practitioners. We thought it might be fun to play a friendly game of DevOps Buzzword Bingo during the event to entertain those in attendance and keep you on the edge of your seat waiting to fill out your card.
I recently had the opportunity to sit down with Mike Kavis (@madgreek65), VP/Principal Architect at Cloud Technology Partners, in preparation for a webcast we’re hosting next week and asked him a few questions. He was kind enough to entertain my line of questioning and provide some of his thoughts. If you don’t know Mike, you should, but here is what you need to know...
We received a lot of positive feedback regarding our last article on Controlling SQL Configuration Drift so thought it might be a good idea to continue along that same theme of analysis and follow it up with an article about DNS configuration and some simple steps you can take to prevent configuration drift.
This is the fourth in a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable.
Many large enterprises over the last decade made a deliberate shift to an agile development process as a response to the ever-changing demands of the business. One key tenet of the agile development process is to deliver working software in smaller and more frequent increments, as opposed to the “big bang” approach of the waterfall method. This is most evident in the agile goal of having potentially shippable features at the end of each sprint.
If you do not feel you have a good handle on all the ways DevOps can benefit your enterprise and bring positive return on investment, you are not alone. While the concept of DevOps dates back to 2009 (prehistoric times in our world!), the evolution and implementation of the procedures and tools that facilitate its use are still evolving. As has been discussed countless times - DevOps is not something you buy, it is something you do. And in order to 'do DevOps' you need to connect it to your business in a meaningful way to ensure long-term success. But let's pretend for a moment (shouldn't be hard to imagine) that your non-technical resources / upper-level management is holding out on making any changes that bring you closer to the DevOps principles of collaboration, culture and communication. How do you get them to invest in DevOps in your enterprise?
Trying to translate the concept of Configuration Management for those who do not understand its efficacy is like explaining surfing to an Inuit. It is simply not an inherent part of their culture. Without question, the benefits of Configuration Management can be challenging for the uninformed to grasp. One of the best ways to understand the benefits and use cases is to learn from other enterprises' experiences.
Controlling database configuration drift is a tricky subject. It's a topic that comes up frequently for us here at UpGuard and customers are always keen to know how they can go about taking control and simplify their configuration management processes. We've all experienced at some time or another that issue that was the result of a database migration that didn't complete, a column that has mysteriously changed data type or an old version of a stored proc or view being restored to a new database.
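As a rough illustration of the kind of check involved (not UpGuard's actual mechanism), here is a small Python sketch that diffs a live schema snapshot against the expected one; the table names and data types are hypothetical:

```python
def diff_schemas(expected, actual):
    """Return a list of human-readable schema drift findings."""
    findings = []
    for table, columns in expected.items():
        live = actual.get(table)
        if live is None:
            findings.append(f"missing table: {table}")
            continue
        for col, dtype in columns.items():
            live_type = live.get(col)
            if live_type is None:
                findings.append(f"{table}.{col}: column missing")
            elif live_type != dtype:
                findings.append(f"{table}.{col}: expected {dtype}, found {live_type}")
    return findings

# Hypothetical snapshots: one column's data type has mysteriously drifted.
expected = {"orders": {"id": "int", "total": "decimal(10,2)"}}
actual = {"orders": {"id": "int", "total": "float"}}

for finding in diff_schemas(expected, actual):
    print(finding)
```

Snapshotting the schema on a schedule and diffing like this catches the failed migration or restored stored proc before it bites.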
ASP.NET applications get many configuration settings from their web.config or app.config file. Being able to run the same application across multiple environments used to mean keeping control of different copies of the config file to deploy, or, even worse, manually editing the settings after deploying to each new environment. In recent years it has become possible to transform web.config files at deploy time using Visual Studio. No matter which method you use, deploying to a new environment and detecting drifting config settings has always been a problem. UpGuard helps to quickly and easily detect these sorts of problems and makes configuration management a breeze.
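For readers who haven't used them, those deploy-time transformations are driven by an environment-specific XDT transform file. A minimal sketch of what one looks like (the connection string, server name, and setting names here are hypothetical):

```xml
<!-- Web.Release.config: applied over web.config when publishing a Release build. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="Default"
         connectionString="Server=prod-sql;Database=App;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Strip the debug flag for production deployments. -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```

The transform only describes the delta per environment, which is exactly why drift in the settings that actually land on the server still needs to be verified after deployment.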
The UpGuard team is thrilled to announce the addition of Kevin Behr (@kevinbehr) to UpGuard's advisory board. You may know Kevin from the pioneering work he and Gene Kim (@RealGeneKim) have collaborated on over the last decade. Behr has over 25 years of experience in business management, technology and thought leadership as a Chief Information Officer, Chief Technology Officer, and Chief Operations Officer. In addition to his leadership roles in public and private companies, Behr also co-founded the IT Process Institute with Gene Kim, and served as its President for the first five years. Behr is the author of five renowned books, most notably as co-author of The Visible Ops Handbook and The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win.
I have a confession to make. My first job in IT wasn't as a rails developer in a hot startup. It wasn't managing cloud infrastructure. It didn't involve cool open source projects or cutting edge technology. Quite the opposite in fact. My first job was a graduate trainee analyst programmer at an Australian Funds Manager. What was I trained on? ADABAS NATURAL. Yep, I was a mainframe developer.
This is the third in a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable.
There has been plenty of discussion as of late regarding whether the DevOps movement has left the “enterprise” behind, plus where automation and the cloud fit in DevOps. There is more and more evidence that automation creates less collaboration, and signs of a turf war across the chasm of tools that are needed to ‘do DevOps’. In the spirit of trying to address and debunk some of these myths, we asked Kevin Behr, the co-author of The Phoenix Project: A Novel and The Visible Ops Handbook, to join us in a discussion about some of the trends plaguing enterprise IT as they struggle to align legacy IT infrastructure to business goals while becoming more agile.
Anyone who has been following what we're doing at UpGuard knows that we like to keep things simple. With this in mind, we like to look at DevOps through the lenses of Collaboration and Automation. Almost all vendors in this space focus on the latter. Why is this? Well, automation tool vendors do it by definition. In reality the collaboration angle is avoided by vendors because it is hard. If you're looking to the market for assistance in "doing" DevOps then you'll be drowning in offers for help with automation. Help with collaboration? Not so much.
Automation. If you're somewhere on the DevOps spectrum then it's surely good for what ails ya. The answer to all your problems. For many it defines their DevOps journey, its destination representing the promised land of stable environments, consistent builds and silent pagers.
Going from nothing to automation using one of the many tools available can be a daunting task. How can you automate systems when you’re not even 100% sure how they’ve been configured? The documentation is months out of date and the last guy to configure anything on that box has since left the company to ply his trade somewhere that will more fully appreciate his Ops cowboy routine.
One of the easiest ways to build applications programmatically into containers through Docker is to use a Dockerfile. Dockerfiles are the Makefiles of the Docker world. A ton of blog posts and tutorials have sprung up over the last few months about how to set up Docker, or how to set up a LAMP stack and even a LEMP stack in Docker.
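For readers who haven't seen one, a minimal Dockerfile might look like the following; the base image, packages, and paths are illustrative assumptions (a LAMP-style Apache/PHP image on a Debian/Ubuntu base), not taken from any particular tutorial:

```dockerfile
# A minimal sketch of a Dockerfile for an Apache/PHP (LAMP-style) image.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2 php5 && apt-get clean
# Copy the site content from the build context into the web root.
COPY ./site /var/www/html
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

Each instruction produces a layer, so the whole build is repeatable with `docker build -t mysite .` from the directory containing the Dockerfile.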
We've been working with a lot of Windows shops recently and IIS configuration seems to be a big pain point for many enterprises. Other than a brief stint in mainframe purgatory after university, I started life as a .Net developer and these conversations reminded me of my fun with IIS back in the day. In reflecting on this, I realized that the developer/operations interaction around IIS configuration is a near perfect example of the type of conflict that gave birth to the DevOps movement.
I shouldn't have to explain the concept of configuration drift to most of you, but just in case, it is the phenomenon where running servers in an infrastructure become more and more different as time goes on, due to manual ad-hoc changes and updates, and general entropy. If you're more of a visual learner, I strongly encourage watching this video from Sesame Street.
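The idea can be sketched in a few lines of Python: capture each server's settings as key/value pairs, then diff them against a known-good baseline. The server names and settings below are hypothetical:

```python
def drift_report(baseline, servers):
    """Map each server name to the settings that differ from the baseline."""
    report = {}
    for name, config in servers.items():
        diffs = {
            key: (baseline.get(key), config.get(key))
            for key in set(baseline) | set(config)
            if baseline.get(key) != config.get(key)
        }
        if diffs:
            report[name] = diffs
    return report

# Hypothetical baseline and per-server snapshots.
baseline = {"ntp": "pool.ntp.org", "max_conns": "1024"}
servers = {
    "web01": {"ntp": "pool.ntp.org", "max_conns": "1024"},  # in sync
    "web02": {"ntp": "pool.ntp.org", "max_conns": "2048"},  # ad-hoc change
}

print(drift_report(baseline, servers))
```

Run on a schedule, a report like this surfaces the ad-hoc changes and general entropy before they become 2 a.m. incidents.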
DevOps and I sort of have a love/hate relationship. DevOps is near and dear to our heart here at UpGuard and there are plenty of things that I love about it. Love it or hate it, there is little doubt that it is here to stay. I've enjoyed a great deal of success thanks to agile software development and DevOps methods, but here are 10 things I hate about DevOps!
DevOps is a human problem and a leadership problem. Building a DevOps culture requires more than giving developers root, installing a configuration management tool, using a source code repository, and proclaiming “yes, we’re a DevOps shop.” At the end of the day all aspects of the people, process, technology continuums get impacted by DevOps. However, there is little doubt that the people aspect has the most to gain (and the biggest challenges) for anyone who is considering, or already on, the journey to becoming a DevOps ninja.
It is no secret that we here at UpGuard love DevOps and we're not ashamed of it. I know that opinions vary as to what exactly DevOps is or isn't, but the more important part of the movement is whether we as individuals want to push the limits of what we thought was impossible only just a few years ago. We've been 'doing DevOps' for some time and have a cautionary tale to tell as well, but we believe that DevOps can be transformational for IT enterprises and advocate for organizations to activate DevOps in their businesses. I know how we all love lists, so here is my Top 10 Things I Love About DevOps:
I always love going into those meeting rooms covered in different-colored post-it notes, looking like a 3M sales rep threw up everywhere. For the longest time I just considered it one of those strange things R&D did. Then one day I was extremely early for a meeting and actually got to spend some time studying what was cluttered all over the glass wall, and I began to realize there was a definite method to the madness. This Kanban board concept wasn't just for the engineers; it was for everyone to see where work was being performed and its status. I loved the visual nature of it, and the fact that I could get accurate information without reading release notes or technical requirements documents was refreshing.
Update: This is a preserved post detailing new (at the time) UpGuard product features, enhancements, or tutorials. The screenshots below may be out of date and/or make reference to GuardRail or ScriptRock—old names for the same great product. There are also many newer features that will drive you wild. Node Groups: A Node Group is a way of logically grouping Nodes with common functionality. Instead of managing the same set of Policies on each Node, you can now manage one set of Policies on the Node Group that will automatically be applied to any Nodes in the Group. Their use is best highlighted with examples. All of your Linux servers might need to comply with an underlying security policy: group them together using a Node Group called "Linux" and apply your security policy there. Your front-end web servers are identical behind a load balancer: add them to a Node Group called "Front-end Web Server." How you organize them is up to you; they can be as general or specific as you like.
So you do a bit of IT automation. Maybe you throw in some functional testing for that IT automation too. You have monitoring. You have a top notch engineering team. You're doing enough then, right? Nothing could go wrong?
My mom used to tell me 'quality over quantity' back in high school when I was dating girls. Of course that meant that I completely ignored her and would date a girl if she was breathing. What in the hell would you expect an awkward 17 year old boy to do?! I've heard that same sentence used in lots of other ways too: when writing, when speaking, when eating, when working out, and so on. What does that have to do with DevOps? As I continue on my journey through the DevOps movement, it seems to me that we have a bit of a conflict here - the goal is to release at a higher velocity (quantity) with well tested code (quality). Is this really possible? I know that some of the 'high-performers' like Amazon, Etsy, Flickr and Netflix are proving that it can be accomplished, but I keep wondering if slowing down can actually help us deliver more extraordinary things.
We've all been hearing a lot of chatter in the media, in tech-geek circles and everywhere else you go in your daily life about Bitcoin. There was a really interesting OpEd piece in the NY Times yesterday from Marc Andreessen entitled Why Bitcoin Matters that essentially anoints Bitcoin as the greatest invention since the wheel. If you still are not clear as to what Bitcoin is, I would highly suggest watching this video that does an effective job of describing it. So if Bitcoin is the greatest invention of all time, why is it so hard for everyone to understand it, why does it have different definitions depending on who is talking, and why in the hell don't I own any?!
I've been thinking a lot lately about the intersection of DevOps and Information Security. I'm definitely not the first to have considered the implications, but I am undoubtedly a complete cynic when it comes to InfoSec and how it can align itself to the DevOps movement. Why am I a cynic, you may ask? Well, I spent almost 10 years in the security/governance arena interacting with CISOs and their teams, trying to help them 'reduce risk' and 'pass audits', but I've watched countless organizations fail miserably. What is the main reason? Because the business fails to see the value of security and doesn't understand it. Better said: the business invests in what the business understands.
You may have already heard (or experienced) that Dropbox suffered an outage late last week that took the popular file-sharing service offline for 2.5 hours with some services being out the entire weekend. It was recently reported by Dropbox on their blog that they were trying to upgrade some servers' operating systems Friday evening in "a routine maintenance episode" when a buggy script caused some of the updates to be applied to production servers, a move that resulted in the maintenance effort being anything but routine.
Many of the darlings of the technology universe have adopted DevOps as their approach, are pushing boundaries once thought untouchable and realizing tangible business benefits as a result. The 2012 State of DevOps Report states that high-performing DevOps shops can ship code 30x faster with 50% fewer failures.
We all know that DevOps is the glimmer in our executive's eye, the savior that will solve world hunger, and the most important thing to happen since the wheel was invented. But all joking aside, there is little doubt of the business benefits it can bring to organizations big & small. So now what?! You've decided (or been told) that DevOps is critical to your 2014 success, but where do you start and what are the foundational elements you must work through before claiming victory? Here are 4 prerequisites for DevOps success that you can use as your blueprint to making sure you achieve your business objectives.
Michael Davis over at InformationWeek just wrote a compelling article on DevOps and some of the mixed results people are understandably getting. There are a number of very interesting findings that he shares as part of their DevOps survey, including:
Wow! 2013 is over, done, kaput. It's hard to believe. Time flies when you're having fun (or building a business). Now is the time to look back though and reflect on what 2013 meant for DevOps, myself and UpGuard. This post will by no means be exhaustive. It's written through my eyes and based on my experience, and my head has been down working for large chunks of the year. I'll add an additional warning that it will be a little tongue in cheek. I make no apologies for that ;)
Tonight I gave a talk on comparing containers and generating Dockerfiles. Instead of providing the slides, which are pretty lame by themselves, I thought I'd write up the talk in a proper context. UpGuard has a number of use cases; the one highlighted in the talk was migrating the configuration of environments from one location to another. Traditionally we have helped some of our customers scan their configuration state and generate executable tests based on those configuration items, as well as allowing scanned configuration from multiple machines to be compared.
There's a certain something in the air within the DevOps community right now. The movement is, to a certain extent, becoming a victim of its own success. For where there is buzz in tech, there is money. And where there is money, there are recruiters, there is marketing, there are misinformed and over-simplified tech articles and, let's face it, there are carpetbaggers galore.
Whether you've just registered for UpGuard, our cloud-based configuration monitoring platform, or are simply interested in checking out some of the things that are possible, this is a great place to start. Each video gives a quick introduction to a major capability.
I recently attended the 2013 PuppetConf in San Francisco and spent most of the Thursday in what we affectionately call the "neckbeard" session. It was the "Product and Technologies" stream and seemed to be highly tailored to the relative minority of developers at the conference, or at least the people in charge of developing and maintaining the low-level detail contained in Puppet manifests. Those at one with the Puppet DSL. As a developer this seemed like the only stream I would be interested in, seeing as four of the other sessions had sysadmin written all over them and the last one seemed to be targeted at use cases for sales people. In fact, one of the other devs here at UpGuard asked us at the end of the first day if we'd been called sysadmins all day. Thankfully, I hadn't. It is also a common generalization that Puppet is designed for sysadmins, having a model-based way of defining infrastructure, as opposed to the code-based approaches employed by products like Chef. I went into the start of Thursday's talks with this generalization clouding my judgement.
At UpGuard we've got many decades of experience in large enterprises and are very familiar with the sorts of problems that arise in those sorts of environments. Even for those who have lived through it though, it can be hard to explain to people who haven't. That's why we require all our new employees to read The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr and George Spafford. It does a great - and surprisingly entertaining - job of describing these issues. It also explains how the lessons learnt from years of Lean Manufacturing apply directly to IT. We know that no tool is a silver bullet, but if the employees at Parts Unlimited had UpGuard then it may have been an entirely different story. I've chosen some key excerpts from the book so that we could see how things may have been different.
OK, it's Labor Day weekend. I don't suppose any of you want to read about application configuration. Time to bring a bit of culture into matters then. Arts and culture are very important to us here at UpGuard. OK, so that's a stretch. We may not be brogrammers but we have a lot of Australians working here. Art appreciation often only extends as far as stubby holder (koozie) design. Having said that, and contrary to some rumors that are currently doing the rounds, we can read. I'm a bit of a Cormac McCarthy fan myself (insert disclaimer here that I was into his stuff before Oprah tarnished his cool), and my favorite book of his is Blood Meridian. I won't go into too much detail other than to say if you're into epic tales of debauchery you should check it out.
What is Quality Assurance? Well, in time-honoured fashion I shall quote directly from Wikipedia: "Quality assurance (QA) refers to the engineering activities implemented in a quality system so that requirements for a product or service will be fulfilled." What does this mean for DevOps, though? Well, the end product is the software or application being provided, so most people focus on its requirements when talking QA and DevOps.
Information Technology Service Management (ITSM) may not have the sex appeal of Agile or the buzz of DevOps, but it lays a crucial foundation for each within the Enterprise today. So, whether you consider it a necessary evil or the only way to run your IT department, here are a few resources that may come in handy.
When I attended the DevOpsDays event in Mountain View (well, Santa Clara really) a couple of months ago I started writing a blog post on my impressions. I was a bit distracted at the time though after having had a minor twitter spat with a well known DevOps proponent on the first morning. I won't go into any detail here other than to say that it was sparked after I made a comment that I felt "DevOps" vendors need to be doing more to ease the transition for large Enterprises.
There is no doubt that the DevOps movement has gone mainstream. When even IBM and HP are dedicating sites to it there is no longer any question. If we were to place it on the Gartner Hype Cycle even the most devoted proponents would have to admit that it’s rapidly approaching the “Peak of Inflated Expectations”. What does this mean for you as a CIO? Should you steer clear of the movement entirely until things calm down a bit? Not at all. Should you be cautious in your approach to “implementing” DevOps though? Absolutely.
There's a hidden killer lurking below the surface of every Enterprise IT project. No, it's not Kevin, that sysadmin who spends a disturbing amount of time in the bathroom each day. It's not even that 400 page requirements document, although from a conservationist's point of view the PM's insistence on reprinting it every few days can't be doing the world too much good. So what is it? Well, let me give you a clue:
Most Enterprise CMDB offerings are a joke. They've always been a joke. Just another white elephant system sucking time and money out of IT Budgets. What most, if not all, become are simply inventory systems. They're not even good for that half the time.
Whether a user or not, we are all familiar with the popular microblogging service Twitter. With over 200 million users, it's no easy task to maintain its infrastructure, which has been plagued with several outages in recent times, including one this week. A product with a die-hard user base can face severe backlash for even the slightest of outages.
As there's a lot of interest out there in the various IT automation tools on offer I thought I'd do a series of blogs covering getting started on each. In particular I wanted to put them to the test regarding how simple it is to go from zero to "Hello World" *. This way I get to play the truly dumb user (not much of a stretch, I know), which is kinda fun too.
You're never safe in Enterprise IT. Just when you feel you've gotten a handle on the last hot topic you're hit with another. SOA, BPM, Agile, ITIL; You feel like screaming "Enough!" but you know resistance is futile. Gartner have said it's important so you know full well that you'll be asked to "do" it by management.
Designing and building a race car using the typical lifecycle process used within an Enterprise IT department. Sounds like a good idea, no? No. It's a terrible idea, but it's fun to paint a picture of how it may work out to illustrate what goes wrong today in so many Enterprises. For this exercise I'm going to assume that there are four main groups. The design team (analogous to IT Architects), the manufacturing team (development), the safety team (security) and the mechanics (operations). Here is how things may turn out.
After taking a week off, the weekly updates are back! Here's some of the news that interested us over the past week:
Converging IT development and operations into DevOps has come a long way, and yet the two should have grown together like Siamese twins. Developers need sysadmins as much as sysadmins need developers. Collaboration is the way winning software and infrastructure are built. And that's all the market wants: effective systems with which to run businesses. DevOps can claim substantial ground today, thanks to the persistence of players from both sides of the sysadmin-developer divide. While the segment is still evolving, various tools have been developed to help the Devs and the Ops collaborate more effectively.
Here's some of the IT news that caught our attention over the past week:
Conference season continues this week, notably with Opscode's #ChefConf in San Francisco (which is going on as I'm typing this up). Here's the latest from #ChefConf and other IT news that interested us this week.
IT testing automation is an important concern of businesses, and a growing field in which IT professionals are able to make a name for themselves. If you are not already involved in automated IT testing, here are a few of the most important skills to have when holding an automation related position.
With Devopsdays London recently concluded and the Open Networking Summit having just wrapped up, here's some of the IT news that interested us this week.
Here’s some of the news we came across that interested us this week. The OpenDaylight Project – a pretty big development for Software-Defined Networking:
It's been really interesting to watch the dramatic uptick in activity around the automation space the last year or two. I don't need to go into too much detail on the benefits that automation offers here; consistency and scalability are two of the more prominent that come to mind. What has struck me, though, is that it feels like the way that companies are going about it is missing a key step.
There has never been a hotter time to be in financial markets – nanosecond response times, the ability to affect global markets in real time, and lucrative spot deals in dark pools are all the rage. For companies who do business in these times, it is a technical arms race worthy of a Reagan-era analogy.
Those of us who haven't worked in the Enterprise probably don't know a lot about ITIL (Information Technology Infrastructure Library). ITIL may even be a source of amusement for them. C'mon, they would say, how much practical use can you get from a methodology that is defined through a set of books that is actually referred to as a "library"?
In this blog, we're constantly covering and discussing the concept of DevOps. At this point, most folks in departments related to a company's infrastructure (e.g. Developers, System Administrators) have some understanding of this idea. But where do these people learn about this relatively new and young concept?
DevOps is a concept that has materialized fairly recently, yet is already adored by so many people. Obviously, the fact that it bridges the chasm between software development and operations is pretty exciting, but there seems to be something extra that people love. So without throwing around too many corporate buzzwords (besides “DevOps”, of course), what could that extra something something be?
Software-Defined Networking (SDN) has become a hot topic of late, and with good reason. This technology has the potential to dramatically improve the configuration of networking solutions. Traditionally, network intelligence has been housed in a static fashion, focused on individual routers and switches. This is problematic with today's vast and ever-expanding data pool, with central automation of data management quickly becoming the ideal solution. SDN is an answer to this challenge, and a good one.
Many enterprise network workers are now adopting automation technology as a means of completing operational tasks, and of creating a more efficient environment within an IT enterprise. One of the advantages of adopting IT automation is that it helps to deliver optimal IT management, without the need for any significant capital investment.
Configuration testing should be an essential step not only in the overall development process, but also when installing new apps on web and application servers. Without proper testing, apps can often fail or be open to vulnerabilities. Exposure to attack by hackers or viruses can lead to needless expense and excessive time spent correcting these problems. It is not unusual for app developers to overlook the need for configuration testing, because they believe that automated deployment tools like Chef and Puppet (or other systems that test the deployment of their products) will work just fine. They feel that these fully automated processes let them test consistency, reproduce outputs adequately, and determine whether things are working as predicted. This kind of thinking can delay a timely product delivery, produce unnecessary costs, and create additional workloads to address vulnerabilities that surface later in production.
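As a rough illustration, a post-deploy configuration test can be as simple as parsing what actually landed on the box and asserting on it, rather than trusting that the automation ran. The settings below are hypothetical, using Python's standard configparser:

```python
import configparser
import io

# Hypothetical config file contents as deployed to the server.
DEPLOYED_CONFIG = """
[server]
debug = false
port = 8080
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(DEPLOYED_CONFIG))

# Fail loudly if a setting drifted from what this environment requires.
assert parser.get("server", "debug") == "false", "debug must be off in production"
assert parser.getint("server", "port") == 8080, "unexpected port"
print("configuration checks passed")
```

Checks like these verify the outcome of the deployment, which is exactly the gap that trusting the automation tool alone leaves open.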
There are two constants in the world of High Frequency Trading (HFT): massive volumes of data, and the need for programs that process this data and act on it at blisteringly fast speeds. These systems change frequently as the needs of the companies using them change and as the rules and regulations of market organizations and governments change. The potential for market instability is a big concern for both companies and regulatory bodies, and major market incidents caused simply by algorithm errors have put a sharp focus on the quality and performance of HFT software. The DevOps philosophy can provide serious advantages to HFT companies, and this article will take a look at some of the main issues and concerns of the business and summarize how DevOps can help.
One of the best opportunities for networking and keeping up to date on all the latest trends in software development, ITIL best practice implementation, innovative methods of handling automation, new methods of tackling ever-expanding configuration drift, and learning to navigate increasingly tricky compliance issues is attending industry conferences. IT professionals focused on automation will need to read between the lines to find a convention well-suited to their interests, as these events are massive productions and are usually more broadly focused.
OK, so I probably just closed out 100 games of Bulls**t Bingo in the title of this blog post but I'll stand by it. You want actual agility in what you do? You need a safety net. That safety net is automated testing.
We've made some additions to the platform that we're pretty excited about and would like to share: an even easier way to add tests, service/daemon support for the application, and job scheduling for those of you who like to know that your configuration is gold even when you're not watching.
When you apply Chef or Puppet to automate your system architecture, you can provision your systems environment piece by piece and start up applications in a heartbeat. This is, ideally, the pinnacle of configuration management: a mechanism that saves time and is highly repeatable.
OK, so I was supposed to be blogging this weekend but I was bored of blogging so I instead decided to combine two things I'm terrible at, illustration and comedy, and do a comic instead. I deserve to be punished for this so please, flame away :)
Why IT Automation Needs Configuration Testing
While there are many benefits to cloud computing, one of the major difficulties is migrating from in-house servers to a cloud computing platform. Configuration issues can develop when a company lacks the right tools and clear communication.
There is no disputing the fact that cloud computing has led to a number of remarkable changes in the way many companies do business. Cloud-based solutions have been instrumental in streamlining IT functions and other business processes, resulting in considerable savings in both time and money.
Cloud CMDB - Where to Next? Cloud providers and IT shops must engage in unit testing for infrastructure management. A cloud provider is an organization that provides a component of cloud computing to businesses or individuals. The cost is usually based on a per-use model.
The Sinkhole That is Manual Configuration Testing Testing is a crucial part of software development: it involves the execution of a program with the goal of locating errors. Successful tests are able to uncover new errors that can then be corrected before the software is released.
Manually testing environment configurations in enterprise environments with scripts is difficult simply because so many factors are involved. Application, hardware, and device compatibility issues can arise at any point in the implementation, and many are difficult to anticipate in the pre-implementation stage. Worse yet, the larger the network infrastructure, the more time-consuming and complicated the testing and implementation processes become. This is where environment drift and stateless systems come into play.
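At its core, detecting environment drift means comparing the actual state of each node against a baseline and flagging differences. As a minimal sketch (the node states below are hand-written example dicts; in practice they would be collected from the machines themselves):

```python
# Illustrative drift detection between a baseline and another node.
# The attribute names and versions here are hypothetical examples.

def diff_states(baseline: dict, node: dict) -> dict:
    """Return attributes whose values differ, as {key: (baseline, node)}."""
    keys = set(baseline) | set(node)
    return {
        k: (baseline.get(k), node.get(k))
        for k in keys
        if baseline.get(k) != node.get(k)
    }

web1 = {"openssl": "1.1.1k", "ntp_enabled": True}   # baseline node
web2 = {"openssl": "1.1.1g", "ntp_enabled": True}   # patched at a different time

print(diff_states(web1, web2))  # → {'openssl': ('1.1.1k', '1.1.1g')}
```

The hard part in enterprise environments is not the comparison itself but collecting accurate state from thousands of heterogeneous nodes, which is why hand-rolled scripts tend to break down at scale.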
Before delivery to the intended party, a system should be tested to determine whether the requirements set forth in the contract have been met. Configuration acceptance testing is the fundamental means of ensuring that the system will not fall short of its intended purposes. It is an essential part of the testing phase of the Software Development Life Cycle (SDLC), and perhaps the most vital in its category. Examining how the components of the system interact is the surest means of determining its susceptibility to errors and, ultimately, the resistance its implementation will meet. Configuration acceptance testing is pivotal to the SDLC, and as such should be an integral part of any firm's Application Lifecycle Management (ALM) policy. It reveals bugs and inadequacies in the system, aiding error correction and the formulation of a suitable plan of action in the event that undiscovered errors manifest and affect the system after it has been implemented.
So I was stumbling around the web this morning and I found myself in the LinkedIn DevOps group. Browsing around I came across several discussions on "DevOps" tools. Now a lot of companies and projects out there use the DevOps keyword but not many of them would label themselves a "DevOps Tool". For good reason too. It doesn't take much googling to be assured that DevOps, like Agile, is not about tools. DevOps is about principles, methods and practices.
We've been saying it for a while now here at UpGuard but there's something pretty special about Australia's Wollongong when it comes to tech. Talented engineers abound in this not so sleepy New South Wales coastal idyll.
This is a pretty common response we get from people we're explaining our product to. There is logic to it but we don't believe it's necessarily reasonable. To illustrate our viewpoint on this we thought we'd paraphrase a conversation we had with a prospective client recently.
OK. Time to take a deep breath. Time to reflect on what has been a crazy six months and an even crazier week. As you may have heard, we got funded. Funded to the tune of $1.2M, and by a list of investors we wouldn't have dared to dream having on board when we started our journey with Startmate at the beginning of the year. One name in particular has been hard to miss in the coverage we've received and we are truly proud to have Peter Thiel involved through Valar's investment in UpGuard, but one investment did not the round make. Also on board are:
You've used Chef/Puppet to automate your infrastructure: you can provision your virtual environment from scratch and deploy all your applications in minutes. It's magical. You've achieved Configuration Management Nirvana. What you've built is repeatable, saves time, increases efficiency, and removes human error.
Exciting times for us here at UpGuard as we've just launched and are now set up for people to request early access to our platform. We should be live for this purpose in the next couple of weeks so there is not much time to get your name on the list.