Salesforce's recent day-long outage—what many tech journalists have been referring to as "Outage #NA14"—may actually end up costing the firm $20 million, according to financial services firm D.A. Davidson's estimates. The untimely incident occurred just as the company was gearing up to report its Q1 earnings; luckily, $20 million is a drop in the bucket compared to the $1.92 billion in revenue the company posted, Salesforce's best first quarter yet. This may be enough to pacify Wall Street analysts, but can the world's largest business SaaS provider sustain another outage of similar proportions or greater?

Named for the downed database instance North America 14, the outage not only sheds light on the high cost of service interruptions in today's digital economies, it reveals how even the most forward-thinking, innovative CI/CD powerhouses can grind to a halt due to infrastructure flaws. Salesforce provided details regarding the outage's cause on its support site, describing the chain of failures leading to the downtime: on May 9th, a faulty circuit breaker led to power failures at the compute system level, revealing a storage array firmware bug that would ultimately bring the NA14 database instance offline.

Data loss and customer ire ensued, with many taking to social media to vent their frustrations.

Angry customers take to social media to vent their frustrations. Source: @Benioff / Twitter.

As it turns out, unfortunate Salesforce customers active during the incident's window may have lost some data permanently. According to a status update dated May 12, 2016 20:00 UTC, data “written to the NA14 instance between 9:53 UTC and 13:29 UTC on May 10, 2016 can not be restored.” That's nearly four hours of customer data, gone forever.

Interestingly, CEO Marc Benioff has brushed off the possibility of widespread customer data loss. In a private Twitter message to The Register, Benioff stated that “the amount of data lost was minimized because this was a North American instance(na14) and the loss occurred in the middle of the night.” Which presumably means that any hard-working sales professionals burning the midnight oil during that time frame are simply—for lack of a better word—S.O.L.

The Rising Cost of Glitches and Bugs

D.A. Davidson's estimates aside, only Salesforce will know the true cost of the recent outage. However, we've extrapolated the cost of downtime at the world's largest etailer before: Amazon reported revenues of $107 billion in 2015, which comes out to $203,577 every minute in today's numbers—or a $2,646,501 price tag for the 13-minute episode of downtime it experienced back in March 2016. Salesforce may have brushed its shoulders off for now, but it's likely to take a hit in subsequent quarters. Public service outages—like data breaches—are brand damaging events, and analysts will no doubt be anxiously anticipating the company's Q2/Q3 results to gauge the impact of the recent meltdown.
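The back-of-the-envelope math above is easy to reproduce. The sketch below simply divides reported annual revenue by the minutes in a year and multiplies by the outage length; the figures are those cited in this article, and the slight difference from the quoted $2,646,501 comes from rounding the per-minute figure before multiplying.

```python
# Back-of-the-envelope downtime cost, using the figures cited above.
annual_revenue = 107_000_000_000   # Amazon's reported 2015 revenue, USD
minutes_per_year = 365 * 24 * 60   # 525,600 minutes in a non-leap year

revenue_per_minute = annual_revenue / minutes_per_year
outage_minutes = 13                # length of the March 2016 Amazon outage
outage_cost = revenue_per_minute * outage_minutes

print(f"Revenue per minute: ${revenue_per_minute:,.0f}")   # ~$203,577
print(f"Cost of a 13-minute outage: ${outage_cost:,.0f}")  # ~$2.65 million
```

This kind of estimate assumes revenue accrues evenly around the clock, which is a simplification—but it's a useful first-order gauge of what a minute of downtime is worth.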

So while the estimated $20 million cost of the glitch is hardly a dent in the firm's finances, no digital enterprise is too big to fail in this day and age; you may recall Knight Capital's spectacular software failure back in 2012 that cost it $440 million, driving the company to the brink of collapse. More than ever, the frequency of outages and data breaches is a measure of an enterprise's cyber resilience, or its ability to manage digital risk effectively and protect its most valuable IT assets. This entails implementing protective measures both for combating cyber attacks and for identifying flaws and bugs before they impact the business or impede innovation and competitiveness.
