DAVID L. PETERSON

Taking Full Responsibility

In any online system, there is always a chance of a break in the service. A service interruption can stem from a hardware malfunction, a software bug, or perhaps malware or another cybersecurity event. Regardless of the source, a service interruption negatively affects the end users of that system.

When the online system is online banking, the duration of the event is critical. If an online banking system is down for 30 minutes at 2:00 a.m. on Saturday, there may be little-to-no inconvenience for anyone. Thirty minutes at noon on Wednesday, though, would be a different story. Thousands would be affected. But if the outage were limited to only 30 minutes, the inconvenience might generate some irritation without any real lasting effect. Outages that linger can convert irritation into negative action, where individuals are so upset with their bank that they elect to "vote with their feet."

Ultimately, the negative aftershocks of an online banking outage come down to how the financial institution in question handles the PR. When a bank doesn't disclose that a security breach occurred until many months later, that does not breed trust. Even in a non-cybersecurity event, the speed and candor with which the entity claims responsibility is a mitigating factor. I think most people understand it is impossible to guarantee zero defects. The correct formula is to be straight up about what happened without delay, admit fault, express empathy, and promise to make it right.

I was made aware of a great example of a financial institution doing just that. My sister Linda has an account with BB&T, and, after an outage, she received this letter:

Dear Linda,

Please accept my sincere apology for the service interruption that began on Thursday, Feb. 22. While our systems have substantially recovered, we know this was a major inconvenience. We are sorry for any frustration or embarrassment this may have caused, and I’ve also shared those sentiments in this personal video message.

The outage was due to an equipment malfunction in one of our data centers and affected many of our systems. Fortunately, we have no reason to believe it was related to cybersecurity. But we’re still working with some of our clients to guide them through a few lingering issues.

We extended our hours on Friday and Saturday to ensure clients like you had an opportunity to visit one of our financial centers and speak with a BB&T associate about any questions, challenges or concerns you had. I hope you were able to take advantage of that opportunity if you needed immediate assistance.

Rest assured if you incurred any bank fees directly related to the outage, we will certainly waive or reimburse you for them. And if you still have any issues you believe are associated with this event, I encourage you to reach out to a BB&T associate at your local financial center or call our Client Care Center at 800-226-5228. We’re here to help.

Please know we are carefully reviewing the cause of this malfunction and taking steps to ensure this doesn’t occur again in the future. We are thankful for your patience as we continue to work through this difficult time and grateful for your business.

Sincerely,

Kelly S. King

CEO

Note this letter is from the CEO of BB&T, not some downstream executive or PR person. I find this letter meets all of the requirements for how these events should be addressed. It should also be noted, though, that Mr. King should not receive universal acclaim merely for taking responsibility, expressing empathy, and promising to make things right. Ultimately, he is responsible for the whole organization. And if it's true that a hardware failure caused this, it raises the question of why a single point of failure was allowed in a system operated by a top-100 U.S. institution. I would be glad to provide a copy of my book Grounded to whoever at BB&T is in charge of ensuring that mission-critical systems have backups for their backup systems.

Are you prepared to address a system failure with quick, accurate communication that meets the requirements for an effective apology? If not, you need to address this now. Then work diligently to ensure every mission-critical system has redundant failover, so you never have to make that apology.
