Why Server Islanding is Needed
In the age of cloud computing and the Internet of Things, application developers are increasingly pressured to host their applications on "anywhere available" cloud platforms. At the same time, IT staff are pushed to ensure the reliability of remote connections, the security of connected systems, and enough application redundancy that these systems are "as available" as local server resources. With limited budgets and limited personnel, something has to give.
Lots of fancy solutions exist, but they really all boil down to one question: is the application hosted in the cloud (i.e., remote to operators) or on premises (i.e., local to operators)? Both approaches have their advantages and disadvantages, but unfortunately it's usually one or the other.
Cloud systems are rarely employed as the "only" system in critical infrastructure environments, due to the obvious pitfall of losing all local control if the remote connection fails. Similarly, it's rare to see a facility run an "on-prem only" server environment, especially in a remote area, since far fewer IT resources are typically available on site.
Yet many facilities need the ability to "island" themselves locally when problems arise on the greater network. If a critical infrastructure server system has an always-on connection to the internet, then a security breach isn't a question of "if" but of "when".