31 Dec 17

    Multi-cloud fallacy

    unlokq

    Keeping your cloud options open appears to be an admirable thing to do, since we have been burned by vendor lock-in many times before. Moving to public cloud appears, at first, to be the ideal time to take stock and avoid the same pitfalls. But is it really the same problem?

    Apples vs. Oranges

    Historically in enterprise organisations, architectural decisions have been made on artificially elongated timescales. This has allowed vendors to sell licences for longer periods by offering heavy term discounts; in a self-fulfilling cycle, this has exacerbated the tendency to lengthen the period between architectural reviews. In the consumption model introduced by public cloud we are able to change architectures far more frequently, on a timeframe dictated by the movement of the business in response to customer choices. Beware the enterprise architects and their subordinates who are willing to rest on their laurels and defend the old-school position.

    We are dealing with apples and oranges with this so-called lock-in, since we have moved from a financial lock-in to a functional one.

    Unnecessary Complexity

    To take advantage of any multi-cloud system, a configuration (application or otherwise) must be able to run equally well on any provider's environment. For this to be true, the underlying environment must be an absolute commodity, or very close to one. If it is not, the effort (read: cost) of maintaining a configuration that supports non-identical environments may exceed what is saved by moving to a cloud provider at all. The only scenario where this may hold is a containerised application whose orchestration layer is not provided by the cloud vendor.
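    As a rough sketch of what such a commodity configuration looks like, consider a deployment manifest for a self-managed (non-vendor) Kubernetes cluster: it references nothing provider-specific, only a container image pulled from your own registry. The application name, registry and image tag below are hypothetical placeholders, not a recommendation:

    ```yaml
    # Hypothetical provider-neutral workload: no cloud-vendor services referenced,
    # so the same manifest can apply to any conformant Kubernetes cluster.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app            # hypothetical application name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: example-app
            # Image comes from a registry you control, not a vendor-managed service
            image: registry.example.com/example-app:1.0
            ports:
            - containerPort: 8080
    ```

    The moment this manifest starts referencing a vendor-managed database, queue or identity service, the portability argument above collapses and you are back to functional lock-in.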

    Spiting your face

    There is a certain futility in choosing multi-cloud, as it necessarily means removing your ability to use any chosen vendor's higher-level functions, limiting yourself to near-commoditised infrastructure services (possibly up to database). Higher-level functions are certainly not commodities, and shifting workloads built on them between providers would require significant effort. There is a reason cloud providers are spending far more effort on higher-level offerings: they ARE the functional lock-in mentioned earlier. If you don't believe me, recall the Oracle-on-Itanium spat between HP (when it was one company) and Oracle. Very few customers were willing to give up their Oracle database just to keep running Itanium servers, though many would abandon the server platform to keep the Oracle database; higher-level services ALWAYS win out.

    Choose wisely

    Do not be tempted to replicate your on-prem systems and architectures for cloud-based compute. This will not be a good use of your time and money. Follow these four guidelines:

    • Consider your data: how it is used, stored and transferred. With this knowledge, make an informed choice of the highest-level services that provide the functionality you need
    • Make decisions on a per-(business-)service basis in conjunction with the owner of the service (Line of Business); they, after all, have a vested interest in the choices made
    • Keep pace with the changes made by your chosen vendor and optimise accordingly
    • Importantly, keep abreast of competitive offers. You CAN still change your mind, but this becomes a cost exercise: development cost vs. savings