
Is There an Ideal Enterprise Cloud Configuration?


For most organisations, it is essential to have a relationship with more than one cloud provider to keep your options open. The reasons are numerous. For enterprises, it may be a company directive to minimise supplier risk. If data is a concern, then where it is stored and processed matters. Over time, you are bound to have disagreements with vendors or find better prices elsewhere. Some developers will also want to exploit new technologies, such as deep learning on specialist hardware that is only available on specific platforms. The needs of both developer and organisation drive a multi-cloud approach.

Meet the Customers

In general, each application will demand its own architecture and cloud configuration. It will have its individual needs and unique way of returning value. There will be a drive to use technology that is the most appropriate fit and closely matches the skills that are available.

Therefore, an enterprise will need to create a flexible platform that can deliver the demands of each application or service and go on to embrace future requirements, even if they are not yet known. The substantial technology churn in the industry, illustrated by the proliferation of domain-specific and high-level languages, means there will be less sharing between projects unless it is specifically designed in. Moreover, it is likely that clouds will diversify the styles of computing on offer, certainly in the medium term.

However, we still have to consider that many applications moving to the cloud will be straight redeployments of existing systems with no cloud-native features. In short, demand is likely to be diverse and more complex, with less sharing between projects unless you intervene.

There will be new restrictions too. You will be expected to track where your data is and what it contains. You will need to understand and review the cost of operations, and to manage internal application access control properly, which is likely to demand a new level of involvement.

It is clear that a level of meta-management will be needed to get the most out of our cloud providers and to track the most effective deployments. It will also be necessary to understand how future security breaches can be detected and stopped. Multi-cloud management is an area crying out for tools once the market and problem domain settle down.

In conclusion, we are faced with a more complex set of architectures that could, over time, become less well understood by technology management. However, I believe that the benefits of cloud will overcome these concerns: a faster time to market and less capital tied up are desirable prospects for customers and the business alike. It is down to us to put measures in place to keep our eyes open as we walk into the cloud.

Home Base

The first technique that should be embraced is to remove lock-in wherever possible. It’s not rocket science: use abstraction interfaces to buffer proprietary APIs and use generic open-source technology where possible. Use deployment tools like Ansible, Puppet, Chef, Salt, and Terraform to keep deployment fast and scripted. Use Docker and its associated container technologies to keep deployments nimble. Ultimately, this will lower costs, increase agility and keep options open for as long as possible.
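As a minimal sketch of what such an abstraction interface might look like, the hypothetical `ObjectStore` class below hides two vendor SDKs behind one neutral surface. The class and method names are illustrative assumptions, not a standard; the underlying boto3 and google-cloud-storage calls are real SDK calls.

```python
# Sketch: buffering proprietary storage APIs behind one neutral
# interface. ObjectStore and its subclasses are hypothetical names.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Vendor-neutral facade: applications code against this only."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        import boto3                      # AWS SDK
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

class GCSStore(ObjectStore):
    def __init__(self, bucket: str):
        from google.cloud import storage  # Google Cloud SDK
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

    def get(self, key: str) -> bytes:
        return self._bucket.blob(key).download_as_bytes()

# Swapping vendors is now a one-line change at configuration time:
store: ObjectStore = S3Store("my-enterprise-bucket")
store.put("invoices/2018/0042.json", b'{"total": 99.5}')
```

The design choice is simple: applications see only `ObjectStore`, so a vendor change touches configuration, not application code.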

In this post, I suggest the use of a central hub, but there are potentially other models that may fit different shapes of organisations better. However, this is one of the most generic models and a good example of what is possible.

The central hub should be a neutral place: a home base. If you have sufficient scale, you should run your own private cloud facility, where the technology changes under your control and not too quickly. It will hold the core of your data: the most mission-critical part. If your costs are good, the home base may take much of the core load; if they are bad, it may hold only the strategic parts. If you have an internet-facing part of your business, I suggest this should be remote from the home base and separated by several access control points.

Smaller organisations may be able to emulate the home base technique by adopting a lead cloud vendor to play the same role.

Messaging: Joining Up the Parts

A multi-cloud infrastructure should have universal connectivity from your home base (or lead cloud). As well as IP connectivity, it should use a general purpose, publish-subscribe message system to connect up the limbs of your application architectures, allowing asynchronous delivery of events. It should relay both data and control between applications and also meta information relating to how the application is running. It should be flexible enough to cope with large volumes of transfer and sufficiently decentralised to be able to scale out.

Each cloud vendor has a favourite messaging system, for example, Cloud Pub/Sub on Google Cloud. It is likely that most or all of the vendor’s services will be integrated with their messaging system, and these will need bridging into the standard you have selected for your organisation. Your goal should be a single set of queues that consolidates all the activity and gives the most flexibility to data connectivity (as capacity allows). This is a key piece of infrastructure that will pay dividends in the future.
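As a hedged illustration of such a bridge, the sketch below drains a Google Cloud Pub/Sub subscription and republishes each message onto a Kafka topic standing in for the organisation-wide bus. Kafka as the standard, and the project, subscription and topic names, are all assumptions for the example.

```python
# Sketch: bridging a vendor's native messaging (Google Cloud Pub/Sub)
# into an assumed organisation-standard bus (Kafka). Names are
# illustrative placeholders.
from google.cloud import pubsub_v1
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=["bus.home-base.internal:9092"])

def relay(message: pubsub_v1.subscriber.message.Message) -> None:
    # Forward the raw payload; acknowledge only after the bus accepts it.
    producer.send("enterprise.events", value=message.data)
    producer.flush()
    message.ack()

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "vendor-events-sub")
streaming_pull = subscriber.subscribe(subscription, callback=relay)

try:
    streaming_pull.result()   # block; callbacks run on a thread pool
except KeyboardInterrupt:
    streaming_pull.cancel()
```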

Messaging should embrace entitlements to ensure that data in motion respects security across application boundaries. Keep payloads to text or well-known MIME types to allow simple handling, visualisation and automation of messaging.
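One hedged way to realise both points is a small, self-describing envelope carried on every message; the field names below are an assumption for illustration, not a standard.

```python
# Sketch: a self-describing message envelope. Field names are
# illustrative; the aim is a text payload, an explicit MIME type
# and an entitlement label that brokers and bridges can enforce.
import json, uuid, datetime

def make_envelope(payload: str, mime: str, entitlement: str) -> bytes:
    envelope = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "content_type": mime,             # e.g. "application/json"
        "entitlement": entitlement,       # e.g. "finance:read"
        "payload": payload,               # text, never opaque binary
    }
    return json.dumps(envelope).encode("utf-8")

msg = make_envelope('{"order": 42}', "application/json", "sales:read")
```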

There is one alternative to owning a central message technology: pick one vendor to do it for you. Keep the principle of bridging and even ask vendors to deliver it for you; it will save time, and possibly money, but will potentially lock your future in to a single company and reintroduce the risk.

As the cloud becomes more specialist over time, delivering unique hardware or serving specific jurisdictions, providers may diversify and grow in number: for example, specialist deep learning hardware, or unusual locations for specialist markets (for instance, privacy-centric Switzerland). By using a hub model connected with asynchronous messaging, it becomes easy to extend to other vendors and to replace or decommission them over time.

Universal Monitoring and Observation

Each app or service needs to report back its status and progress. The obvious method is to reuse the messaging system to relay this information back to a central point.

Typically, the sources of data are the many log files and individual messages generated by the core and dependent services that make up your applications. Fortunately, forwarding time-critical, non-transactional data is friendly to pub-sub message architectures. The instrumentation data can be connected to any event collection system you have and sent to a central log processing system, such as Humio, ELK or Splunk.

Whilst each of those systems also has its own log collection and forwarding (great for flat IP networking), hitching a ride on the messaging system routes a single path through firewalls and navigates a cloud of arbitrary complexity. In some solutions, one may simply bake a pub-sub conduit into log collection and get the best of all worlds.
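A minimal sketch of that conduit, assuming the Kafka bus from earlier: tail a log file and publish each new line onto a logging topic, from which a central collector can feed ELK, Humio or Splunk. The file path and topic name are placeholders.

```python
# Sketch: a log forwarder that hitches a ride on the message bus.
# The file path and topic name are illustrative placeholders.
import time
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=["bus.home-base.internal:9092"])

def tail_and_publish(path: str, topic: str) -> None:
    """Follow a log file and publish each new line to the bus."""
    with open(path, "r") as log:
        log.seek(0, 2)                    # start at the end of the file
        while True:
            line = log.readline()
            if not line:
                time.sleep(0.5)           # wait for more output
                continue
            producer.send(topic, value=line.encode("utf-8"))

tail_and_publish("/var/log/myapp/app.log", "enterprise.logs")
```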

Data Tracking

With the rise of GDPR in Europe, it’s important to know where your data is. In a multi-cloud implementation, your data could be in many places and many applications. A central point in your organisation must be able to coordinate the locations, flows and resting points of data, not just each application team. These could be Data Officers, with DevOps engineers helping with implementation, although every organisation is likely to have its own view.

When a request arrives, the first obligation is to find the requested data across applications and be able to report on it. Where is it? How many copies are there? Do you still need it all? A detailed understanding of how each cloud vendor treats data at rest and in motion is required.
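As a hedged sketch, the central registry could be as simple as structured records like the one below, one per dataset copy; the schema is an assumption, not a standard.

```python
# Sketch: a minimal data-location registry record. The schema is an
# assumption; the aim is to answer "where is it, how many copies,
# and do we still need it?" from one central point.
from dataclasses import dataclass

@dataclass
class DataRecord:
    dataset: str          # logical name, e.g. "customer-identity"
    application: str      # owning application or service
    provider: str         # cloud vendor holding this copy
    region: str           # jurisdiction, e.g. "eu-west-1"
    classification: str   # e.g. "personal", "financial", "public"
    copies: int           # number of replicas at this location
    retention_until: str  # ISO date after which it can be purged

registry = [
    DataRecord("customer-identity", "crm", "aws", "eu-west-1",
               "personal", 3, "2020-01-01"),
]

# GDPR-style query: every location holding personal data.
personal = [r for r in registry if r.classification == "personal"]
```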

Country locations add another layer of complexity. Individual jurisdictions will assert their right to have their subjects’ data held in a particular place when used for a particular purpose. Fortunately, not many demand this yet, and most of their concerns are around identity or financial data. However, one can see complexity increasing as data becomes more important to everyday life and as political alliances change over time.

Redeployments

A mono-cloud strategy is unlikely to survive over time: there will be too many pressures to try things out or to chase lower costs! Therefore, part of an enterprise cloud approach is how it will morph over time and how to keep a transition’s cost and disruption low.

While this is likely to be an architecture responsibility, it needs a team with responsibility for tracking the cost of deployments and representing it in the context of applications. It is likely to be part of the dataset maintained by DevOps engineering. When a significant discount to the current deployment becomes available, applications should consider migrating to the new scenario and claiming the decrease in cost. A radical suggestion is to make this self-governing by giving bounties to the DevOps engineers for all the money they can save!
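As a hedged sketch of the arithmetic involved, the function below compares a current deployment with a candidate, including the one-off egress charge for moving the data (the “data tax” discussed in the conclusion). All figures are invented for illustration.

```python
# Sketch: the payback arithmetic for a candidate redeployment.
# All figures are invented for illustration.

def payback_months(current_monthly: float,
                   candidate_monthly: float,
                   migration_one_off: float) -> float:
    """Months until the migration cost is recovered by monthly savings."""
    saving = current_monthly - candidate_monthly
    if saving <= 0:
        return float("inf")   # never pays back
    return migration_one_off / saving

# 10 TB to move at an assumed $0.09/GB egress "data tax",
# plus an assumed $2,000 of engineering effort.
egress = 10_000 * 0.09
effort = 2_000.0
months = payback_months(current_monthly=4_500.0,
                        candidate_monthly=3_600.0,
                        migration_one_off=egress + effort)
print(f"Pays back in {months:.1f} months")   # ~3.2 months
```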

The Only Constant is Change

By establishing a hub cloud model, allowing for specific vendors to contribute their specialist services, one ends up with a system that embraces change and is not hampered by it. Virtualisation, containers, and deployment script techniques help make deployment of applications more fluid. This needs to be backed by security control, data transport, messaging and monitoring infrastructure, which will allow applications to continue operation without significant interruption.

Finally, a good dose of cost modelling is needed to ensure that data taxes, the charges for moving data out, do not crush redeployment. This should be part of each application’s non-functional review when deployment is considered.

Embracing a flexible ‘meta-infrastructure’ that can connect clouds together is going to be vital to the long-term exploitation of future capabilities. Agility needs backing with infrastructure.

About the Author

Architect, PM and Strategist. Follow me on Twitter @nigelstuckey

System Garden

Agile Infrastructure for Enterprise DevOps. Design from diagrams, document and deploy to your cloud.
systemgarden.com, Twitter @systemgarden 

