Cloud offerings have matured, and today, with Virtustream, there is a reliable offering in the market that brings the benefits of cloud (cost savings through consumption-based billing, agility, and fast time to value) to business-critical "client-server" applications like SAP Business Suite and SAP HANA.
For customers that have evolved their IT department from an infrastructure engineering and operations group into a broker of IT services, managing providers in alignment with business requirements, Virtustream provides fully managed Infrastructure as a Service in the cloud (Managed IaaS) for enterprise mission-critical applications, combining the control, security, and accountability of IT outsourcing with the cost savings and agility of cloud.
As SAP customers plan to evolve their system landscapes to adopt SAP HANA, and face the challenges of defining the best architecture to support it, the best technical migration plan, and the best operational practices to sustain their business Service Level Agreements (SLAs), Virtustream offers the broadest experience in the market migrating and managing SAP HANA in the cloud, and provides a one-stop shop for all technical needs regarding SAP HANA: architecture advisory, technical migration, upgrade services, and application managed services.
In this blog post I'll share my journey over the six months since I joined Virtustream: the challenges customers experienced in adopting cloud, and the value they saw in Virtustream enabling them to bring the benefits of cloud (unprecedented cost savings and agility while containing risk) to the area that today consumes the majority of their IT budgets: critical IT systems of record.
Lots of friends keep asking me, "What happened? It's been a long time since you blogged."
Simple answer: I have been working my head off! The demand for Virtustream services in Europe has been massive, and I haven't had headspace for anything other than responding to that demand and helping Virtustream grow its presence in the European market.
The ride at Virtustream has been amazing! The traction its value proposition has in the market is unlike anything I've experienced in my almost 20 years in the IT industry.
As my colleague Mohammad Zaman says, "it's jaw-dropping".
One thing I have in excess right now is topics to write about; what I have in short supply is the "mind time" to write about them all on my blog.
So, as I fly to attend SAP's premier conference in the US, SAPPHIRE Orlando 2016, I decided to get my blogging back into shape. I wanted to share with you some of the perspectives I've been fortunate to learn from customers across Western Europe while discussing with them the Virtustream value proposition in the context of SAP HANA adoption.
And the more customers I meet, the more I realize how relevant the Virtustream value proposition is for simplifying IT infrastructure consumption in support of SAP HANA adoption scenarios.
Let me then step back for a minute and build some context for us "techies" on IT strategy concepts that I believe are fundamental to fully appreciating the exceptional moment we are living through in the life of SAP customers.
In this
blog I will focus on covering “why Virtustream”, and will leave “the what” and “the
how” to later blog posts.
Most businesses want the services IT infrastructure provides; they don't want to own the assets
I always insist on this as one fundamentally important variable forgotten by many technologists at "IT consumer organizations" and "IT supplier organizations" alike: what role do businesses expect of their IT?
Many IT decisions are influenced or led by "technologist" members of the "IT consumer organization", whether individual contributors or middle management in architecture, security & compliance, engineering, or operations departments.
And it is a good thing that this happens, as most business users are not experts in technology and can easily buy into something that appears to serve their business goals at first sight, but brings hidden challenges that are only truly uncovered later, when those solutions face the need to sustain SLAs and to change and evolve.
So IT departments have a chance today, more than ever, to be a key business partner within their organizations: translating business requirements into IT specifications, and evaluating whether the solutions proposed by the multiple "IT supplier organizations" will respond to the expected needs of the organization across the solution's lifecycle.
On the negative side, I also see a lot of people in these organizations who completely lose sight of their organization's business goals, and of the fact that IT is there to serve the business and not to serve itself, becoming a resource-eating monster with a purpose of its own that gains more importance than the basic principle of serving the needs of the business it supports.
One example is when organizations over-engineer, over-specify, and over-complicate requirements to a point where no one else is capable of providing them, leading to a situation where the IT organization becomes a full-breadth IT engineering group (often without bringing additional benefit to the business by doing so). The problem is even bigger when, as often happens, these same organizations then under-deliver against those same specifications.
One concrete example: an IT organization issued an RFP demanding near-zero RTO and RPO, yet when I met with their CIO, I found out that in the last DR test there was missing data, a number of failures in executing the DR test plan, and the systems took more than 6 hours to come back online. Once I explained to this leader what Virtustream as a cloud provider could offer as standard out of its catalog, which was far better than his reality, he overruled his IT organization and adopted our proposed standard.
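To make the gap between specification and reality concrete, here is a minimal sketch (my own illustration, not part of that RFP; all targets, dates, and times are invented) of how one might compare a DR test's measured results against stated RTO/RPO targets:

```python
# Illustrative sketch only: comparing measured DR-test results against the
# RTO/RPO targets stated in an RFP. All targets, dates and times are invented.
from datetime import datetime, timedelta

# Hypothetical "near zero" targets from the RFP
rto_target = timedelta(minutes=15)   # maximum tolerated downtime
rpo_target = timedelta(minutes=5)    # maximum tolerated data-loss window

# Hypothetical measurements from the last DR test
outage_start     = datetime(2016, 3, 12, 8, 0)
service_restored = datetime(2016, 3, 12, 14, 20)  # back online after ~6h20m
last_good_data   = datetime(2016, 3, 12, 7, 10)   # newest recoverable data

measured_rto = service_restored - outage_start    # actual downtime
measured_rpo = outage_start - last_good_data      # actual data lost

for name, measured, target in [("RTO", measured_rto, rto_target),
                               ("RPO", measured_rpo, rpo_target)]:
    verdict = "met" if measured <= target else "MISSED"
    print(f"{name}: measured {measured} vs target {target} -> {verdict}")
```

The point of the exercise is simply that targets written into an RFP only mean something if they are measured the same way in a real DR test.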
One of the biggest wastes in IT (as all of you experts in Lean IT know) is over-delivery: providing capabilities or services in excess of what is really needed. Fighting these wastes is a never-ending story for internal IT departments, as the simple fact of building your own instead of consuming what you need drives an additional set of wastes like inventory, waiting, and non-utilized talent. Again, this is even more serious when reality then does not keep up with the defined specifications.
In the past this happened a lot, as there weren't providers in the market for many of the solutions those organizations needed, which led to doing things in-house and, when talking about business-critical systems, doing them safely. This is one of the reasons why many organizations, challenged with the massive ERP adoption projects of the '90s, developed massive internal IT capabilities that today represent a significant cost burden and a source of lost agility.
But is it still the case that there is no other way to sustain those business-critical systems than to engineer every bit of IT infrastructure yourself and keep every single technology component in the stack 100% under your control?
I insist on asking this based on a simple aspect of life: things evolve. They are not static, and new possibilities have come to market in recent years that have changed this long-standing paradigm.
So, if you look at the asks organizations place on their CIOs, and you look at the offerings in the market today, one thing shouts out: things that organizations once needed to build themselves because there were no reliable providers now have a reliable and integrated offering in the market.
A basic
principle in business management is: build and engineer things yourself if
there is no reliable provider for what you need.
Bringing this to IT, what I hear from many business decision makers is that, from their perspective, what is critical is the service that IT systems provide to the business.
Some go into the detail of mentioning that "application development" has become a core skill in the organization, as that is what enables them to transform the data they have stored into relevant business insights.
But when asked whether they consider designing, building, and operating the infrastructures where that data resides and is processed to be a critical skill, most already admit clearly that it is not. It is a "necessary evil".
And in simple terms, this is why cloud computing has shaken the market and its impact continues to grow.
IT
infrastructure departments are evolving from IT Systems Engineering to IT
Service Brokerage
We owe this understanding, that designing, engineering, and operating IT systems infrastructure is not core, to the increased maturity of cloud offerings in the market, as many providers have been able to make infrastructure available as a service, with a cost and agility unmatched by internal IT departments.
This advent of cloud computing has created many clashes within internal IT departments.
For example, some business leaders and "early cloud enthusiasts" got so excited by the golden shine of cloud that they started to procure IT services in the cloud directly, without any involvement from their "internal IT" departments.
This was due to internal IT's resistance to transforming into a service broker and its tendency to block all cloud initiatives with "over-engineered requirements", not considering that the business needs a "good enough" solution that balances cost and time to value with acceptable risk. This led some to start calling internal IT "legacy IT".
On the other side, the fact that many of those cloud offerings did not provide the security, compliance, and reliability that critical business applications need led to massive disruptions in business operations, shadowing the early excitement with some antagonism. Many also found that some of those cloud offerings were not sustainable at scale and represented significant management challenges, as their services were provided fully unattended, implying that internal IT had to build additional capacity to manage them.
Within all the instability we saw with cloud adoption, which I see as typical of every major structural change, a lot of new "buzz" topics and words were born, but one thing is a fact: internal IT had a unique chance here to leap into the future by affirming its role as the IT expert and partner of the business, mediating between real business requirements and the offerings of the emerging cloud providers.
Some organizations started that transformation, but as in all structural changes, as important as the change in organizational structure or mission may be, the greater challenge is evolving the mindset of the professionals tasked with executing the new strategy. So I would say that many organizations are still halfway there, as many technical stakeholders still maintain the "over-engineering mentality" they had when they built everything themselves, still driving over-provisioning of cloud solutions for their businesses.
Beyond these organizational challenges, there were also concrete technical challenges faced by the early adopters of cloud computing, challenges that became settled in the minds of many professionals. These were indeed hard lessons from those first experiences, but again, the world evolves and changes, and new cloud providers are coming to market making a business out of those lessons.
Some of the challenges felt then, which have now led to the appearance of companies like Virtustream to solve them, are:
- Data integration challenges: now that my data is in the cloud, and as business applications need to interact and exchange business data, how can I ensure the security of that data across borders?
- Process integration challenges: as most critical business processes span multiple applications, how can I avoid users needing to learn multiple interfaces, with the impact that has on error rates and loss of efficiency?
- Migration assurance challenges: one of the novelties of cloud was the advent of "internet-native apps", which are smart enough to manage the infrastructure underneath them. But most organizations have made massive investments in applications that are not "that smart", and when migrating them to the cloud, many simply haven't been able to perform to the necessary business standards (when talking about business-critical processes, application availability is as important as application performance). How do I migrate to the cloud?
- Vendor lock-in: now that my data is in the cloud, what chance do I have of changing supplier? Many organizations found that some of their early choices led to the need for heavy-lifting re-engineering and data migration both to get into a certain cloud and to "get out" of it.
- Data availability challenges: what if my provider has an interruption in service, or goes out of business completely? Do I have a backup plan? Here, aspects like vendor financial robustness have become front and center in cloud adoption decisions.
- Data security challenges: will my data be safe in the cloud? Does the cloud provider have strong security governance, including a DR and business continuity plan? Here, aspects like the cloud node architecture and operations standards have become key concerns, especially when discussing hosting business-critical systems in the cloud.
For many of these aspects there are technical responses that make the solution either simpler or harder to achieve.
These aspects have made it evident that organizations need IT to take a different role from the one it has played over the last 20+ years.
As the most advanced stakeholders in "internal IT" realized all the changes happening in the market, and understood that being called "legacy IT" was not a good thing, many organizations evolved into true business partners and the information technology experts inside the organization, on whom business users could rely to help them make smart cloud choices without falling into the many traps of the past (the ones I've just mentioned being only a very small list of examples).
Cloud has created a clash, and an amazing new opportunity, for internal IT
But the capabilities of the new cloud providers, exposing interfaces to which applications could connect to automatically manage resources and infrastructure, were there to stay, and many "consumer-facing" applications like internet portals were a perfect fit to leverage these new capabilities.
As developers increased in importance within organizations, being the ones enabling the collection of data and its analysis, processing, and transformation into new ways of operating the business, this new cloud reality could no longer be ignored and there was no way back on its adoption, because it simply made sense.
The clash then came in how to balance the existing "legacy IT", which supported most of the critical business processes and probably was not a good fit for this "DevOps" cloud reality, against the management of the new "web-scale" reality, with new applications based on Big Data and the Internet of Things.
Here emerged the concept of "bi-modal IT", which meant your IT department needed to work across two realities: the business-critical, client-server reality on which most critical business processes were based (where your record of business was registered), and the new DevOps reality of agile app development leveraging the automation and simplification offered by the first cloud providers.
One parallel idea that grew with this concept of bi-modal IT was that the critical systems, called systems of record in the SAP world, were not suitable for cloud and should remain on-premise, operating in the model they always had: architected, engineered, and operated with internal capabilities.
This idea settled in because CIOs' early cloud experiences, based on the "most popular public cloud vendors", whetted their appetite for the cost efficiencies and flexibility but did not respond to the requirements of these systems of record, which were not designed for the new cloud reality and have a long life in organizations, as core business processes do not change every day. Some very hard-learned experiences made CIOs more careful and led them to put in place strong governance around cloud adoption.
From those experiences, when talking about cloud for business-critical applications, the following aspects emerged as critical:
- Where will my data reside, and who may have access to my data? Here the "Snowden effect" has raised important levels of concern among European, and in general all non-US, companies, as most early public cloud providers were US-based companies operating out of the US, or with US-based operations.
- Who is the provider, and how financially robust are they, so that I do not risk losing access to my systems? When talking about my critical applications, I cannot put any data with a provider that doesn't demonstrate the financial robustness to ensure it will not shut down its operations due to bankruptcy from one day to the next.
- How do I move my applications from current state to future state? With critical applications, performance is as important as availability, and organizations cannot afford to throw away all they have built over the last 20 years without blinking an eye.
- How do I ensure the interoperability of applications in the cloud with applications on-premise, with security and performance? Understanding that no business application lives alone, but rather as a member of a networked and interdependent world of applications, poses challenges in business process performance and in data storage, processing, transmission, and integration security that need to be tackled.
- What guarantees will I have on application performance and availability? When talking about critical applications, it is not enough to say "trust me, I know what I'm doing". Aside from providing contractual guarantees (and demonstrating the financial robustness to honor those guarantees), you need to demonstrate how you do things and enable full transparency on how "things are done".
- Who will operate my applications in the cloud? It being understood that application development and data modeling are increasingly critical for organizations, managing the data platform and data availability in the cloud world is a challenge, as many of these business-critical apps are highly dependent on how the infrastructure is architected. Managing systems with no visibility into most of the layers beneath the data can prove challenging, or even frustrating. So organizations increasingly want a "managed cloud platform", requiring the cloud provider to understand the applications in order to manage them and provide SLAs at the application level.
All of this challenged traditional cloud providers.
The shift in cloud adoption happening right now: from on-demand infrastructure to managed cloud services
As mentioned above, the fact is that the early cloud providers created an experience for IT buyers and key business stakeholders that is here to stay despite some more challenging episodes, and that is now setting expectations for the rest of IT.
So the new question most business leaders are putting to their IT organizations is: why can't I have the benefits of consumption-based billing, agility, and fast time to market, together with the control, security, compliance, and robustness that IT outsourcing once promised, also for my systems of record, where in fact I'm spending the biggest slice of my budget today?
As the UK-based IT analyst firm 451 Research has found (image above credited to 451 Research), there is indeed a shift happening right now in customer focus for cloud adoption: customers are evolving from the experiences gathered in the first phase of cloud adoption, led by 1st-generation global cloud providers, to a second phase in which they are shifting their focus from simply having on-demand infrastructure towards more sophisticated offerings that reach further into the core of the business.
In this new phase of cloud adoption, customers are asking for managed cloud environments, enabling them to truly evolve their internal IT departments into specialized IT service brokers, and are pushing the market to provide offerings that respond to the needs of business-critical applications in a way that the first-phase cloud offerings have not.
This second phase of cloud adoption marks both the emergence of a new type of cloud provider and the increased maturity of consumer organizations, which have navigated the organizational and technical challenges discussed above to take the principle of consumption-based IT to the next level.
The birth of a new leader: Virtustream, the enterprise-class cloud!
Here we need to give enormous credit to Virtustream's co-founders for having started this business when no one else was thinking of it: Rodney Rodgers, Kevin Reid, Matt Theurer, Sean Jennings, and Vince Lubsey.
There is no better way to understand why Virtustream is such an exciting company than to listen both to the co-founders sharing their reasoning for creating it in 2009, and to customers that have been early adopters of this new phase of cloud computing.
All of Virtustream's co-founders, coming from extensive consulting experience with Global 1000 companies, saw an opportunity to bring the benefits of cloud to core enterprise business applications that were not built cloud-native, yet consume the largest part of current IT budgets and are in desperate need of the cost savings and the greater flexibility of consumption and provisioning that the first phase of cloud brought to businesses.
Since some of Virtustream's co-founders had a background in SAP consulting, and SAP was the most critical business application in use at the large global organizations they consulted for, solving the engineering problem of running these I/O-intensive, mission-critical applications in a multi-tenant cloud environment, while ensuring performance, security, and availability, would open the door to running almost any critical application in the cloud.
So SAP was the first application Virtustream decided to tackle and run in a "next-generation cloud architecture and service model", designed and built for this new phase of cloud adoption.
And this is
the equation Virtustream’s co-founders were able to crack:
- Running large scale-up enterprise-critical applications on a multi-tenant cloud;
- While ensuring application performance;
- Complying with the highest security and compliance standards in the cloud industry;
- With application-level SLAs for performance and availability;
- With consumption-based billing that drives significant cost savings for enterprises, both on asset consumption and on managed services (a simple illustration follows this list);
- Providing a one-stop shop to plan, migrate, and operate critical applications in the cloud.
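To make the billing point tangible, here is a minimal, purely hypothetical sketch (all rates and usage numbers are invented; this is not Virtustream's actual pricing model) comparing paying for peak-provisioned capacity all month with paying only for the resource-hours actually consumed:

```python
# Hypothetical illustration of consumption-based vs. fixed-capacity billing.
# All figures are invented for the example; they are not Virtustream prices.

HOURS_PER_MONTH = 730

# Fixed model: pay for peak capacity all month, whether it is used or not.
peak_capacity_units = 100          # resource units sized for peak load
fixed_rate_per_unit_hour = 0.10    # currency per unit-hour

fixed_cost = peak_capacity_units * fixed_rate_per_unit_hour * HOURS_PER_MONTH

# Consumption model: pay only for what is actually consumed each hour.
# Assume the workload averages 40% of peak outside a 50-hour month-end peak.
consumption_rate_per_unit_hour = 0.12   # often a slightly higher unit rate
avg_units_off_peak = 40
peak_hours = 50

consumed_unit_hours = (avg_units_off_peak * (HOURS_PER_MONTH - peak_hours)
                       + peak_capacity_units * peak_hours)
consumption_cost = consumed_unit_hours * consumption_rate_per_unit_hour

print(f"Fixed-capacity cost:    {fixed_cost:,.2f}")
print(f"Consumption-based cost: {consumption_cost:,.2f}")
print(f"Savings:                {fixed_cost - consumption_cost:,.2f}")
```

The size of the saving depends entirely on how "peaky" the workload is; the sketch simply shows the arithmetic behind the claim.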
If you isolate each of these aspects, you could say: no big deal, others have done it before.
But try to put all these aspects in a single sentence, and you'll realize that today no other provider can live up to them all. At least, I do not know of any other global provider that does.
Conclusion
Virtustream was created in the context of the shift of cloud adoption towards the second phase (managed cloud environments), and is targeted specifically at the needs and challenges of this second phase, where IT consumer organizations have already made up their minds to become IT service brokers and, through the lessons gathered from the first phase of cloud adoption, are now looking to expand the cloud consumption model further into the core, and into business-critical client-server applications.
What makes Virtustream unique is the combination of its cloud node architecture, its proprietary software (for cloud management and security & compliance), its billing model, and the range of specialized services it offers (from advisory and migration services to full managed services up to the SAP Basis level).
Seven years into its life and with a very successful track record of migrating SAP applications to HANA in the cloud, Virtustream sees itself as uniquely positioned to simplify the adoption of SAP HANA.
This is already getting too long, and my plane is about to land, so let me stop at this point and leave you with this teaser. I look forward to writing further about "the what" that makes Virtustream special in a future blog post.
As a final word: those of you who have been following my blogs for some years have seen me writing in the past about running SAP HANA on internal IT. You can see that my reasoning has evolved, and it has evolved through listening to line-of-business leaders at some of the largest organizations in Europe and learning how they see internal IT coping with the new challenges their businesses face. I'm already starting to gather some ideas around that. Stay tuned!
Disclaimer: my blog is a personal expression of my learnings and experiences, aimed at helping others build their own ideas about the latest trends and realities in the SAP infrastructure market. Its contents have not been reviewed or approved by my employer, and so are neither endorsed by it nor reflect its views at any given moment.