2014-11-11

First feelings out of SAP TechEd && dCode Berlin

This is the early morning of the first day of the conference, and I just got out of the keynote session.

If anyone had any doubts, it became crystal clear: the future of SAP Innovation will be driven by SAP HANA.

With SPS07, HANA has really become a mature platform in terms of datacenter integration options, and all the innovation driven by the possibilities it brings to business makes an overwhelmingly compelling case for companies to start adopting HANA within their datacenter portfolio.

So, I truly believe the question today is not "whether companies will adopt HANA" but rather "how and when" they will bring HANA into their datacenter portfolios.

A couple of things caught my attention, and let me share those with you.

First, the affirmation that HANA and Fiori will represent the full reinvention of SAP applications. The level of possibilities offered by HANA, together with the user experience improvement offered by Fiori, will completely reshape SAP applications as we know them today. So every day customers wait to start this journey represents more time without seizing the benefits this simplification could bring to their organizations.

The next one is that SAP is very serious about cloud. What I got is that SAP is playing on the full front: IaaS, PaaS and SaaS, with all three models working in fully public, fully private and hybrid cloud setups. I understood that in the IaaS and PaaS spaces SAP is opening up for the partner community to contribute. Considering the dramatic reductions in data footprints being observed (one example showed a 10 TB BW coming down to an 800 GB database on HANA), running some of these applications fully virtualized (using virtualization technologies like VMware) will bring the benefit of flexibility, enabling customers not only to operate on a hybrid cloud model, but most of all giving them the flexibility to change providers and move between multiple PaaS and IaaS providers. Working on a hybrid model with some components subscribed as IaaS, others as PaaS and others as SaaS will definitely be a possible model going forward, enabling each customer to choose the right option based on their strategic, financial, security/compliance and technical constraints. These are exciting times full of possibilities!

And the third topic I would like to highlight is how the HANA platform and the Fiori user experience are enabling the transformation both of business models and of user productivity. The examples of s-Fin and the "Big Washroom Data" really provoke users and business decision makers to start thinking from a "what if" perspective, and to start disrupting their own business so they can leap forward in this new hyper-connected world. Getting information to the fingertips of business users, enabling them to manipulate and analyze it on the fly, alongside the possibility to integrate "machine data" and act upon it in real time, opens possibilities up until now unavailable to SAP customers.

These are exciting times for all of us technologists working in the SAP world. More than ever, skill sets that span technologies, bridge business and IT, and connect multiple technology disciplines to get them all working together will see increased demand, as in this new world a silo view won't do it anymore.

Stay tuned for more news and perspectives coming from my learning at this SAP TechEd && d-Code 2014 in Berlin.

2014-10-31

ScaleIO - A New way to configure storage for SAP HANA?

Just a couple of days ago, EMC's ScaleIO became supported by SAP for use in productive environments, both for SAP HANA and traditional SAP landscapes.
SAP's support statement can be found in the following SAP Note: https://service.sap.com/sap/support/notes/800326

For many, this will have little meaning, but for some this will be a major breakthrough.

Let me state from the start that it's my true belief that for large enterprise customers, a SAN-based storage configuration is the best way to deploy SAP systems, and in particular SAP HANA. My posts on this blog explain why I believe that; it's a factor of performance, TCO, flexibility and operations aspects.

But the fact is that mainly in the service provider business, some organizations are looking to "white label" servers with direct attach storage, forming large pools of resources, as a cheaper infrastructure option in this very competitive "public cloud" world.

And in that case, you will want a few things:
  • Performance, of course;
  • But also redundancy;
  • Scalability;
  • and ease of management.

In the SAP world, so far, the only way to get that redundancy and scalability was to use IBM's GPFS, a clustered file system providing many redundancy and scalability characteristics.

There are lots of merits and value in GPFS, but I think it is the wrong tool for environments like databases (for example, see my blog post on block vs file here). Also, the feedback I'm getting from customers using GPFS is that it's a nightmare to manage, that scaling implies unacceptable downtimes, and that its operations costs are not the most affordable in the market.

Well, ScaleIO is a technology that allows you to aggregate the direct attach storage of many servers into a single "virtual SAN-like" block storage pool.

And as I've written in an earlier blog post, for database workloads I believe that "block" access to disk is the right option. After all, databases read and write blocks, and the same happens with SAP HANA.

So, what ScaleIO allows is to:
  • get redundancy into direct attach storage by ensuring a copy of each server's data on another server;
  • get awesome performance by distributing the data of a filesystem attributed to a server across all servers in the pool (all application servers are at the same time storage servers for their peers), which makes this solution perform better the more nodes you have in the cluster (of course, LAN network planning plays an important role here);
  • scale naturally, as you can add servers to the pool and volumes are re-balanced automatically behind the scenes; the same automatic re-balancing happens when you remove a server;
  • and best of all, many of these operations are done online.
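As a back-of-the-envelope illustration of the points above (my own simplified model, assuming uniform nodes and a two-copy mirroring scheme; actual ScaleIO behavior depends on the real data layout and network):

```python
# Simplified model of a ScaleIO-style pool: volumes are striped across
# all nodes, and every data chunk is mirrored on a second node.
# All numbers are illustrative, not ScaleIO measurements.

def pool_characteristics(nodes, disk_tb_per_node, node_mb_per_s):
    raw_tb = nodes * disk_tb_per_node
    usable_tb = raw_tb / 2                      # two copies of every chunk
    read_mb_per_s = nodes * node_mb_per_s       # reads are served by one copy
    write_mb_per_s = nodes * node_mb_per_s / 2  # each write lands on two nodes
    return usable_tb, read_mb_per_s, write_mb_per_s

for n in (4, 8, 16):
    usable, rd, wr = pool_characteristics(n, 10, 1000)
    print(f"{n:2d} nodes: {usable:5.1f} TB usable, "
          f"{rd} MB/s read, {wr:.0f} MB/s write")
```

The model shows why aggregate performance grows with node count while usable capacity stays at half of raw: that is the trade-off of server-based mirroring.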

More information will come out explaining what ScaleIO is, how it works, and how to configure it for SAP HANA environments. EMC is working on a whitepaper documenting how to configure ScaleIO for SAP HANA environments, which should not take long to be published.

Meanwhile, check out SAP Note 800326 for the official support statement from SAP for ScaleIO, and if you want to learn more about this technology, its benefits and how it might fit your datacenter strategy, drop me a message and I'll be happy to help.

One final note: if you are a fan of server-based storage, need to scale out, would like alternatives to GPFS, and would like more choice for this kind of setup, either with IBM servers or other server manufacturers, the best news of all is that NOW you have it!

Stay tuned, as more news will come on this soon.

SAP HANA Network Requirements Whitepaper has been published



Just wanted to bring your attention to the publishing of the “SAP HANA Network Requirements” whitepaper, that can be found at: http://www.saphana.com/docs/DOC-4805

I’ve talked with some of you in the past regarding:

  • What is the needed throughput for certain LAN segments;
  • What is the maximum latency allowed for synchronous replication;
  • What network segmentation must be implemented for HANA Network Integration;
  • Etc.


All of these and other questions are answered in this whitepaper.


For me, some aspects within the document caught my attention:

  • Recommended to have a maximum of 1 ms round-trip in the network connecting two sites when intending to implement synchronous replication;
  • The demand for the HANA Internal Network (for inter-node communication in a scale-out cluster) to deliver a minimum of 2x10 GBit/s in full duplex;
  • Recommendation to have up to 9 network segments (with the implied demand for server network ports) for performance and security reasons…
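The 1 ms round-trip recommendation translates directly into a distance limit between sites. A quick sketch (my own back-of-the-envelope numbers, not taken from the whitepaper: light in fibre propagates at roughly 200,000 km/s, and any equipment latency eats into the budget):

```python
# Distance limit implied by a synchronous-replication round-trip budget.
# Assumes ~200,000 km/s propagation speed in fibre (about 2/3 of c).

FIBRE_KM_PER_S = 200_000

def max_site_distance_km(rtt_budget_s, equipment_overhead_s=0.0):
    """One-way distance whose fibre round trip fits inside the budget."""
    usable_s = rtt_budget_s - equipment_overhead_s
    return usable_s * FIBRE_KM_PER_S / 2   # budget covers both directions

print(max_site_distance_km(0.001))          # 1 ms, pure fibre propagation
print(max_site_distance_km(0.001, 0.0004))  # minus 0.4 ms in switches/arrays
```

So, fibre alone caps two synchronously replicated sites at around 100 km apart, and real switch and array latencies push that limit well below the theoretical figure.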


Well, if this topic is of interest to you, be sure to reserve some quiet time to take a close look at this document, as it is indeed very detailed.
There are some aspects I would argue against or add to, but overall it is a very good document for all who want to get started on these topics.

Also remember that I keep an exhaustive list of relevant SAP technical documents to make it easier for you all to find what you are looking for.

Happy reading!

2014-10-04

Does SAP HANA Require NAS storage? No it does NOT!

Although I've written quite a bit about this, there are still many people confused about whether SAP HANA needs NAS storage.

So, to make it clear, NO IT DOES NOT!

What SAP HANA requires is a shared file system, in the same way the old SAP Netweaver systems had a shared file system called SAPMNT.

The same happens with HANA: it is called /hana/shared and must be accessible by all HANA nodes, just as SAPMNT was in the Netweaver world.

So, using a NAS storage system or gateway is one way to achieve this goal, but it is not the only one.

For example, in the same way it was done for Netweaver's SAPMNT, you can use a Linux server with Pacemaker to export an NFS share out of a block device in a highly available way.

The consideration you need to take here is the size of your cluster. If you are going for scale-out clusters with a large number of nodes, you don't want this NFS share to become a bottleneck, so ensuring proper network connectivity (latency, throughput and availability) requires more careful planning than just throwing it onto any existing Linux machine.

I've written a lot on why you should use block devices for HANA.


All this said, using a unified storage system that provides both block and file connectivity may also serve your needs: use the block connectivity for the data and log devices, and the NAS functionality for /hana/shared.

Again, this is a possibility, not a need.

Finally, considering that most HANA projects I'm seeing these days are for Suite on HANA, which implies single-server implementations, doing this in a TDI setup may be the simplest, most integrated option: connect two servers to external storage for high availability and, while installing Linux, add the clustering software package just to protect and export the /hana/shared NFS share. (You can read more about running SAP Business Suite on HANA in a TDI setup in my blog post here: http://sapinfrastructureintegration.blogspot.com/2014/09/running-sap-business-suite-on-hana-in.html.)

Again, these are all possibilities, and fortunately, as HANA has matured a lot over the last year, now you have these well known options also available for HANA, further providing you choice in a way that you can standardize your datacenter practices.

As a conclusion, HANA is a lot more open today than it was just a year ago, so don't go for proprietary solutions that, having been the first, are neither the easiest nor the best for most customer cases. Take your time to evaluate the current architectural options and decide on the most standard possible application architecture and building blocks across your datacenter, including HANA of course!

Having this uniform architecture across the datacenter will drive down your risk, enable more agile changes, and in the end deliver a more streamlined and cost-effective operation.

Hope this helps, and feel free to shoot me any deeper technical questions you may have in this regard.

2014-09-18

Running SAP Business Suite on HANA in a TDI Infrastructure? Yes you can.

Lots of discussion has been going around in regards to the following questions:

  1. Can Business Suite on HANA (SoH) run in an infrastructure implemented as Tailored Datacenter Integration (TDI)?

  2. What are the storage sizing guidelines for Business Suite on HANA in a TDI configuration?

Today’s reality for SoH on TDI infrastructure

Let me be very direct and give a very short answer to these questions, and then explain why these questions are coming up, and what the source of confusion is.

  1. YES! Business Suite on HANA can run on a Tailored Datacenter Integration infrastructure. You can confirm this by reading "Question 5" of the "General Information" chapter in the SAP HANA TDI FAQ available at: http://www.saphana.com/docs/DOC-3634 (note that this document is regularly updated, so make sure to check for the latest version online; my comments are based on the August 8th 2014 version). It states: "SAP HANA Tailored Data Center Integration can be used for both scale-up (single-node) and scale-out scenarios. For scale-out scenarios, SAP HANA TDI is limited to 16 nodes."

  2. The storage KPIs that need to be fulfilled by TDI configurations are the same for BW and Suite on HANA, as defined by SAP in note 1943937. So, the EMC storage best practices for SAP HANA TDI apply in the same way to both. Nevertheless, as Suite on HANA may involve higher space requirements, there is always the need to confirm whether the "minimum requirements" provided in the EMC best practices whitepaper deliver enough usable space for the node size being connected to the TDI storage. If additional space is needed, the disk count should be increased.

Reasons for customers and consultants being confused

So, why are these questions coming up?

The explanation comes from the SAP FAQ available at http://www.saphana.com/docs/DOC-3634, and in particular what is written in "Question 11" of the "SAP HANA TAILORED DATA CENTER INTEGRATION WITH ENTERPRISE STORAGE" chapter.

It states:

Q: Why are there different KPIs for SAP HANA appliances and for SAP HANA TDI with Enterprise Storage?

A: SAP considers appliances as the hardware offering providing highest quality and highest performance for SAP HANA systems. In SAP HANA appliances, the hardware components are solely used for the SAP HANA systems running on it; there is no sharing of hardware with other systems. Therefore, SAP’s requirements for SAP HANA appliances are higher than for SAP HANA TDI infrastructures...

 
This has led many people to question whether Suite on HANA could be run on a TDI setup, and also lots of people to ask for "our special recommendations to configure storage for Suite on HANA KPIs".


As you can clearly understand, there are in fact more demanding KPIs for dedicated Suite on HANA appliances, and some customers start to think: "I'm buying HANA to speed up my processes, and then going TDI makes it slower".


I have to say that some consultants have also contributed to the confusion on this topic.

 

                Running SoH in TDI does not mean running slower!!!

This reasoning is not completely true: the fact that SAP's requirements for TDI are lower than for appliances does not mean that special "shared" TDI configurations of enterprise storage systems cannot meet the appliance KPIs.

In fact EMC, together with Cisco, has just certified a 3TB Suite on HANA special configuration using external storage. This certification submitted the system to the most demanding “SAP Business Suite on HANA” storage KPIs, and can be found at: http://scn.sap.com/docs/DOC-52522#jive_content_id_SAP_HANA_SoH_optimized_hardware_SLES_configurations

This proves that external storage can fulfill the most demanding SAP HANA Appliance KPIs, and that EMC has both the technology and knowledge to do it.

 

                Again, it’s all about the TCO.

But from reading the full SAP HANA TDI Q&A published at http://www.saphana.com/docs/DOC-3634, namely "Question 6" of the "SAP HANA TAILORED DATA CENTER INTEGRATION WITH ENTERPRISE STORAGE" chapter, you can realize that, depending on your specific application workload profile, missing a single KPI may have no impact at all on the observed performance of the system.

So, if you want to run SAP Business Suite on HANA on a shared infrastructure while ensuring that the system meets the most demanding KPIs, stay tuned, as EMC’s recommendations will be coming soon!

EMC engineering is continuously working on improving its recommendations for SAP HANA storage integration, and I’m sure more openness and simplification will come, alongside more options for customers, even those requiring the most demanding performance numbers on a shared infrastructure.

Be aware, though, that current TDI recommendations may already deliver most of these more demanding KPIs, and that this should be enough for the majority of situations.

It’s important to realize this because we often forget that improving a few microseconds in latency, or a few MB in throughput, may imply a larger hardware investment, as resources that would otherwise be shared need to be dedicated to guarantee such demanding requirements.

So, think carefully about how important these small improvements are to you before deciding to go for “the best of the best of the best”, because that has a cost (be it CAPEX or OPEX; if you save on one, you will most likely pay more on the other). Maybe for your specific business case a 100-times improvement instead of a 110-times improvement is good enough, and at a much lower cost! Just think about it.

 

                The facts as of September 2014 (yes, all things HANA are evolving very fast!)

In summary:

  • YES, Suite on HANA can run on TDI infrastructures.

  • Today, the SAP HANA storage KPIs for TDI take into consideration neither the application (BW or Suite) nor the node size. So, our recommendations ensure that those KPIs are always met.

  • The fact that SAP's required KPIs for TDI are less demanding than the appliance KPIs for Suite on HANA doesn't mean that EMC's configurations for enterprise storage cannot fulfill many of the latter.
    • In fact, in some real-world examples I saw, when measuring the KPIs on a Vblock 700 configured under the "current standard TDI guidelines", we achieved more than 90% of the most demanding Suite on HANA KPIs, and on the ones that failed we were very close. Meaning, very close to the maximum performance KPIs required for SAP HANA SoH optimized appliances!

  • If you want maximum performance on a shared storage infrastructure, for example to take advantage of HANA HA, or disk-based system copy, backup or DR, EMC just got a first piece of evidence of our capabilities published. Find the Cisco 3TB SoH optimized appliance with VNX5400 external storage at: http://scn.sap.com/docs/DOC-52522#jive_content_id_SAP_HANA_SoH_optimized_hardware_SLES_configurations
    • But remember, the entries in the certification matrix are just starting evidence of what can be done. Do not hesitate to ask more questions and find YOUR desired SoH config.
    • Out of the learnings from this certification, EMC is still working on providing concrete guidelines on how to configure external shared storage in a way that the appliance KPIs are met. So, although today's SAP HANA TDI KPIs are the same for SoH and BW, and as this question comes up quite often, EMC will be providing further info for those customers that want the best performance on a shared infrastructure. Stay tuned.

  • If you want a better understanding of the various HANA storage KPIs and the potential performance impact of not meeting some of them, I advise reading:
    • "Question 6" of the "SAP HANA TAILORED DATA CENTER INTEGRATION WITH ENTERPRISE STORAGE" chapter in the SAP HANA TDI FAQ, which you can find here: http://www.saphana.com/docs/DOC-3634
    • The SAP HANA storage requirements whitepaper, where SAP explains the IO profile of HANA and what each IO KPI means, so that you can evaluate its potential impact on the performance of a specific SAP HANA system. This whitepaper also covers storage sizing. You can find it here: http://www.saphana.com/docs/DOC-4071

If at the end of this discussion you still have questions on whether running Business Suite on HANA on a shared storage TDI infrastructure is a good idea, make sure to read in detail the "IT Management Fact Book for Business Suite on HANA", which you can find here: http://www.saphana.com/docs/DOC-4223

For me, this document makes it clear that TDI is the future of HANA datacenter integration, simplifying HANA operations practices.

 

Final thoughts

If you watched Bill McDermott’s (SAP’s CEO) keynote at this year’s SAPPHIRE, for sure you remember the title: Simplify Everything, Do Anything.

Well, it’s more than appropriate.

SAP is going full steam on simplifying SAP HANA adoption for its customers. I interpret that as also meaning a further opening of SAP HANA's datacenter integration options, providing more choice to customers.

So, I stick to the conclusions I wrote in my blog post on TDI from January this year, where I analyzed SAP HANA's evolution.

My advice when making a choice would be: if any provider or consultant tells you that SAP HANA is very complex and very restricted in options, take a second look, as they are either trying to corner you into their proprietary solutions or have no real clue how much HANA has evolved in the last year.

SAP HANA is in fact becoming a normal application in the datacenter, with its unique workload pattern, of course. But its datacenter integration practices are aligning further every day with what is normal for IT operations in most datacenters around the world.

And this is good news!

  • Good news for customers in the first place, as it will be simpler, faster and cheaper to implement SAP HANA (a big example here is the support for HANA on VMware).

  • And good news for SAP as well, as this will enable an increased pace of HANA adoption.

2014-07-28

SAP HANA Fiber Channel Storage Connector - Where and How?

You've done the SAP HANA certification with SP7 contents, and you've learned that HANA can natively connect to block-based storage, and that the HANA NameServer can manage the mounting and unmounting of the storage volumes, as well as the unattended failover of HANA nodes, through SCSI-3 Persistent Group Reservations.

But you don't know how to get the storage connector, neither how to configure it.



          Where can I find the HANA storage connector and its scripts?

Well, the storage connector comes with the SAP HANA kernel as of SP5, so you don't need any additional package. The scripts that are called by the name server also come with the SAP HANA kernel.


          How does the configuration of the HANA storage connector for FC work?

The administration guide for the "SAP HANA Fiber Channel Storage Connector" can be found attached to SAP Note 1900823.

Download it here: https://service.sap.com/sap/support/notes/1900823
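As a rough sketch of what the result looks like (from memory, so treat the key names as an assumption and verify them against the guide attached to SAP Note 1900823; the WWID values are placeholders): a scale-out installation using the Fiber Channel Storage Connector gets a [storage] section in global.ini along these lines:

```ini
# global.ini -- storage section for the fcClient connector (sketch only;
# verify the exact key names against the guide in SAP Note 1900823)
[storage]
ha_provider = hdb_ha.fcClient
partition_*_*__prtype = 5
partition_1__data__wwid = <WWID of data LUN for node 1>
partition_1__log__wwid = <WWID of log LUN for node 1>
partition_2__data__wwid = <WWID of data LUN for node 2>
partition_2__log__wwid = <WWID of log LUN for node 2>
```

With this in place, the master NameServer knows which LUN belongs to which partition, and can mount, unmount and fence them during a failover without any external clustering software.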


          Why would you want to use block access to the HANA persistency devices?

There are two main reasons, relating to latency and reliability.

In regards to latency, overall HANA performance benefits from the write latency of the 4K and 16K log blocks being the best possible. And FC access to block storage provides the best latency among SAN-based storage options. Here I've seen normal arrays providing latencies usually below 500 microseconds, with some even going below 400 microseconds.
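To see why those microseconds matter: a transaction cannot report its commit as complete until its log write is acknowledged, so log write latency caps the rate of serially dependent commits in any single stream. A simplified illustration (my own model, not a HANA benchmark; concurrent streams can of course overlap):

```python
# Upper bound on serially-dependent commits per second, assuming each
# commit must wait for one synchronous log write to be acknowledged.
# Illustrative model only; real commit rates depend on concurrency.

def max_serial_commits_per_s(log_write_latency_us):
    return 1_000_000 / log_write_latency_us

for latency_us in (400, 500, 1000):
    rate = max_serial_commits_per_s(latency_us)
    print(f"{latency_us:4d} us log write latency -> "
          f"{rate:.0f} commits/s per stream")
```

Cutting latency from 1 ms to 400 microseconds more than doubles this ceiling, which is why sub-500-microsecond arrays are worth caring about.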

As for reliability, as many SAP notes can confirm, when you share the disk volumes of HANA in a scale-out cluster on a file share, it's challenging to implement proper fencing mechanisms that prevent a failed node and the standby node from both writing to the same volume, which would end in data corruption.

SCSI-3 Persistent Group Reservation is the mechanism mission-critical clustering software has been using to protect database clusters for years. The beauty here is that, instead of needing any type of clustering software, in the case of HANA the master NameServer takes care of handling the failover of failed nodes to available standby servers through the usage of this block API (the popular name for the SAP HANA Fiber Channel Storage Connector).

So for High Availability purposes, there is no need for any additional software. HANA kernel already brings this functionality natively.

This solution provides a robust failover clustering implementation, leveraging a well-proven mechanism over one of the most efficient communication protocols: block over Fiber Channel.


------------------------//-----------------------


DID YOU KNOW that the Python scripts SAP uses for SAP HANA's connection to block storage were initially developed together with EMC engineers?

EMC engineers worked on this project together with SAP, and as of SP5 SAP included their code in the standard SAP HANA kernel release; SAP is now the owner of that code and responsible for its support and evolution.

Today this connector (block over Fiber Channel) is available to all other storage manufacturers, and is the one most used for the SAP HANA Tailored Datacenter Integration certification of enterprise storage, block over FC having become the standard in SAP HANA storage connectivity.

This is one of the reasons why EMC was one of the first hardware partners to get an enterprise storage system certified under the TDI program.



Check out the current list of certified storage systems for SAP HANA TDI at: http://scn.sap.com/docs/DOC-48516

Find out more about EMC Solutions for SAP HANA at: https://community.emc.com/docs/DOC-17151

2014-07-25

Large ERP Systems on HANA? Balancing performance and availability.

I just came from a meeting with a large customer.

The topic: migrating to HANA!

I found that SAP is working with this customer to migrate a 70 TB ERP and a 90 TB BW, both to HANA.
Will it work? Of course it will!

So here are the unanswered questions:
  • But isn't Suite on HANA only available on Single Node?
  • What about datacenter practices for aggressive uptime and data resilience demands? What kind of compromises or balances will have to be made between data resiliency and system performance?
Let me give you my perspective on this.

There is a restricted customer program to address the needs of Business Suite customers migrating to HANA where the target database will not fit today's largest servers, which offer 6 TB of RAM.

This program presents lots of conditions and restrictions, and I believe will evolve a lot over the coming years.

Some of the details can be found on the following SAP Notes:

          Suite on HANA Scale-Out

So, from these notes you can see that scale-out support for Business Suite on HANA is coming "sloooowly".
Sometimes we see news like this and wonder whether it is really happening. Well, I've just met two customers on this journey, one in Europe and another in Asia. So, definitely, there are customers today moving very large mission-critical ERP systems to HANA.

Curiously, the cases I saw were customers exchanging DB2 for HANA. Some of that SAP on Power/AIX/DB2 going away to HANA on x86... an interesting aspect to think about...

This means that if your sizing points to a system larger than 6 TB, escalate this to your SAP account team and get your company approved for this restricted program supporting Suite on HANA scale-out.

There are some considerations in regards to Suite on HANA configs:
  • Both in BW and Suite on HANA, as row tables are still all loaded on the primary node with HANA SP8, the size of the row store will be critical in two aspects:
    • the startup time of HANA in the case of a failure, as the row store is fully loaded before user connections are accepted to HANA;
    • the size of the server: even with servers with 6 TB of RAM, as only about half of it can be used for persistent data, we are talking about a maximum of between 2 and 3 TB of row store, which for lots of customers I know will imply implementing housekeeping practices they never had over the last 10 or 20 years...
  • There are two operations that will imply cross-node communication, and that will become critical in terms of performance:
    • cross-node joins, so grouping the tables that are most often joined on the same node as well as possible will be critical, and it will be equally critical for system admins to monitor this and evaluate the need for table redistribution in case it starts to become a problem;
    • multi-step commits for those cases where a commit implies writing to objects on multiple nodes, meaning each node needs to make its own commit and then communicate it, so that the commit is only reported as done once all nodes have committed their part (the classic problem of scale-out shared-nothing architectures). So minimizing these occurrences will be critical for write performance.
So, two recommendations become very clear:
  1. Be very tough from the beginning about limiting the amount of row tables in custom developments, as well as ensuring aggressive housekeeping on them;
  2. Build your cluster from the start with a very powerful and performant inter-node network connection, as this will easily become the weakest point in the performance chain.
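The row-store sizing concern above can be put in numbers (an illustrative model of my own, using the rule of thumb that only about half of a node's RAM is available for persistent table data):

```python
# Quick check of whether a row store fits on the primary node.
# The 0.5 data_fraction encodes the rule of thumb that roughly half
# of a node's RAM is usable for table data. Illustrative only.

def row_store_fits(node_ram_tb, row_store_tb, data_fraction=0.5):
    budget_tb = node_ram_tb * data_fraction
    return row_store_tb <= budget_tb, budget_tb

for row_store in (2.0, 3.5):
    fits, budget = row_store_fits(6, row_store)
    print(f"{row_store} TB row store on a 6 TB node "
          f"(budget {budget} TB): {'fits' if fits else 'does NOT fit'}")
```

In other words, even on the largest boxes available today, a row store that has grown past roughly 3 TB has nowhere to go, which is exactly why housekeeping becomes non-negotiable.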


          Can I run Suite on HANA Scale-out with nodes of less than 6 TB?

When I talk about Suite on HANA scale-out, there are also lots of cases where customers would like to do it with smaller boxes, mainly for cost reasons, as 2 x 4-socket servers are usually a lot cheaper than a single 8-socket server.

If you read the SAP notes mentioned above, you can easily find that SAP only considers a customer for the Suite on HANA scale-out program if their sizing is larger than the largest box currently available. And lots of customers do not like to hear that.

But this is the current status.

I believe the future will make this evolve in one of two ways:
  1. Either SAP will, with time and experience, open up the Suite on HANA scale-out program further, which will for sure imply greater requirements in terms of the network connection between the nodes;
  2. Or all server vendors in the market will invest further in 8-socket servers, driving down their cost and making 8-socket x86 the new standard (which will depend a lot on Intel investing further in chipsets and other components targeted at 8-socket machines).
Which is the most likely? There are lots of variables at play, and it can go both ways.
Nevertheless, today option 2 seems the most likely.

But what restrictions do I expect to come out of SAP with further openness for Suite on HANA scale-out?
  • Higher requirements in terms of the network interconnect between the HANA nodes. As we have seen with Oracle's Exadata, it would not surprise me for SAP to demand 40 Gbit connections between the HANA nodes, as the inter-node communication will be very critical (maybe InfiniBand might become a requirement);
  • The other side that might represent a bottleneck would be the log writes of 4K and 16K blocks, for which I would also expect tougher requirements in terms of write latency and throughput. I believe technologies like EMC's recently acquired DSSD might become the solution for this challenge.


In summary, if you're going to spend money on InfiniBand switches, wouldn't this make a 2 x 4-socket config more expensive than a 1 x 8-socket config? Let me leave the question there for us to think about...


          The balancing act between performance and data resilience

But there is an aspect I do not see often discussed, and which is very important: the laws of physics!

Remember that, even with fiber optics, information travels at most at the speed of light. So if you have high resiliency requirements, for example data replicated to a very distant location, it will still take time for that data to reach the location.

And although global communications are improving in terms of bandwidth, while lowering costs, having large network connections over large distances still costs a lot of money.

So, with HANA we are talking about the system writing on the order of 1,000 MB/second per node. This means that with 10 nodes, we are talking about 10,000 MB/second to be transmitted over a very large distance.
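These two constraints, propagation delay and aggregate write rate, can be put into numbers with a small back-of-the-envelope sketch. The ~200,000 km/s fiber signal speed (roughly two thirds of the speed of light in vacuum) and the 1,000 MB/s per-node write rate are rough assumptions, not measurements:

```python
# Back-of-the-envelope physics of replication.

FIBER_SPEED_KM_S = 200_000  # approximate signal speed in optical fiber (~2/3 c)

def sync_write_rtt_ms(distance_km: float) -> float:
    """Minimum round-trip time a synchronously replicated write must wait,
    ignoring switch/array processing overhead (which only adds to this)."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

def cluster_write_rate_mb_s(nodes: int, per_node_mb_s: float = 1000) -> float:
    """Aggregate write rate that must be shipped to the secondary site."""
    return nodes * per_node_mb_s

if __name__ == "__main__":
    for d in (50, 500, 5000):
        print(f"{d:>5} km -> at least {sync_write_rtt_ms(d):.1f} ms added per synchronous write")
    print(f"10-node cluster: {cluster_write_rate_mb_s(10):,.0f} MB/s to replicate")
```

At 500 km, every synchronous write pays at least 5 ms of round-trip time before any array overhead is even counted, which is exactly the physics limit I'm talking about.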

Why do I bring this point?

Because I've seen some very passionate discussions in one room on how to make SAP HANA perform as well as it possibly can, while in another room people were taking care of service level agreements for RPO, RTO and data resilience. I happened to enter both rooms, and it was like two different realities!

One will impact the other! If you want very high performance, you'll need to go to asynchronous methods to get your data to a second location.

And depending on variables like:
  • the affordable cost of communications;
  • the change rate of your system (the volume of changed data to be transmitted to the secondary site);
it may happen that your data loss (RPO) climbs above your requirements.

Or, if you cannot afford to lose any data and need synchronous replication, then either you are very limited in the distance you can replicate to, or you will observe a major impact on the write performance of your HANA cluster.
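To illustrate the asynchronous case, here is a rough sketch of how a write burst that exceeds the replication link turns into RPO exposure. The model and all figures are simplified assumptions of mine (it ignores compression, buffering limits and catch-up behavior):

```python
# Rough RPO estimate for asynchronous replication: when the change rate
# exceeds the link bandwidth, un-shipped data piles up, and that backlog
# is your effective data-loss exposure if the primary site fails.

def async_backlog_seconds(change_rate_mb_s: float,
                          link_mb_s: float,
                          burst_duration_s: float) -> float:
    """Seconds of un-shipped changes accumulated during a write burst,
    i.e. the rough RPO exposure if the site fails at the end of the burst.
    Returns 0.0 if the link keeps up with the change rate."""
    deficit = change_rate_mb_s - link_mb_s
    if deficit <= 0:
        return 0.0
    backlog_mb = deficit * burst_duration_s
    # Express the backlog as "seconds of changes" at the current rate
    return backlog_mb / change_rate_mb_s

if __name__ == "__main__":
    # A 10-minute burst at 1,000 MB/s over a 400 MB/s link
    print(f"RPO exposure: {async_backlog_seconds(1000, 400, 600):.0f} s")
```

In this illustrative case the secondary site ends the burst six minutes behind, regardless of what the SLA document says.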

My point here is that, when we reach certain volumes of data and change rates, the discussion is no longer a discussion of technologies, but of limitations of physics.

So, my advice here would be: adopt the technologies you know best, measure the impact they will have on application availability, and discuss that with your business.


          Conclusion

I've written a blog on why in certain circumstances hardware based solutions can be better for HANA implementations. My arguments there are not that hardware based solutions are always better, but that there are multiple circumstances where they may prove to be a better fit.

When we come to discussions about very large systems, we enter the world of disaster avoidance, and not disaster recovery, since the data gravity (volume of data in a single location) simply starts to impose limitations.

Today you can run very large systems on HANA, but what I've seen is that teams involved in these projects often put the limitation in front of the customer beforehand that only certain software-based solutions can be used, and through those passionate discussions forget the limits of physics and the cost implications of those options.

The comment I wanted to make then was: "Sorry guys, but you still have a lot to learn about IT operations...".

But instead I've tried (as I always do) to bring these guys to a new level of understanding: at certain points there is no single best technology option.

There will be multiple ways to solve a given customer challenge. They should focus on addressing the customer's business challenges through all the great benefits HANA brings to the table, and trust more in the partners who have been working on operations for years to ensure the technical solution at the datacenter level meets the requirements.

Here I have to say that there are still lots of missing pieces:
  • Still no TDI allowed for Suite-on-HANA scale-out, which I believe is an urgent need to meet these very large customer scenarios;
  • SAP is not involving their technology partners properly and early enough in the process, providing an incomplete perspective on the possibilities to their customers.

I believe these are natural signs of a maturity journey being walked. Hopefully, in some time, all these aspects and concerns will have become clearer to the key SAP and customer stakeholders, and we'll have better and more robust solutions to achieve a balance between data resilience and performance, leveraging both HANA's software features and the capabilities SAP's technology partners can bring to the table.

2014-06-04

The future for SAP HANA Productive on VMware is now! #LowerTCO

I'm watching live the SAPPHIRE keynote speech from Bernd Leukert, and on one side, VMware's CEO Pat Gelsinger comes live through satellite, and at the same time, one tweet led me to an SAP Press Release...

Wow!!!!

Awesome news!

Let me list them all:
  1. SAP HANA on VMware, in a single productive VM per physical host is now Generally Available;
  2. SAP HANA on VMware with multiple productive VMs per physical host is now released as Controlled Availability;
  3. VMware plans to release early in 2015 a new version of vSphere supporting up to 4 TB of RAM in a single VM, which productive HANA systems will be able to leverage! Just a few months away.
  4. VMware is also closely working with SAP to support SAP HANA scale-out in production across multiple VMs.
So, I have to say, to all those system integrators and customers who were waiting for this... if you haven't started yet, the time to start with SAP HANA in your organization is now!!!

Keep a close eye on the following SAP notes:


Check out all the great add-ons EMC can bring to the table either through the SAP HANA Tailored Datacenter Integration program, or by leveraging the most advanced converged infrastructures from VCE. Read more about this on my blog post here: http://sapinfrastructureintegration.blogspot.com/2014/06/negative-roi-when-considering-sap-hana.html

2014-06-03

How Infrastructure can help reduce the cost of operations of SAP HANA Systems


More than once I've seen people involved in SAP HANA opportunities who, without realizing it, built unfair doubts and concerns in their customers' minds about whether adopting SAP HANA is really the best option for a specific business challenge, and whether it is mature enough to support mission critical applications.

The result was a misperception of the achievable value in terms of both CAPEX and especially OPEX when implementing and running a SAP HANA landscape.

This has led in some cases for SAP HANA adoption projects to be halted, postponed or even canceled.

Among the different situations I’ve observed, one aspect keeps repeating itself: it relates to the datacenter integration of SAP HANA, and its operations practices.

I would argue that it has happened for 2 reasons:

1.      the people talking with the customer have no clue what it is like to manage and operate a datacenter supporting aggressive Service Level Agreements for business critical applications, nor the costs involved, nor all the options currently available for SAP HANA;

2.      customers managing business critical applications under aggressive SLAs know that changing people and processes is always more costly than implementing a new technology, so they always look beyond the high-level characteristics of that new technology, down to its implications for operations cost and risk down the road. If they find no trustworthy options, they may decide on a postponement or "additional research" before making a commitment.

In the last couple of days, EMC announced the availability of certain functionalities for use with SAP HANA, that I believe will revolutionize the way many customers have been facing SAP HANA deployment options until today, and that will simplify the Datacenter operations of SAP HANA systems, contributing to lower operations cost, less risk, and better lifecycle management options.

This announcement further confirms the high level of maturity that SAP HANA software has reached since its announcement in 2010, and enhances the choice available to customers looking into HANA as a platform for their mission critical applications.


I’m not sure whether most people involved with SAP HANA opportunities around the world are fully aware of how this announcement could help overcome the concerns many SAP customers have been expressing, as well as how much it could impact the costs associated with change, risk and operations management of SAP HANA systems in a mission critical datacenter environment.

So, through this blog post I’ll share what I’ve learned from the customers I’ve been meeting about their decision process, and will explain why this EMC announcement is so groundbreaking, also explaining the relevance of the technical solutions announced in terms of their impact on datacenter operations, tying into the potential for achieving overall lower OPEX when running SAP HANA system landscapes.

I will also further explain why I believe the SAP HANA appliance model will become obsolete, by exposing the case of Converged Infrastructures as a factory-built infrastructure stack that can be shared among HANA and non-HANA workloads, also contributing to driving down HANA project CAPEX.

2015-05-01 Update: SAP has come to agree with all my arguments here, and the evidence of that can be found in the "New HANA Economics" presentation, which I would strongly suggest you go through. Although this presentation was built to support the argument that running SAP applications on HANA is more cost effective than running them on Oracle, the same reasoning also supports everything I'm arguing in this blog post.

Setting the scene

As more customers adopt SAP HANA systems, its use scenarios also expand.

While many customers realize the potential benefits of integrating this technology into their applications portfolio, many still struggle either to achieve a positive ROI business case for its adoption, or get concerned with the limited datacenter integration options available, which most of them know will be a key factor driving up IT Operations, Risk and Change management costs.

From this perspective, some even say that the variety of options for the datacenter integration of an application is a good sign of its maturity. This leads to the view that SAP HANA still lacks the maturity as a product to reach the prime time of business critical applications in many customers’ datacenters.

I once heard a wise CIO say that for his mission critical applications he wanted “the almost latest technology”, as testing new things with the core business is a risk that has cost many people their jobs.

Having people involved in SAP HANA project opportunities push the “pitch” they were taught, presenting unrealistic and very limited options for datacenter operations without fully understanding the variables customers consider when running Proofs of Concept, may sometimes do more harm than good in helping customers make their decisions.

Figure 1: SAP HANA Tailored Datacenter Integration for Enterprise Storage enables sharing the same storage infrastructure for SAP HANA and non-SAP HANA applications.


I’ve written before that I believe the SAP HANA Tailored Datacenter Integration program will make the appliance model obsolete, not only by driving further choice and openness for customers, but also by making it easier for SAP partners to bring their A game to the table.

Recent announcements further confirm this, by expanding SAP HANA's datacenter integration options and, through that, providing more choice to customers, further showcasing its current high maturity levels.

EMC has just announced the release for productive use of the replication (synchronous, asynchronous, concurrent and cascade), cloning, snapshot and consistent split technologies embedded in their VMAX arrays, for usage in production with SAP HANA Systems both in physical as well as virtualized configurations.

These technologies will enable major cost savings and “de-risking” of SAP HANA implementations, and SAP resources involved in “HANA datacenter related” discussions should at least be aware of them.

So, let me try and shed some light over a different perspective in regards to the "why" customers may value these new possibilities, in order to help you expand your understanding of the currently available possibilities to integrate SAP HANA into existing datacenter practices.


            Why customers consider implementing SAP HANA

Today, with globalization, companies need to be faster and more precise than ever in their business decisions.

This is taking many organizations to decide for the adoption of SAP HANA, the new In-memory Database from SAP, as a way to accelerate the way the company is operating.

With the increased adoption of SAP HANA by customers globally, the variety of use cases also expands, leading to increased situations where SAP HANA will become the primary persistency of Business Critical Applications.

Business critical applications, due to their relevance to the company’s business, usually imply more demanding technical requirements, which lead to increased overall costs associated with their implementation and operation.

Those technical requirements are usually related with aspects like:

·         Availability;

·         Recoverability;

·         Data Protection;

·         Performance.

Nevertheless, companies don’t buy technology just because it is the fastest, or the best in the world at any particular characteristic.



Figure 2: Example of balancing factors in customer’s decision process related with new technologies adoption.

The best managed organizations (and the ones more subject to fierce competition) make decisions on a balance between the expected Business Benefits and the Costs associated with those technologies, evaluating carefully what is the reasonable Return on Investment to be achieved.

And what I’ve observed is that for many organizations evaluating SAP HANA adoption projects, considering only some of the limited options available in the past made it impossible to solve this equation in a way that reaches a comfortably positive ROI result.


            Questions that delay SAP HANA adoption

This happens because most organizations undertake a decision process that involves asking (and getting fact-supported answers to) a number of questions beyond simply finding out whether SAP HANA is the fastest thing they have ever seen. Examples are:

·         What technical requirements must be met to support the intended business use of SAP HANA?

·         What will be the investment needed to implement SAP HANA?

·         What will be the operational implications in terms of cost and risk of meeting the Technical Requirements based on the various Implementation Options?


Figure 3: examples of aspects evaluated by organizations in the process of deciding for SAP HANA adoption

After evaluating all these needs, customers must reach a positive ROI scenario to move forward with their purchasing decisions.

Usually, Proofs of Concept have the goal of confirming that HANA can speed up the business processes to the expected levels, but also of gathering the knowledge needed to respond in an “evidence sustained way” to questions like the ones in the figure above.

I’ve heard from our customers that in many cases where negative ROI is reached and the purchasing decision is halted, the key factors contributing to drive up the cost are associated with risk, change and operations management.

In fact, in most customer scenarios the cost of operating a given business application over a 5-year period is far greater than the investment required to implement it, and we hear this same feedback from customers when discussing SAP HANA adoption projects.

When evaluating more than just the CAPEX part of a SAP HANA adoption project, I’ve unfortunately found quite a few customers where the calculated Total Cost of Ownership (TCO) of a SAP HANA solution (the OPEX part of the equation) simply led to a negative ROI business case. The consequence was a mandate to find other alternatives able to provide a positive ROI business case, or simply halting, canceling or postponing the SAP HANA related projects under evaluation.

This is quite unfortunate, as today's reality provides far greater options and openness than was possible one year ago, and many times that information is not made available to customers.


            Customer experiences in Driving Down cost of operations

A key factor driving down costs associated with risk, change and operations management is to have known and consistent practices & technology solutions across the technology stacks used to support those business applications.





Figure 4: Key factors driving cost associated with SAP HANA risk, change and operations management

In fact, it is today an accepted principle that standardizing technology and processes across the datacenter represents a huge cost saving factor.

Why? For many reasons! But let me highlight some.

Having standard technologies across the datacenter:

·         drives less cost on spares;

·         enables pooling of similar resources driving higher utilization levels, and through that lower costs;

·         simplifies automation, as the automation tools and procedures can be used across more similar components;

·         enables economies of scale;

·         drives less dispersion of skillsets and through that more expertise, which has as consequence the ability to manage risk better, as well as dedicate more resources to service improvement and innovation;

·         enables higher automation levels, which frees IT personnel to further engage in supporting strategic business initiatives.

In summary, reducing the IT Datacenter Portfolio dispersion drives simpler change processes, lower risk of operations and overall less cost for the whole Datacenter operations.

To make myself look smarter, let me put this in an equation:

ROI = Business Benefits – TCO

TCO = CAPEX + OPEX

Considering that OPEX for a solution over a 5-year period can easily be 4 times the CAPEX of that same solution, you can easily see from these very simple formulas that any reduction in OPEX will have a significant impact on the expected ROI of the project.
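Those formulas are trivial, but putting them in code makes the OPEX sensitivity obvious. The monetary figures below are placeholders; only the "OPEX ≈ 4 x CAPEX over 5 years" rule of thumb comes from the text:

```python
# ROI = Business Benefits - TCO, with TCO = CAPEX + OPEX.
# All monetary figures are illustrative placeholders.

def tco(capex: float, opex: float) -> float:
    """Total Cost of Ownership."""
    return capex + opex

def roi(business_benefits: float, capex: float, opex: float) -> float:
    """Return on Investment."""
    return business_benefits - tco(capex, opex)

if __name__ == "__main__":
    capex = 1_000_000
    opex = 4 * capex          # 5-year rule of thumb from the text
    benefits = 6_000_000
    base = roi(benefits, capex, opex)
    improved = roi(benefits, capex, opex * 0.8)  # a 20% OPEX reduction
    print(f"Baseline ROI:            {base:,.0f}")
    print(f"ROI with 20% lower OPEX: {improved:,.0f}")  # 800,000 better
```

With OPEX four times the CAPEX, a 20% OPEX cut moves the ROI by as much as an 80% CAPEX cut would, which is exactly why operations cost dominates the business case.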


            Further explaining the case for cross Datacenter standardization

The fact that operations costs account for the largest chunk of the Total Cost of Ownership of a business application gains further relevance if we remember that all organizations have multiple applications to fulfill their business needs, which is a force contradicting the standardization case I’ve just made.

Application choice is in fact driven by its capacity to provide the business functionality required, which often makes companies disperse their application portfolio across multiple vendors.

So, as application functionality drives dispersion, and in consequence drives cost up, companies look more and more to reap their needed IT budget savings out of the infrastructure stack.

Remembering again that no customer has only SAP applications in their datacenter further makes the case for ensuring that datacenter and infrastructure integration practices are as common as possible across all applications.

This principle is one of the key driving forces behind today’s broad consensus around the move to “Cloud Computing”, be it in its Public, Private or Hybrid flavors.



Figure 5: the key steps towards implementing a Cloud Enabled Infrastructure

(NOTE: I know that Virtualization also plays a key part in all this reasoning, but I’m leaving it out on purpose, as the sole benefits of virtualization are so huge, that I thought it would deserve a blog post just for itself.)

In the past, sharing all the standard datacenter architectures and processes between HANA and non-HANA workloads was not possible. But as HANA matures, new options have become available, making this possible today.


            Diving into the relevance of EMC’s announcements

EMC has been known for years for the mission critical capabilities embedded in its Symmetrix VMAX arrays. In fact, according to some analysts, EMC accounts for the largest installed base of SAP systems running on top of its arrays.

This means that most of SAP’s customers will already have EMC functionality to protect and manage their existing mission critical SAP systems.

So, being able to manage the same datacenter technologies and procedures, will represent for many of these customers a very low startup cost to add “just another application” on top of their existing setup.

In fact, in many customers, the combination of the multiple technologies of the array provide “out of the box” High Availability, Disaster Recovery, Data Protection, LifeCycle Management and Automation capabilities to all applications that sit on top of it.

For many customers, not being able to leverage these capabilities, and being forced by the past SAP HANA deployment model that limited HANA to a dedicated appliance, meant building a new silo in the datacenter, with a complete new set of datacenter components and its own unique set of processes and management tools. This represented such a startup cost and risk that it simply killed the business case for SAP HANA adoption.

For all these customers, the good news is that the SAP HANA Tailored Datacenter Integration (TDI) program has been Generally Available since November 6th, 2013, providing more SAP HANA deployment options and removing the restriction of having HANA available only through a dedicated hardware appliance.

As part of the SAP HANA TDI program, EMC has tested and proven the EMC VMAX technology for use with SAP HANA, and has released for productive use the following functionalities of the VMAX arrays:

·         EMC SRDF S/A/STAR

·         EMC TimeFinder Clones / Snapshots

·         EMC Enginuity Consistency Assist

This enables organizations to leverage those technologies for all their business critical applications (including SAP HANA) in a standard and consistent way, and through their use to drive up SLAs while driving down operations costs and risks, contributing to better ROI in SAP HANA adoption business cases.



Figure 6: Key benefits for SAP HANA customers from the adoption of the EMC VMAX storage for their HANA solutions

EMC has a set of built-in capabilities in the EMC Symmetrix VMAX storage system that enable organizations to set up an architecture for SAP HANA providing “near-zero data loss” and “near-zero data unavailability”.

With SAP announcing the support to run SAP HANA in Production virtualized with VMware, the benefits of running a virtual datacenter can now also be leveraged by SAP HANA.

The characteristics of the EMC VMAX presented above, represent a solid foundation not only to support SAP HANA setups in a shared physical infrastructure, but also for the setup of a virtual Datacenter, one that through the combination of the EMC VMAX and VMware vSphere capabilities, provide out of the box a set of availability, flexibility and lifecycle management features that can be leveraged between SAP HANA and other SAP or 3rd party applications at a lower cost than silo based architectures like the old SAP HANA appliance model represented (but let’s leave the virtualization part for another blog post).


            Explaining the value of all these EMC News for SAP HANA

For those of you not familiar with the EMC technology naming, let me take some paragraphs now to map the relevance of those technologies to SAP HANA operations challenges.



Figure 7: Mapping of EMC VMAX Functionalities to SAP HANA Data Center Integration needs, driving up the SAP HANA Platform Resilience and Manageability

The functionalities of the EMC VMAX expand SAP HANA’s datacenter readiness by:

·         Simplifying High Availability:

o   The EMC VMAX has built-in characteristics that provide up to six 9s of availability, like the replacement of most parts without downtime;

o   The usage of the SAP HANA Storage Connector for Fibre Channel (aka SAP Block API) enables transparent and non-disruptive HANA node failover in case of a single server component failure;

·         Consistent practices for DR for SAP HANA and all other applications:

o   SRDF/S enables synchronous replication across storage volumes, ensuring an RPO of the last committed transaction, as even the log files are replicated (enabling as well a point in time recovery on the secondary site, if needed);

o   SRDF/A enables asynchronous replication of the SAP HANA data and log volumes over large distances, enabling SAP HANA to follow the same DR practice as all your other applications;

o   SRDF/Star enables multiple scenarios of cascading and concurrent replication to comply with the most demanding data protection requirements combining simultaneous synchronous and asynchronous modes to multiple different destinations.

·         Improved Backup Capabilities

o   Through EMC TimeFinder it’s possible to build consistent disk based clones of SAP HANA Systems;

o   Through the integration with the SAP HANA snapshot functionality, it is possible to do point-in-time recoveries of SAP HANA from HANA Studio, using a storage snapshot as the starting point (available as of SAP HANA 1.0 SPS 07);

o   Deliver restartable copies of SAP HANA through HANA’s snapshot capabilities (available as of SAP HANA 1.0 SPS 06);

o   Backup productive systems through a snapshot taken at a remote replicated site through the integration of EMC’s TimeFinder with EMC’s SRDF and SAP HANA Snapshot based backup functionality;

·         Simplified Quality Assurance System Refresh

o   Use storage snapshots to build a restartable image of SAP HANA systems without need to stop production, and be able to convert those snapshots to full clones if needed;

o   Be able to perform the operation referred on the previous point either on the productive site, or on one of the sites to which the system is replicating using EMC’s SRDF;

o   Automate the storage clone and the mounting of the new cloned volumes through the usage of EMC’s Replication Manager software;

·         Simplified test scenarios in virtualized environments

o   Through EMC’s VMAX integration with VMware vSphere’s VAAI, take storage based snapshots or clones either locally or remotely, through a combination of EMC’s TimeFinder Clone/Snap with EMC’s SRDF, enabling consultants to build a copy of production very fast for error regression debugging;

·         Simplified test scenarios in physical environments

o   Take clones of productive systems and easily build clones of clones, or convert snapshots to clones, either locally in the production datacenter or remotely on a replicated site, for regression testing of production errors, through the integration of SAP HANA’s snapshot functionalities with EMC’s TimeFinder and EMC’s SRDF, using EMC’s Replication Manager to automate the pre- and post-split steps to very quickly build the new test HANA system;

·         Secure DR Testing

o   Through the integration of EMC’s SRDF and EMC’s TimeFinder technology, you can build a full clone of your production environment, and mount it on servers for DR testing, without interrupting replication of your productive environment, and through that enabling DR testing without risking production protection;

·         Cross application consistent backup

o   In cases where multiple applications are interdependent, and the loss of data in one fatally affects the others (as for example when SAP HANA is used as a sidecar to a SAP ERP system with SAP SLT for trigger based replication, where a loss of data in the ERP implies the full reset of SLT and a full reload of HANA, leading to multiple hours of recovery), the integration of EMC’s TimeFinder Clones or Snaps with the Enginuity Consistency Assist makes it possible to federate the disk volumes of multiple systems for backup purposes, and to recover them to the exact same point in time consistently, avoiding the need for the HANA system reload;

·         Cross application consistent Quality Assurance landscape refresh

o   Using the same technologies referred to in the previous point, enable the consistent build or refresh of a quality assurance environment, ensuring that a federated environment is refreshed at the exact same point in time (useful for example when refreshing a QA environment composed of connected ERP, SRM and BW systems, eliminating the need for functional teams to align the systems’ data after the technical refresh, and through that speeding up and simplifying test plans).
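As a footnote to the availability point in the list above, the "six 9s" figure is easier to appreciate when converted into allowed downtime per year, a small, standard calculation:

```python
# Converting availability percentages ("nines") into allowed downtime
# per year, to put the "up to six 9s" figure above in perspective.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (non-leap year)

def downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime per year for a given availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR

if __name__ == "__main__":
    for a in (0.999, 0.9999, 0.99999, 0.999999):
        print(f"{a:.6f} -> {downtime_minutes_per_year(a):8.2f} min/year")
```

Six nines allow roughly half a minute of downtime per year, versus almost nine hours at three nines, which is the kind of gap that makes or breaks an aggressive SLA.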


A brief look just at replication as one example

To further illustrate the benefits above, let me look at the example of setting up a disaster recovery architecture and associated procedures.

I have to say that when I started my career I was a huge fan of database software replication as a way to build disaster recovery configurations.

It was so cool! I had to script a lot of stuff, and you just needed a dumb box on the other side.

So, back then, I couldn’t understand why anyone in their perfect mind would choose to do hardware based replication using the storage systems replication capabilities.

I came to understand it the “very hard way” some years later, when I came to manage the operations team at a customer (I had all the system administrators and operators reporting to me, with application availability SLAs carrying HUGE penalties for any downtime that exceeded 1 hour).

When an "explosion" was reported at the primary datacenter, I had to activate the disaster plan and get the team working on activating the Disaster Recovery systems.

Through that very long Saturday night that I spent “awake” managing firefighters, policemen, the customer and my team, I got a very extensive understanding of Murphy’s Law:

·         As DR activation involved scripting, I found out that despite all our governance procedures, there were changes made in the production system that hadn’t been applied in the same way on the DR site, which was an impediment to getting some critical systems up and running;

·         I also came to understand that sometimes when activating a DR plan you don’t have your best resources available;

·         It may also happen that your most skilled resources do not react well under the pressure of a real disaster situation;

·         As the DR was based on database level replication mechanisms, some critical files at the filesystem level, needed to get some business critical processes running, simply weren’t there…

Well, I have to say that this event changed my understanding of Disaster Recovery planning, and made me appreciate hardware based, comprehensive, automated and complete replication solutions.



Figure 8: analysis of what is, and is not, included on the SAP HANA System replication.

So, allow me to also apply this learning to SAP HANA.

Last week I was at a customer event, and not surprisingly, the majority of customers sent very experienced operations people to meet me, who had been involved in operations for many years.

So, for these guys, this is so obvious that you don’t even need to explain it. One of those cases was a manufacturing company that had a failure in their production system and was out of business for almost 2 days.

The customer mentioned that this event had as consequence:

·         Full stop of production, as they managed all production planning and supply chain through SAP;

·         Loss of revenue for not having their product on the shelves of supermarkets;

·         Major costs to restart the production plant, as a stop means product gets stuck in the pipes, and raw material with a short shelf life had to be thrown into the garbage.

This customer commented that everyone involved in the architecture and management of the IT components responsible for this failure is no longer with the company.

Sometimes I may look like an alien in the eyes of colleagues who have SAP HANA application knowledge but were never in charge of datacenter operations for critical systems. Experiences like these teach you lessons that last a lifetime, and those lessons are what lead solutions that were not the most obvious choice in some eyes to actually be adopted by many customers.

For this customer, being able to leverage the EMC technologies they purchased after that failure is a key decision factor in the adoption of SAP HANA, and this news was very well received, helping kick-start preparations to migrate existing applications from Oracle to SAP HANA.

It’s all about the TCO!!!

Still, at this stage, some might argue that SAP HANA System Replication makes all these storage array functionalities irrelevant in the HANA space.

Well, if it wasn’t clear yet, it's all about the TCO!!!

Licensing costs, or even the total CAPEX of the solution, represent only one part!

Change, Risk and Operations management may be more difficult to calculate at the starting point of a project, but an experienced customer will never leave them out of their TCO calculations.
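To make this concrete, here is a deliberately simplified sketch of how such a TCO calculation could look. All figures, category names and the five-year horizon below are hypothetical assumptions of mine, purely for illustration; they are not real pricing for any vendor or solution.

```python
# Illustrative only: a toy multi-year TCO model. All numbers are
# hypothetical assumptions, not real pricing for any vendor.

def total_tco(capex, yearly_opex, risk_cost_per_incident,
              incidents_per_year, years=5):
    """Sum the acquisition cost, the recurring operations cost and the
    expected yearly risk cost (incident cost x expected incident rate)."""
    return capex + years * (yearly_opex
                            + risk_cost_per_incident * incidents_per_year)

# Option A: lower CAPEX, but more manual operations and higher outage risk.
option_a = total_tco(capex=500_000, yearly_opex=120_000,
                     risk_cost_per_incident=250_000, incidents_per_year=0.2)

# Option B: higher CAPEX, automated replication, lower expected risk.
option_b = total_tco(capex=700_000, yearly_opex=80_000,
                     risk_cost_per_incident=250_000, incidents_per_year=0.05)

print(option_a)  # 1350000.0
print(option_b)  # 1162500.0
```

Even in this toy example, the option that is cheaper on CAPEX alone ends up more expensive once operations and expected risk are included, which is exactly the point experienced customers make.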

And that is why it is so important to make customers aware of what possibilities are out there.

So, rather than doing just a simple CAPEX cost analysis, I've tried an exercise that exposes other variables that may be relevant to many other customers.



Figure 9: Example of a possible alternative analysis of SAP HANA DR Options.

With the table above, I don't presume to be right in all my classifications.

The point is to showcase that there are other variables, relevant to many customers, that have consequences for the TCO equation, and so should be considered whenever they matter for a specific customer's SAP HANA implementation scenario. For example:

·         Being able to replicate all the data inside a system, and not only the data inside the database, may provide increased reliability in DR activation;

·         The possibility to use the same replication technology for all applications in the datacenter may simplify DR activation and get the business processes running faster, as most business processes do not work with only 1 application online;

·         The possibility to ensure a consistent backup and restore across different applications may be key to minimizing the impact of logical errors (I have more than one customer scenario where this is a must to ensure consistent recovery of ERP, SLT and HANA in a sidecar scenario, and to avoid the need for a full HANA reload in case of any failure);

·         For many customers, a Recovery Time Objective of a couple of minutes is an easy trade-off against all the other aspects I refer to in the previous bullets;

·         Some customers will not consider a technology for business critical applications that does not enable the testing of the Disaster Recovery plan without losing protection during that test;

·         For some customers, not needing active standby servers may represent a significant cost avoidance, not only in terms of CAPEX but also in terms of OPEX, as there are fewer OS images to manage, patch, etc.
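To illustrate how variables like these could be brought into a structured comparison, here is a toy weighted-scoring sketch. The criteria names, weights and 1–5 scores below are hypothetical assumptions of mine, not an authoritative rating of any product; each customer would plug in their own weights and scores based on their scenario.

```python
# Illustrative only: a toy weighted-scoring comparison of DR options.
# Weights and scores (1-5) are hypothetical assumptions, not a product rating.

criteria_weights = {
    "replicates_full_system": 3,            # all data, not only the database
    "single_dc_wide_technology": 2,         # one replication tool for all apps
    "cross_app_consistency": 3,             # consistent backup/restore across apps
    "rto_minutes": 1,                       # how fast the takeover is
    "test_without_losing_protection": 2,    # DR tests without dropping protection
    "no_active_standby_servers": 2,         # fewer OS images to manage
}

options = {
    "option_x": {"replicates_full_system": 2, "single_dc_wide_technology": 1,
                 "cross_app_consistency": 2, "rto_minutes": 5,
                 "test_without_losing_protection": 2, "no_active_standby_servers": 1},
    "option_y": {"replicates_full_system": 5, "single_dc_wide_technology": 5,
                 "cross_app_consistency": 5, "rto_minutes": 3,
                 "test_without_losing_protection": 4, "no_active_standby_servers": 5},
}

def score(option):
    """Weighted sum across all criteria."""
    return sum(criteria_weights[c] * option[c] for c in criteria_weights)

for name, opt in options.items():
    print(name, score(opt))
```

Note how an option that wins on a single criterion (here, the RTO) can still lose overall once the other variables a customer cares about are weighted in.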

These are just a few examples among all those I've gathered in the many customer meetings I've had over the last 12 months. But I believe they are enough to illustrate that there isn't a single version of the "truth" in terms of SAP HANA datacenter integration.

Again, presenting these options to customers will surely remove objections some teams may bring up, either during Proof of Concept projects or in the overall purchasing decision process.

What about the reason some customers love appliances?

Up until now I’ve been talking all about TCO reduction as a key factor driving SAP HANA adoption.

My main focus has been the OPEX part of the equation, but there are also customers concerned with the CAPEX part of it.

This is where having shared infrastructures for HANA and non-HANA applications comes into play.

Again, virtualization of SAP HANA systems with hypervisors like VMware is also very relevant to this discussion.

But one factor intrigued me from the start: in some discussions I’ve observed, people argued against my blog post foreseeing the end of the appliance model for SAP HANA, saying that there are a lot of customers who do want to buy an appliance.

Before closing this blog post, I wanted to touch briefly on this point.

Have any of you taken the time to ask your customers why they liked the appliance model?

Well, I took the opportunity of the many customer meetings and events I’ve attended over the last year to ask that question, and the responses I got surprised me completely.

I got answers like:

·         I’m not an IT engineering company, don’t want to build systems on my own;

·         Engineering and integrating IT infrastructures on my own takes so much time and resources, that I want to get it already built;

·         I have bought components with factory defects, and it took me weeks, and even data loss, before I could find out the exact cause of the failure;

·         With all the SLAs the different teams have in my IT organization, going from a system arriving in my datacenter to the moment I can log on to the operating system and install an application takes me no less than 3 months;

·         I want a single neck to choke in terms of infrastructure support problems;

·         I want a single provider to be responsible for the patching and integration of all the infrastructure components…

Well, the point is that none of the answers were SAP HANA specific!


So, I asked again:

·         What if you could have all that, but on an infrastructure that could be shared with all your other applications in the datacenter, thus avoiding building another silo?

Most customers answered without hesitation: that would be perfect!


But they then argued that such a scenario was not possible, as all vendors are moving to single-stack, siloed solutions, and so, caught “between the frying pan and the fire”, they would rather go for those appliances.

I realized then that most of those customers were not familiar with the concept of converged infrastructures.

I would compare “Converged Infrastructures” with the work auto manufacturers have done over the last years:

·         They produce an end product ready to be used: a car;

·         They do not manufacture every single component of it;

·         But they provide warranty and integrated support to the full solution: the car as a whole;

·         They engineer the system, specify it, and outsource production to partners;

·         They are then responsible for the physical assembly as well as the logical configuration of the product (for example setting the default language for the electronic systems – GPS, car audio, etc)…



Figure 10: an example of a comparison of the value of a Converged Infrastructure for SAP HANA, against some of the currently available appliance offerings in the market.

Again, I don’t presume to have covered all the key variables in my analysis above. My goal here is to show that, depending on the usage scenario of SAP HANA and on the specific customer situation, there may be other variables that matter.

IT is evolving, and so are IT infrastructure providers.

It’s my perspective that the maturity IT has reached has enabled IT infrastructure providers to evolve from single-component delivery, where customers were responsible for assembling and supporting the full solution, to a scenario where a provider delivers a “ready to drive” solution, one that can be driven by multiple persons (run multiple applications), as opposed to one with a single designated driver (a single-application silo).

Converged Infrastructures provide to SAP HANA customers the best of the SAP HANA appliance model in a shared “factory built” infrastructure.

When making this comparison I’m referring to VCE’s Vblock Systems, which incorporate from the factory all those EMC technologies I’ve been referring to throughout this blog post, thus enlarging the datacenter integration options of SAP HANA while driving down the risk and complexity associated both with SAP HANA implementation projects (the CAPEX part) and with SAP HANA operations (the OPEX part).


Conclusion

I believe all of this is great news to come out in the week SAPPHIRE 2014 is happening in Orlando, as these announcements are evidence of the maturity level SAP HANA has reached, proving it is a technology ready for the prime time of mission-critical applications in our customers’ datacenters.

I hope that people advising customers on SAP HANA datacenter integration and its impact on operations learn about these new possibilities, in order to present a realistic view of today’s SAP HANA openness and datacenter readiness at its full scale.

Fortunately, many around the world have understood the importance of staying up to date on the datacenter integration options of SAP HANA (not only from SAP but from partners as well), and I’ve been very privileged to join some SAP employees in customer events and discussions where these topics were openly approached and discussed; those discussions contributed decisively to helping account teams bring to successful closure SAP HANA deals that had been “in the oven” for quite some time without progress.

Also, all this news is further evidence of the major focus EMC Corporation is putting on further enhancing SAP HANA datacenter readiness, and through that also driving its increased adoption.

What I’ve touched here represents only a small part of all the things EMC has been working on, and there are a couple of things I know for sure:

·         The best news from EMC is yet to come (the DSSD acquisition coming to market, and more…);

·         Competition will only increase in the SAP HANA space, with competitors running fast to catch up on these announcements, which will further increase the datacenter integration and openness of SAP HANA, contributing to an accelerated pace of HANA adoption over the coming years.


If you were brave enough to read through this very long blog post, do not hesitate to get in touch with me with comments or questions!

All perspectives will be highly appreciated.


For reference, check out the following links: