These are the most challenging times most SAP customers and
service providers have lived through in the last 20 years, since SAP transitioned from
the mainframe world to the client-server world with the evolution from R/2
to the next generation of software announced in 1992: the R/3 system!
And the challenge comes with SAP's announcement that the future SAP ERP will be
called S/4 and will run only on HANA: customers who want to keep using SAP
software for the long run now face the fact that adopting SAP HANA is no longer
a matter of “if” but rather a matter of “how” and “when”.
Also, the emergence of Cloud and the multitude of options it
entails, pushed by very different and sometimes conflicting messages from all
the providers in the market, further adds to the challenge of deciding
what is the best IT strategy and architecture to serve the business needs
of the future.
Mix these with a very complex economic environment and an
increasingly de-regulated economy, where the ease of movement of
information, capital, people and goods across the globe is unprecedented,
demanding that organizations step up their business models to survive, in a
way that they become prepared either to build and shape markets or to rapidly
adapt to fast-changing market conditions. In such an environment, Information
Technologies are more than ever a critical component of competitiveness.
No business organization can live today without information.
IT organizations need to make sense of all the “noise”
happening in the market to find the most appropriate choice for their
particular situation - today and for the future.
Through this post, I’ll share my understanding of a
number of the changes happening, in a way that IT leaders can leverage for a
pragmatic and business-wise decision process.
This blog aims to demystify these concepts for readers not yet familiar with the topic.
SAP HANA, S/4 and the Internet of Things
In fact, it’s this reality of increased competitiveness that is driving the emergence of new paradigms like “real time” and “automated, machine-managed” business decision making.
As information, and the knowledge derived from that
information, become increasingly critical for organizations, the next frontier
is to be able to set rules for a business that – based on real-time data –
make it possible to steer decisions towards what is happening now, or even towards
the most likely future business conditions, and not only based on
information about what happened in the past.
Multiple events over the last decades have demonstrated that,
in just a few days, the business environment can change so dramatically that
what might have been a good decision based on the information of just one week ago may now
lead you to the abyss.
Also, the evolution of technology that today enables most
devices to have communication capabilities and to generate data on their status
and activity, combined with the increased availability of publicly relevant
business information, like online exchange rates and weather conditions, means
that the volumes of data available for organizations to analyze and decide
upon are exploding. In this space the buzzwords of the moment are:
- Internet of Things (IoT): referring to the availability of “data generating devices” connected online and generating information to be processed by business systems;
- Social: referring, for example, to data on users’ activity on the internet, which also represents massive volumes and variety of data;
- Big Data: referring to the massive volume, variety and varying value of the data generated by all of the above, which organizations need to tackle today.
All of this comes tied to the massive growth of “mobile”,
another buzzword that highlights the fact that today business users want –
like private users – real-time access at their fingertips to what
is happening with their businesses, where waiting hours or even minutes for the
information to show up on the screen is not acceptable.
S/4 is SAP’s response to this need for organizations to
operate in real time and to manage, in an integrated way, not only the
structured data that business systems have managed for the last two decades,
but also the big data generated by Social and IoT. And S/4 is powered only by
SAP HANA, as traditional databases are not able today to provide the
capabilities to respond to this new reality.
It is no longer enough for business managers to know what
happened in the past. Today it is fundamental to know what is happening now – in
real time – and, based on trends and the inter-relation between multiple
variables – internal and external – to forecast what the future may be,
enabling “what if”, forward-looking decision making.
It was because of this reality that SAP came up with SAP
HANA, a platform that merges transaction processing, operational reporting, and
prospective analysis – all mobile enabled – on the same platform.
As you know, SAP HANA is way more than a database; it is
indeed a data management platform architected for the current business
environment of uncertainty and fast pace of change.
To be able to tackle this business reality, SAP broke with
quite a few long-standing computing paradigms in HANA:
- In-memory computing: store large volumes of business data in memory for faster processing and response times in the order of seconds for mobile apps;
- Merge OLAP and OLTP: do transaction processing and analytics on the same platform, enabling real-time prospective analysis on current transactional data;
- Social and IoT: integrated processing of structured and unstructured data, merging machine data, social data and the existing structured business data, to drive new business models and faster “time to action”.
I don’t want to cover all of these in detail, as they have been
widely communicated by SAP, but each of these three aspects brings new variables
to the table that SAP Architects haven’t faced until now, so let us spend
a few paragraphs on them to explain their implications.
In-memory computing
In-memory computing has been the most communicated characteristic of SAP HANA, as the basis for “faster data processing”.
Why is that?
In the late 90s, one of the key barriers preventing computer
systems from analyzing and processing larger amounts of data faster was the limited
computing capacity of existing systems.
In the end this is all a matter of affordability, as the
cost of IT cannot be higher than the benefit IT brings to the business.
So, to process a certain volume of data you might
need to spend so much on computing capacity that a given scenario
simply wouldn’t be sustainable, as the market might not pay the cost of such a
solution.
This meant that, for many years, a significant share of the
innovation brought to computer systems was targeted at increasing computing
power (the ability to process larger amounts of data faster), reflected in
aspects like higher CPU clock speeds, increased capacity of RAM chips,
reduced latency and increased bandwidth in the communication between CPU and
RAM, alongside a significant increase in the size of the CPU’s internal cache
(working memory inside the CPU itself).
As the volumes of data managed by organizations increase, alongside
this massive increase in computing capacity, the bottleneck in computing
systems has moved to data movement and transmission.
So, this shift opened the space to change paradigms:
instead of storing data far away from the CPU, why not start storing it
as close as possible? Instead of using RAM just as a temporary buffer to
hold the data being processed, why not use RAM as a permanent store for data?
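To make the gain concrete, here is a minimal, illustrative sketch – plain Python, not SAP HANA code, and with an invented file name and data size – that runs the same aggregation once against data re-read from disk on every query and once against data already resident in RAM.

```python
# Toy illustration (not SAP HANA): the same aggregation run against a
# disk-resident column and against a column already held in memory.
# File name and row count are arbitrary assumptions for the sketch.
import array
import os
import time

ROWS = 1_000_000
PATH = "values.bin"  # hypothetical file simulating a stored table column

# Write one million 8-byte integers to disk to simulate the stored column.
values = array.array("q", range(ROWS))
with open(PATH, "wb") as f:
    values.tofile(f)

def sum_from_disk() -> int:
    """Read the column from disk on every query, as a disk-based store would."""
    col = array.array("q")
    with open(PATH, "rb") as f:
        col.fromfile(f, ROWS)
    return sum(col)

def sum_from_memory(col: array.array) -> int:
    """Aggregate a column that is already resident in RAM."""
    return sum(col)

start = time.perf_counter()
sum_from_disk()
print(f"disk-backed aggregation: {time.perf_counter() - start:.4f}s")

start = time.perf_counter()
sum_from_memory(values)
print(f"in-memory aggregation:   {time.perf_counter() - start:.4f}s")

os.remove(PATH)
```

The absolute numbers will vary by machine and caching, but the point of the sketch is the structural one: when the data already sits next to the CPU, the read-from-storage step disappears from every query.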
SAP was visionary in seeing the opportunity for this paradigm
change, and brought SAP HANA to market.
For organizations, using RAM as a permanent store of data
that is accessible by the CPU faster than ever means that, with SAP HANA,
they’ll be able to analyze more data faster. This opens the possibility of
streaming information about what is happening in the business in real time, and of
making “automated”, machine-managed, rules-based business decisions in real time.
With the bottleneck these days being data transmission, and
considering that of the data analyzed for a business decision
often less than 10% is newly generated, we are definitely up
for some significant paradigm shifts.
Note as well that this idea has a significant side
implication: it is now possible to make this data accessible on mobile devices
with response times in the order of seconds.
In an economic environment where uncertainty is the only
thing businesses have as certain, this is definitely an edge for many
organizations. But it will have implications on IT infrastructure requirements,
which will translate into impacts on other business variables, like risk and cost
for example. We’ll explore these impacts further ahead in the document.
As SAP HANA stores data in RAM, which is a type of
non-persistent memory internal to a computer, questions come up like:
how do I protect this system from disaster? How do I recover from a failure?
What volume is affordable to store in memory for my business scenario? How do I
operate and evolve such an environment? What communications architecture will I
need to put in place to handle these new volumes of processed and transmitted
data?
Merge OLTP and OLAP
The other characteristic communicated in regard to SAP HANA’s innovation is the merging of OLTP and OLAP on the same platform.
In the past, for the same affordability reasons referred to in the previous
point, software vendors designed two different platforms to
perform different types of processing for the business.
OLTP stands for online transaction processing. These systems
process the transactions of what is happening in the business.
OLAP stands for online analytical processing. These systems
are meant for data analysis and decision support.
Why were these two separate? Because, given the
limitations of past computer system architectures, performing
the processing needed for business analysis consumed all the resources on
the system; with both on the same platform, your business
operations (transaction processing) would be negatively impacted. I would
say this is one of the big reasons SAP came up with the BW system in the
early 2000s.
So vendors came up with two different platform designs, each one
optimized for a type of processing, and built processes both to keep them
isolated, minimizing the impact of one on the other, and to keep them as
aligned in terms of information as possible.
Again, this was another situation of computer systems
innovation driven by business needs, but at the same time of business functionality
constrained by technology limitations.
The exploding volumes of information organizations are
managing for decision making, and the need for increasingly up-to-date
information, have pushed this model of OLAP and OLTP on separate systems to
its limit. That the “OLAP and OLTP on separate systems” model has
reached its limit is clear from the number of organizations where the 24 hours
of a day are no longer enough to extract data from transaction processing
systems, load it into analytic systems, and report on that data.
Here SAP saw another opportunity to innovate. By leveraging
the massive computing capabilities of today’s systems, which have evolved a lot
faster than the improvements in data communications, and by realizing that what
makes the separation of OLAP and OLTP reach its limit today is the limitation
on data movement and transmission, SAP designed a system able to perform both
simultaneously.
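As a toy illustration of the idea – using SQLite purely as a stand-in, not SAP HANA, and with invented table and column names – the sketch below records transactions in a table and answers an analytic query directly from that same live table, with no nightly extract-and-load step in between.

```python
# Toy sketch (SQLite as a stand-in, not SAP HANA) of merging OLTP and OLAP:
# transactions are written to a table and analytic queries run directly
# against that same live table. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")  # the whole store lives in RAM
conn.execute(
    "CREATE TABLE sales_orders (order_id INTEGER, region TEXT, amount REAL)"
)

# OLTP side: individual business transactions recorded as they happen.
conn.executemany(
    "INSERT INTO sales_orders VALUES (?, ?, ?)",
    [(1, "EMEA", 1200.0), (2, "AMER", 830.5), (3, "EMEA", 415.0)],
)
conn.commit()

# OLAP side: an aggregate answered directly from the transactional table,
# so the result already reflects the orders posted a moment ago.
for region, total in conn.execute(
    "SELECT region, SUM(amount) FROM sales_orders GROUP BY region"
):
    print(region, total)
```

The point is not the SQL itself, but that the analytic result already reflects the transactions posted a moment earlier, without an intermediate extraction and load into a separate analytic system.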
You may argue that SAP is going back to the model that
existed 20 years ago, and we would say you are right. But 20 years, in terms of
business, is a lot of time. And the fact is that today the limitation is on
data transmission and not on data processing, as it was 20 years ago. So, who
knows how long it will take for this balance to change again.
This will also bring some questions: how do we scale
these systems to ensure we don’t fall back into the problems that led to
separating OLAP and OLTP in the first place? How do we prevent these systems from
becoming so huge that they are unmanageable, or that making them resilient becomes
simply unaffordable?
Big Data generated by Social and IoT
The last topic we want to touch on is the growing importance of unstructured data.
Unstructured data has been processed for many
years. For example, in the manufacturing industries, machines brought
automation systems that generated files with logs of their activity. And in the
90s there were already organizations integrating this data into business
systems, for example to obtain automatic information on the quantities of goods
produced.
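As a small, hypothetical sketch of that kind of integration – the log format and field names below are invented – machine-generated log lines are parsed into structured records so that produced quantities can be aggregated and handed over to the business system.

```python
# Minimal sketch (invented log format and field names): parse shop-floor
# machine log lines into structured records and aggregate good quantities
# per material, ready to be confirmed in the business system.
import re
from collections import defaultdict

# Assumed example log lines from a shop-floor machine; the format is hypothetical.
RAW_LOG = """\
2015-06-01T08:02:11 MACHINE=PRESS_01 MATERIAL=M-1001 QTY=120 STATUS=OK
2015-06-01T08:17:45 MACHINE=PRESS_01 MATERIAL=M-1001 QTY=115 STATUS=OK
2015-06-01T08:31:02 MACHINE=PRESS_02 MATERIAL=M-2040 QTY=60 STATUS=SCRAP
"""

LINE = re.compile(
    r"(?P<ts>\S+) MACHINE=(?P<machine>\S+) MATERIAL=(?P<material>\S+) "
    r"QTY=(?P<qty>\d+) STATUS=(?P<status>\S+)"
)

# Keep only good production and total it per material.
produced = defaultdict(int)
for line in RAW_LOG.splitlines():
    match = LINE.match(line)
    if match and match.group("status") == "OK":
        produced[match.group("material")] += int(match.group("qty"))

print(dict(produced))  # e.g. {'M-1001': 235}
```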
The challenge with this data, generated by machines or by
social media, is that it usually is not structured, unlike the business data we
are used to processing in the SAP world through tables and field definitions.
In this new world of Social and IoT we may be talking about
images, audio files, videos and log files.
Being able to act upon these types of data, and integrating
the knowledge extracted from them with the structured business knowledge we
have in our transaction processing systems, may enable organizations in many
industries to achieve cost savings and develop business innovations that set
them apart from their competition.
The challenge for SAP architects is that the volumes and
variety of data involved here are massive compared with what we were used to
dealing with in the SAP NetWeaver ABAP type of reality.
Just as an example, in the NetWeaver world, ensuring
performance, availability and resiliency on a database of 50 to 100 TB was a
nightmare, which is why most organizations work hard to contain growth and
only very few in the world reach such volumes. In the unstructured data
world, we may easily reach volumes at the petabyte scale.
So this brings questions like: if it was challenging to
ensure performance, availability and resiliency for SAP application landscapes
in the NetWeaver world, how do we respond to these needs when dealing with
petabytes of unstructured data? What is the most affordable way to store
and process this type of data? Can it be in-memory? Will its volume overwhelm
transaction processing, and so impact business operations?
The challenge for SAP Architects from S/4, HANA and IoT
It is clear that new challenges will land in the hands of SAP Architects, challenges that involve variables most SAP Architects haven’t dealt with until now.
There is then a need to make sense of all of this and
understand how S/4 systems, and SAP HANA systems in general, whether or
not in IoT scenarios, need to be architected to respond to the perennial challenge
of architects: designing systems that respond to the business needs in a
sustainable and affordable way.
The constraints are the same IT architects have been asked to respond to so far:
- Cost (of implementation, of operation and of change);
- Performance / stability;
- Availability;
- Security / recoverability.
Then the question is: what solutions respond to these constraints in the reality of
S/4, SAP HANA and IoT?
Let's continue this discussion on my next blog post.