Wednesday, March 06, 2019

Data Sovereignty, Security, and Performance Panacea: Why Mastercard Sets the Standard for Global Hybrid Cloud Adoption

https://www.mastercard.us/en-us.html
Transcript of a discussion on how a major financial transactions provider is exploiting cloud models to extend a distributed real-time payment capability across the globe despite some of the strictest security and performance requirements.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into hybrid IT and cloud computing.

Our next cloud adoption best practices discussion focuses on some of the strictest security and performance requirements for a new global finance services deployment. We’ll now explore how a major financial transactions provider is exploiting cloud models to extend a distributed real-time payment capability across the globe.

Due to the needs for localized data storage, compliance with privacy regulations, and lightning-fast transaction speeds, this extreme cloud-use formula pushes the boundaries -- and possibilities -- for cloud solutions.

Stay with us now as we hear from an executive at Mastercard and a cloud deployment strategist about a new, cutting-edge use for cloud infrastructure. Please join me now in welcoming our guests, Paolo Pelizzoli, Executive Vice President and Chief Operating Officer at Realtime Payments International for Mastercard. Welcome, Paolo.


Paolo Pelizzoli: Thank you.

Gardner: We’re also here with Robert Christiansen, Vice President and Cloud Strategist at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company. Welcome, Robert.

Robert Christiansen: Thank you for having me. Good to be here.
Gardner: What is happening with cloud adoption that newly satisfies such major concerns as strict security, localized data, and top-rate performance? Robert, what’s allowing for a new leading edge when it comes to the public clouds’ use?

Christiansen: A number of new use cases have been made public. Front runners like Capital One [Financial Corp.] and some other organizations have taken core applications that would otherwise be considered sacred and moved them to cloud platforms. Those moves have become more and more evident and visible. The Capital One CIO, Robert Alexander, has been very vocal about that.

So now others have followed suit. And the US federal government regulators have been much more accepting around the audit controls. We are seeing a lot more governance and automation happening as well. A number of the business control objectives – from security to the actual technologies to the implementations -- are becoming more accepted practices today for cloud deployment.

So, by default, folks like Paolo at Mastercard are considering the new solutions that could give them a competitive edge. We are just seeing a lot more acceptance of cloud models over the last 18 months.

Gardner: Paolo, is increased adoption a matter of gaining more confidence in cloud, or are there proof points you look for that open the gates for more cloud adoption?

Compliance challenges cloud

Pelizzoli: As we see what’s happening in the world around nationalism, the on-the-soil [data sovereignty] requirements have become much more prevalent. It will continue, so we need the ability to reach those countries, deploy quickly, and allow data persistence to occur there.

The adoption side of it is a double-edged sword. I think everybody wants to get there, and everybody intuitively knows that they can get there. But there are a lot of controls around privacy, as well as SOX and SOC 1 reporting compliance, and everything else that needs to be adjusted to take the cloud into account. And if the cloud is rerouting traffic because one zone goes down and it flips to another zone, is that still within the same borders, is it still compliant, and can you prove that?

So while technologically this all can be done, from a compliance perspective there are still a lot of different boxes left to check before someone can allow payments data to flow actively across the cloud -- because that’s really the panacea.
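The zone-failover question Paolo raises -- can you prove a failover stayed inside the same borders? -- can be sketched as a policy check that leaves an audit trail. This is a hedged, minimal illustration; the zone names, country mapping, and log structure are hypothetical, not Mastercard's actual controls.

```python
# Minimal sketch: before traffic flips to another zone, verify the target
# zone keeps data on the same soil, and record the decision for auditors.
from datetime import datetime, timezone

# Illustrative mapping of cloud zones to the country they sit in.
ZONE_COUNTRY = {
    "eu-west-1a": "IE",
    "eu-west-1b": "IE",
    "us-east-1a": "US",
}

audit_log = []  # in practice this would be an immutable, signed store

def approve_failover(failed_zone: str, candidate_zone: str) -> bool:
    """Allow failover only if the candidate zone is in the same country."""
    same_border = ZONE_COUNTRY[failed_zone] == ZONE_COUNTRY[candidate_zone]
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "from": failed_zone,
        "to": candidate_zone,
        "approved": same_border,
    })
    return same_border

print(approve_failover("eu-west-1a", "eu-west-1b"))  # True: stays within IE
print(approve_failover("eu-west-1a", "us-east-1a"))  # False: crosses a border
```

The audit log is the point: it is what lets you answer "can you prove that?" after the fact.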

Gardner: We have often seen a lag between what technology is capable of and what regulations, standards, and best practices allow. Are we beginning to see a compression of that lag? Are regulators, in effect, catching up to what the technology is capable of?

Pelizzoli: The technology is still way out in the front. The regulators have a lot on their plates. We can start moving as long as we adhere to all the regulations, but the regulations between countries and within some countries will continue to have a lagging effect. That being said, you are beginning to see governments understand how sanctions occur and they want their own networks within their own borders.

Those are the types of things that require a full-fledged payments network -- one that predated the public Internet -- to gain certain new features, functions, and capabilities. We are now basically having to redo that payments-grade network.

Gardner: Robert, the technology is highly capable. We have a major player like Mastercard interested in solving their new globalization requirements using cloud. What can help close the adoption gap? Does hybrid cloud help solve the log-jam?

Christiansen: The regionalization issues are upfront, if not the number-one requirement, as Paolo has been talking about. I think about South Korea. We just had a meeting with the largest banking folks there. They are planning now for their adoption of public cloud, whether it’s Microsoft Azure, Amazon Web Services (AWS), or Google Cloud. But the laws are just now making it available.

Prior to January 1, 2019, the laws prohibited public cloud use for financial services companies, so things are changing. There is a lot of that kind of thing going on around the globe. The strategy seems to be very focused on making the compute, network, and storage localized and regionalized. And that’s going to require technology grounding in some sort of connectivity across on-premises and public clouds, while still putting the proper security in place.
So, you may see more use of things like OpenShift or Pivotal’s Cloud Foundry platform, plus some overlay that allows folks to take advantage of that, so that you can push down an appliance -- a piece of equipment -- into a specific territory.

I’m not certain as to the cost that you incur as a result of adding such an additional local layer. But from a rollout perspective, this is an upfront conversation. Most financial organizations that globalize want to be able to develop and deploy in one way while also having regional, localized on-premises services. And they want it to get done as if in a public cloud. That is happening in a multiple number of regions.

Gardner: Paolo, please tell us more about International Realtime Payments. Are you set up specifically to solve this type of regional-global deployment problem, or is there a larger mandate? What’s the reason for this organization?

Hybrid help from data center to the edge

Pelizzoli: Mastercard made an acquisition a number of years ago of Vocalink. Vocalink did real-time secure interbank funds transfer, and linkage to the automated clearing house (ACH) mechanism for the United Kingdom (UK), including the BACS and LINK extensions to facilitate payments across the banking system. Because it’s nationally critical infrastructure, and it’s bank-to-bank secure funds transfer with liquidity checks in place, we have extended the capabilities. We can go through and perform the same nationally critical functions for other governments in other countries.

Vocalink has now been integrated into Mastercard, and Realtime Payments will extend the overall reach, to include the debit/credit loyalty gift “rails” that Mastercard has been traditionally known for.

I absolutely agree that you want to develop one way and then be able to deploy to multiple locations. As hybrid cloud has arrived, with the advent of Microsoft Azure Stack and more recently AWS’s Outposts, it gives you the cloud inside of your data center with the same capabilities, the same consoles, and the same scripting and automation, et cetera.

As we see those mechanisms become richer and more robust, we will go through and be deploying that approach to any and all of our resources -- even being embedded at the edge within a point of sale (POS) device.

As we examine the different requirements from government regulations, it really comes down to managing personally identifiable information.

So, if you can secure the transaction information, by abstracting out all the other stuff and doing some interesting cryptography that only those governments know about, the [transaction] flow will still go through [the cloud] but the data will still be there, at the edge, and on the device or appliance.

We already provide for detection and other value-added services for the assurance of the banks, all the way down to the consumers, to protect them. As we start going through and seeing globalization -- but also the regionalization due to regulation – it will be interesting to uncover fraudulent activity. We already have unique insights into that.

No more noisy neighbors

Christiansen: Getting back to the hybrid strategy, AWS Outposts and Azure Stack have created the opportunity for such globalization at speed. Someone can plug in a network and power cable and get a public cloud-like experience yet it’s on an on-premises device. That opens a significant number of doors.

You eliminate multi-tenancy issues, for example, which are a huge obstacle when it comes to compliance. Multi-tenancy also brings “noisy neighbor” issues, performance issues, failovers, and the like that you otherwise have to address.

If you’re able to simply deploy a cloud appliance that is self-aware, you have a whole other trajectory toward use of the cloud technology. I am actively encouraged to see what Microsoft and Amazon can do to press that further. I just wanted to tag that onto what Paolo was talking about.

Pelizzoli: Right, and these self-contained deployments can use Kubernetes. In that way, everything that’s required to go through and run autonomously -- even the software-defined networks (SDNs) – can be deployed via containers. It actually knows where its point of persistence needs to be, for data sovereignty compliance, regardless of where it actually ends up being deployed.
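The kind of "self-aware" containerized deployment Paolo describes might look, in highly simplified form, like a service that resolves its in-country persistence endpoint from configuration injected at deploy time. The environment variable name, endpoints, and country codes below are assumptions for illustration, not a real API.

```python
# Sketch: a containerized service discovers, from injected configuration,
# which in-country persistence endpoint it must use -- regardless of where
# the container image was built or scheduled.
import os

# Illustrative table of sovereign persistence endpoints per country.
PERSISTENCE_ENDPOINTS = {
    "UK": "https://persist.uk.example.internal",
    "SG": "https://persist.sg.example.internal",
}

def persistence_endpoint() -> str:
    """Resolve the data-sovereignty-compliant persistence endpoint."""
    country = os.environ.get("DEPLOYMENT_COUNTRY")
    if country not in PERSISTENCE_ENDPOINTS:
        # Fail closed: refuse to run rather than persist data elsewhere.
        raise RuntimeError(f"no sovereign persistence endpoint for {country!r}")
    return PERSISTENCE_ENDPOINTS[country]

# Normally the orchestrator (e.g., via a downward-API-style mechanism)
# injects this value; it is set inline here only to make the sketch runnable.
os.environ["DEPLOYMENT_COUNTRY"] = "UK"
print(persistence_endpoint())
```

Failing closed is the key design choice: a misconfigured deployment stops rather than silently persisting data across a border.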

This comes back to an earlier comment about the technology being quite far ahead. It is still maturing. I don’t think it is fully mature to everybody’s liking yet. But there are some very, very encouraging steps.

As long as we go in with our eyes wide open, there are certain things that will allow us to go through and use those technologies. We still have some legacy stuff pinned to bare-metal hardware. But as things start behaving in a hybrid cloud fashion as we’re describing, and once we get all the security and guidelines set up, we can migrate off of those legacy systems at an accelerated pace.

Gardner: It seems to me that Realtime Payments International could be a bellwether use case for such global hybrid cloud adoption. What then are the checkboxes you need to sign off on in order to be able to use cloud to solve your problems?

Perpetual personal data protection

Pelizzoli: I can’t give you all the criteria, but the persistence layer needs to be highly encrypted. The transports need to be highly encrypted. Every time anything is persisted, it has to go through a regulatory set of checks, just to make sure that it’s allowed to do what it’s being asked to do. We need a lot of cleanliness in the way metrics are captured so that you can’t use a metric to get back to a person.
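A minimal sketch of the persistence gate Paolo outlines: encrypt everything, run regulatory checks on every write, and keep metrics from pointing back to a person. The policy table, the check logic, and the hex-encoding stand-in for real encryption are all illustrative assumptions.

```python
# Sketch of a pre-persist gate: every write passes regulatory checks first,
# and metrics are reduced to irreversible digests.
import hashlib
import json

ALLOWED_COUNTRIES = {"UK", "SG"}  # hypothetical policy

def regulatory_checks(record: dict, country: str) -> None:
    """Raise if this record is not allowed to persist in this country."""
    if country not in ALLOWED_COUNTRIES:
        raise PermissionError(f"persistence not permitted in {country}")
    if "card_number" in record:
        raise PermissionError("raw card data must never reach the persistence layer")

def anonymized_metric(account_id: str) -> str:
    """One-way digest so a metric cannot be traced back to a person."""
    return hashlib.sha256(account_id.encode()).hexdigest()[:16]

def persist(record: dict, country: str, store: list) -> None:
    regulatory_checks(record, country)
    # Stand-in for real encryption at rest (e.g., KMS-managed envelope keys).
    ciphertext = json.dumps(record).encode().hex()
    store.append({"country": country, "blob": ciphertext})

store = []
persist({"token": "tok_abc123", "amount": 42}, "UK", store)
print(len(store))  # one record passed the gate and was persisted
```

In a real deployment the checks would be externalized policy, as Paolo notes later, rather than code baked into the service.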

If nothing else, we have learned a lot from the recent [data intrusion] announcements by Facebook, Marriott, and others. The data is quite prevalent out there. And payments data, just like your hospital data, is the most personal.

As we start figuring out the nuances of regulation around an individual service, it must be externalized. We have to be able to literally inject solutions to regulatory requirements – and not by coding it. We can’t be creating any payments that are ambiguous.
That’s why we are starting to see a lot of effort going into how artificial intelligence (AI) can help. AI could check services and configurations to test for every possibility so that there isn’t a “hole” that somebody can go through with a certain amount of credentials.

As we go forward, those are the types of things that -- when we are in a public cloud -- we need to account for. When we were all internal, we had a lot of perimeter defenses. The new perimeter becomes more nebulous in a public cloud. You can create virtual private clouds, but you need to be very wary of expanding time factors, or latency.

Gardner: If you can check off these security and performance requirements, and you are able to start exploiting the hybrid cloud continuum across different localities, what do you get? What are the business outcomes you’re seeking?

Common cloud consistency

Pelizzoli: A couple of things. One is agility, in terms of being able to deploy to two adjacent countries, if one country has a major outage. That means ease of access to a payments-grade network -- without having to go through and put in hardware, which will invariably fail.

Also, the ability to scale quickly. There is an expected peak season for payments, such as around the Christmas holidays. But there could be an unexpected peak season based on bad news -- not a peak season, but a peak day. How do you go through and have your systems scale within one country that wasn’t normally producing a lot of transactions? All of a sudden, now it’s producing 18 times the amount of transactions.

Those types of things give us a different development paradigm. We have a lot of developers. A [common cloud approach] would give us consistency, and the ability to be clean in how we automate deployment; the testing side of it, the security checks, etc.


Before, there were a lot of different ways of doing development, depending on the language and the target. Bringing that together would allow increased velocity and reduced cost, in most cases. And what I mean by “most cases” is I can use only what I need and scale as I require. I don’t have to build for the worst possible day and then potentially never hit it. So, I could use my capacity more efficiently.

Gardner: Robert, it sounds like major financial applications, like a global real-time payment solution, are getting from the cloud what startups and cloud-native organizations have taken for granted. We’re now able to take the benefits of cloud to some of the most extreme and complex use cases.

Cloud-driven global agility

Christiansen: That’s a really good observation, Dana. A healthcare organization could use the same technologies to leverage an industrial-strength transaction platform that allows them to deliver healthcare solutions globally. And they could deem it as a future-proof infrastructure solution.

One of the big advantages of the public cloud has been the isolation of all those things that many central IT teams have had to do day-in and day-out: patching releases, upgrading processes, and constantly managing the refresh cycle. They call it painting the Golden Gate Bridge -- once you finish painting the bridge, you have to go back and do it all over again. A lot of effort and money goes into that refresh process.

And so they are asking themselves, “Hey, how can we take our $3 or $4 billion IT spend, and take x amount of that and begin applying it toward innovation?”

And if someone can take a piece out of that equation, all things are eligible. Everyone is asking the same question, “How do I compete globally in a way that allows me to build the agility transformation into my organization?” Right now there is so much rigidity, but the balance against what Paolo was talking about -- the industrial-grade network and transaction framework needed to get this stuff done -- cannot be relinquished.

So people are asking a lot of the same questions. They come in and ask us at CTP, “Hey, what use-cases are actually in place today where I can start leveraging portions of the public cloud so I can start knocking off pieces?”

Paolo, how do you use your existing infrastructure, and what portion of cloud enablement can you bring to the table? Is it cloud-first, where you say, “Hey, everything is up for grabs?” Or are you more isolated into using cloud only in a certain segment?

Follow a paved path of patterns

Pelizzoli: Obviously, the endgame is to be in the cloud 100 percent. That’s utopian. How do we get there? There is analysis being done. It depends if we are talking about real-time payments, which is actually more prepared to go into the cloud than some of the core processing that handles most of North America and Europe from an individual credit card or debit card swipe. Some of those core pieces need more rewiring to take advantage of the cloud.

When we look at it, we are decomposing all of the legacy systems and seeing how well they fit in to what we call a paved path of patterns. If there is a paved path for a specific type of pattern, we put it on the list of things to transition to, as being built as a cloud-native service. And then we run it alongside its parent for a while, to test it, through stressful periods and through forced chaos. If the segment goes down, where does it flip over to? And what is the recovery time?

The one thing we cannot do is in any way increase latency. In fact, we have some very aggressive targets to reduce latency wherever we can. We also want to improve the recovery and security of the individual components, which we end up calling value-added services.

There are some basic services we have to provide, and then value-added services, which people can opt in or opt out of. We do have a plan and strategy to go through and prioritize that list.

Gardner: Paolo, as you master hybrid cloud, you must have visibility and monitoring across these different models. It’s a new kind of monitoring, a new kind of management.

What do you look to from CTP and HPE to help attain new levels of insight so you can measure what’s going on, and therefore optimize and automate?

Pelizzoli: CTP has been a very good and integral part of our first steps into the cloud.

Now, I will give you one disclaimer. We have some companies that are Mastercard companies that are already in the cloud, and were born in the cloud. So we have experience with AWS, we have experience with Azure, and we have some experience with Google Cloud Platform.

It’s not that Mastercard isn’t in the cloud already, it is. But when you start taking the entire plant and moving it, we want to make sure that the security controls, which CTP has been helping ratify, get extended into the cloud -- and where appropriate, actually removed, because there are better ones in the cloud today.

Extend the cloud management office

Now, the next phase is to start building out a cloud management office. Our cloud management office was created early last year. It is now getting the appropriate checks and audits from finance, the application teams, the architecture team, security teams, and so on.

As that list of prioritized applications comes through, they have the appropriate paved path, checks, and balances. If there are any exceptions, it gets fiercely debated and will either get a pass or it will not. But even if it does not, it can still sit within our on-premises version of the cloud; it’s just more protected.

As we route all the traffic, there are going to be a lot of checks within the different network hops it has to take, to prevent certain information from getting outside when it’s not appropriate.

Gardner: And is there something of a wish list that you might have for how to better fulfill the mandate of that cloud management office?

Pelizzoli: We have CTP, which HPE purchased along with RedPixie. They cover, between those two acquisitions, all of the public cloud providers.

Now, the cloud providers themselves are selling you the next feature-function to move themselves ahead of their competitors. CTP and RedPixie are taking the common denominator across all of them to make sure that you are not carrying promises over from one cloud provider into another, and not assuming that everybody is moving at the same pace.

They also provide implementation capabilities, migration capabilities, and testing capabilities through the larger HPE organization. The fact is we have strong relationships with Microsoft and with Amazon, and so does HPE. If we can bring the collective muscle of Mastercard, HPE, and the cloud providers together, we can move mountains.

Gardner: We hear folks like Paolo describe their vision of what’s possible when you can use the cloud providers in an orchestrated, concerted, and value-added approach.

Other people in the market may not understand what is going on across multi-cloud management requirements. What would you want them to know, Robert?

O brave new hybrid world

Christiansen: A hybrid world is the true reality. The complexity of the enterprise, no matter what industry you are in, has caused these application centers of gravity. Latency issues between applications -- whether they can be moved to cloud or not, or are impacted by where the data resides -- have created huge gravity issues, so enterprises are unable to take advantage of the frameworks that the public clouds provide.

So, the reality is that the public cloud is going to have to come down into the four walls of the enterprise. As a result of that, we are seeing an explosion of common abstraction -- there is going to be some open-source framework for all clouds to communicate, talk, and behave alike.

Over the past decade, the on-premises and OpenStack world has been decommissioning the whole legacy technology stack, moving it off to the side as a priority, as enterprises seek to adopt cloud. The reality now is that regional, government, and data privacy issues -- all sorts of things -- are pulling it all back internally again.

Out of all this chaos is going to rise the phoenix of some sort of common framework. There has to be. There is no other way out of this. We are already seeing organizations such as Paolo’s at Mastercard develop a mandate to take the agile step forward.

They want somebody to provide the ability to gain more business value versus the technology, to manage and keep track of infrastructure, and to future-proof that platform. But at the same time, they want a technology position where they can use common frameworks, common languages, things that give interoperability across multiple platforms. That’s where you are seeing a huge amount of investment.

I don’t know if you recently saw that HashiCorp got $100 million in additional funding, and they have a valuation of almost $2 billion. This is a company that specializes in sitting in that space. And we are going to see more of that.
And as folks like Mastercard drive the requirements, the all-in on one public cloud mentality is going to quickly evaporate. These platforms absolutely have to learn how to play together and get along with on-premises, as well as between themselves.

Gardner: Paolo, any last thoughts about how we get cloud providers to be team players rather than walking around with sharp elbows?

Tech that plays well with others

Pelizzoli: I think a lot of it is actually going to be taken care of by the technology that’s being allowed to run on these cloud platforms.

I mentioned Kubernetes and Docker earlier, and there are others out there. The fact that they can isolate themselves from the cloud provider itself is where it will neutralize some of the sharp elbowing that goes on.

Now, there are going to be features that keep coming up, and I think companies like ours will take a look and start putting workloads where the latest cutting-edge feature gives us a competitive advantage, and then wait for other cloud providers to catch up. When they do, we can then deploy out on those as well. But those will be very conscious decisions.

I don’t think that one cloud fits all, but where appropriate we will be absolutely multi-cloud. Where there is a defining difference, we will select the cloud provider that best suits that area to cover that specific capability.

Gardner: It sounds like these extreme use cases, and the very important requirements that organizations like Mastercard have, will compel this marketplace to continue to flourish rather than become one-size-fits-all. It’s an interesting time, as the maturation of applications and use cases actually starts to create more of a democratization of cloud in the marketplace.

I’m afraid we will have to leave it there. We’ve been exploring how a major financial transactions provider is exploiting cloud models to extend and distribute real-time payments capacity across the globe. And we have learned how the needs for localized data storage, compliance with privacy regulations, and lightning-fast transaction speeds are pushing the boundaries of what cloud solutions can do.


So please join me in thanking our guests, Paolo Pelizzoli, Executive Vice President and Chief Operating Officer at Realtime Payments International for Mastercard. Thank you so much, Paolo.

Pelizzoli: Thank you very much. I really appreciate it.

Gardner: And we have also been joined by Robert Christiansen, Vice President and Cloud Strategist at Cloud Technology Partners, a Hewlett Packard Enterprise Company. Thank you, Robert.

Christiansen: Thank you so much. I appreciate it.

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect Voice of the Customer hybrid IT and cloud computing strategies interview.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions. Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how a major financial transactions provider is exploiting cloud models to extend a distributed real-time payment capability across the globe despite some of the strictest security and performance requirements. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.



Tuesday, March 05, 2019

Where the Rubber Meets the Road: How Users See the IT4IT Standard Building Competitive Business Advantage

 
Transcript of a panel discussion on how the IT4IT Reference Architecture for IT management works in many ways for many types of organizations and the demonstrated business benefits that are being realized as a result.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next IT operations strategy panel discussion explores how the IT4IT™ Reference Architecture for IT management creates demonstrated business benefits -- in many ways, across many types of organizations.

Since its delivery in 2015 by The Open Group, IT4IT has focused on defining, sourcing, consuming, and managing services across the IT function’s value stream to its stakeholders. Among its earliest and most ardent users are IT vendors, startups, and global professional services providers.

To learn more about how this variety of highly efficient businesses and their IT organizations make the most of IT4IT -- often as a complementary mix of frameworks and methodologies -- we are now joined by our panel:
Welcome to you all. Big trends are buffeting business in 2019. Companies of all kinds need to attain digital transformation faster, make their businesses more intelligent and responsive to their markets, and improve end user experiences. So, software development, applications lifecycles, and optimizing how IT departments operate are more important than ever. And they need to operate as a coordinated team, not in silos.

Lars, why is the IT4IT standard so powerful given these requirements that most businesses face? 

One framework to rule them all

Rossen: There are a number of reasons, but the starting point is the fact that it’s truly end-to-end. IT4IT starts from the planning stage -- how to convert your strategy into actionable projects that are being measured in the right manner -- all the way to development, delivery of the service, how to consume it, and at the end of the day, to run it.

There are many other frameworks. They are often very process-oriented, or capability-oriented. But IT4IT gives you a framework that underpins it all. Every IT organization needs to have such a framework in place and be rationalized and well-integrated. And IT4IT can deliver that.

Gardner: And IT4IT is designed to help IT organizations elevate themselves in terms of the impact they have on the overall business.

Mark, when you encounter someone who says IT4IT, “What is that?” What’s your elevator pitch, how do you describe it so that a lay audience can understand it?

Bodman: I pitch it as a framework for managing IT and leave it at that. I might also say it’s an operating model because that’s something a chief information officer (CIO) or a business person might know.

If it’s an individual contributor in one of the value streams, I say it’s a broader framework than what you are doing. For example, if they are a DevOps guy, or maybe a Scaled Agile Framework (SAFe) guy, or even a test engineer, I explain that it’s a more comprehensive framework. It goes back to the nature of IT4IT being a hub of many different frameworks -- and all designed as one architecture.

Gardner: Is there an analog to other business, or even cultural, occurrences that IT4IT is to an enterprise?

Rossen: The analogy I have is that you go to The Lord of the Rings, and IT4IT is the “one ring to rule them all.” It actually combines everything you need.

Gardner: Why do companies need this now? What are the problems they’re facing that requires one framework to rule them all?

Everyone, everything on the same page

Esler: A lot of our clients have implemented a lot of different kinds of software -- automation software, orchestration software, and portals. They are sharing more information, more data. But they haven’t changed their operating model.

Using IT4IT is a good way to see where your gaps are, what you are doing well, what you are not doing so well, and how to improve on that. It gives you a really good foundation for knowing the business of IT.

Bennett: What we are hearing in the field is that IT departments are generally drowning at this point. You have a myriad of factors, some of which are their fault and some of which aren’t. The compliance world is getting nightmare-strict. The privacy laws that are coming in are straining what are already resource-constrained organizations. At the same time, budgets are being cut.

The other side of it is the users are demanding more from IT, as a strategic element as opposed to simply a support organization. As a result, they are drowning on a daily basis. Their operating model is -- they are still running on wooden wheels. They have not changed any of their foundational elements.

If your family has a spending problem, you don’t stop spending, you go on a budget. You put in an Excel spreadsheet, get all the data into one place, pull it together, and you figure out what’s going on. Then you can execute change. That’s what we do from an IT perspective. It’s simply getting everything in the same place, on the same page, and talking the same language. Then we can start executing change to survive.

Gardner: Because IT in the past could operate in silos, there would be specialization. Now we need a team-sport approach. Mark, how does IT4IT help with that?

Bodman: An analogy is the medical profession. You have specialists, and you have generalist doctors. You go to the generalist when you don’t really know where the problem is. Then you go to a specialist with a very specific skill-set and the tools to go deep. IT4IT is aimed at that generalist layer, with pointers to the specialists.

Gardner: IT4IT has been available since October 2015, which is a few years in the market. We are now seeing different types of adoption patterns—from small- to medium-size businesses (SMBs) and up to enterprises. What are some “rubber meets the road” points, where the value is compelling and understood, that then drive this deeper into the organization?

Where do you see IT4IT as an accelerant to larger business-level improvements?

Success via stability

Vijaykumar
Vijaykumar: When we look at the industry in general there are a lot of disruptive innovations, such as cloud computing taking hold. You have other trends like big data, too. These are driving a paradigm shift in the way IT is perceived. So, IT is not only a supporting function to the business anymore -- it’s a business enabler and a competitive driver.

Now you need stability from IT, and IT needs to function with the same level of rigor as a bank or manufacturer. If you look at those businesses, they have reference architectures that span several decades. That stability was missing in IT, and that is where IT4IT fills a gap -- we have come up with a reference architecture.

What does that mean? When you implement new tooling solutions or you come up with new enterprise applications, you don’t need to rip apart and replace everything. You could still use the same underlying architecture. You retain most of the things -- even when you advance to a different solution. That is where a lot of value gets created.

Esler: One thing you have to remember, too, is that this is not just about new stuff. It’s not just about artificial intelligence (AI), Internet of Things (IoT), big data, and all of that kind of stuff -- the new, shiny stuff. There is still a lot of old stuff out there that has to be managed in the same way. You have to have a framework like IT4IT that allows you to have a hybrid environment to manage it all.

https://publications.opengroup.org/it4it
Gardner: The framework to rule all frameworks.

Rossen: That also goes back to the concept of multi-modal IT. Some people say, “Okay, I have new tools for the new way of doing stuff, and I keep my old tools for the old stuff.”

But, in the real world, these things need to work together. The services depend on each other. If you have a new smart banking application, and you still have a COBOL mainframe application that it needs to communicate with, if you don’t have a single way of managing these two worlds you cannot keep up with the necessary speed, stability, and security.

Gardner: One of the things that impresses me about IT4IT is that any kind of organization can find value and use it from the get-go. Jerrod, for a start-up or an SMB, where are you seeing the value that IT4IT brings?

Solutions for any size business

Bennett
Bennett: SMBs have less pain, but proportionally it’s the same exact problem. Larger enterprises have enormous pain, the midsize guys have medium pain, but it’s the same mess.

But the SMBs have an opportunity to get a lot more value because they can implement a lot more of this a lot faster. They can even rip up the foundation and start over, a greenfield approach. Most large organizations simply do not have that capability.

And the same kinds of change are coming for everyone -- in big data, for example, how much data is going to be created in the next five years versus the last five? That’s universal; everyone is dealing with these problems.

Gardner: At the other end of the scale, Mark, big multinational corporations with sprawling IT departments and thousands of developers -- they need to rationalize, they need to limit the number of tools, find a fit-for-purpose approach. How does IT4IT help them?

Bodman: It helps to understand which areas to rationalize first, that’s important because you are not going to do everything at once. You are going to focus on your biggest pain points.

The other element is the legacy element. You can’t change everything at once. There are going to be bigger rocks, and then smaller rocks. Then there are areas where you will see folks innovate, especially when it comes to the DevOps, new languages, and new platforms that you deploy new capabilities on.

What IT4IT allows is for you to increasingly interchange those parts. A big value proposition of IT4IT is standardizing those components and the interfaces. Afterward, you can change out one component without disrupting the entire value chain.
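
The idea of standardized components and interfaces can be sketched in code. The following is a hypothetical illustration -- the component and backend names are invented, not part of the IT4IT standard: when each tool in the value chain sits behind an agreed interface, one backend can be swapped for another without disrupting the rest of the chain.

```python
from abc import ABC, abstractmethod

class IncidentBackend(ABC):
    """A standardized interface any incident-management tool must implement."""
    @abstractmethod
    def open_incident(self, summary: str) -> str:
        ...

class LegacyDeskBackend(IncidentBackend):
    """Hypothetical on-premises service desk."""
    _counter = 0
    def open_incident(self, summary: str) -> str:
        LegacyDeskBackend._counter += 1
        return f"LEGACY-{LegacyDeskBackend._counter:04d}"

class CloudDeskBackend(IncidentBackend):
    """Hypothetical SaaS service desk that can replace the legacy one."""
    _counter = 0
    def open_incident(self, summary: str) -> str:
        CloudDeskBackend._counter += 1
        return f"CLOUD-{CloudDeskBackend._counter:04d}"

def detect_to_correct(backend: IncidentBackend, alert: str) -> str:
    # The rest of the value chain depends only on the interface,
    # so either backend can be plugged in without disrupting it.
    return backend.open_incident(f"Auto-raised from alert: {alert}")
```

The calling code -- `detect_to_correct(CloudDeskBackend(), "disk full")` versus the legacy backend -- never changes, which is the point Bodman is making about interchangeable parts.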

Gardner: Rob, complexity is inherent in IT. They have a lot on their plate. How does the IT4IT Reference Architecture help them manage complexity?

Reference architecture connects everything

Akershoek
Akershoek: You are right, there is growing complexity. We have more services to manage, more changes and releases, and more IT data. That’s why it’s essential in any sized IT organization to structure and standardize how you manage IT in a broader perspective. It’s like creating a bigger picture.

Most organizations have multiple teams working on different tools and components in a whole value chain. I may have specialized people for security, monitoring, the service desk, development, for risk and compliance, and for portfolio management. They tend to optimize their own silo with their own practices. That’s what IT4IT can help you with -- creating a bigger picture. Everything should be connected.

Esler: I have used IT4IT to help get rid of those very same kinds of silos. I did it via a workshop format. I took the reference architecture from IT4IT and I got a certain number of people -- and I was very specific about the people I wanted -- in the room. In doing this kind of thing, you have to have the right people in the room.

We had people for service management, security, infrastructure, and networking -- just a whole broad range across IT. We placed them around the table, and I took them through the IT4IT Reference Architecture. As I described each of the words -- each one representing a function -- they began to talk among themselves, saying, “Yes, I had a piece of that. I had this piece of this other thing. You have a piece of that, and this piece of this.”

It started them thinking about the larger functions, that there are groups performing not just the individual pieces, like service management or infrastructure.

Gardner: IT4IT then is not muscling out other aspects of IT, such as Information Technology Infrastructure Library (ITIL), The Open Group Architecture Framework (TOGAF), and SAFe. Is there a harmonizing opportunity here? How does IT4IT fit into a larger context among these other powerful tools, approaches, and methodologies?

Rossen: That’s an excellent question, especially given that a lot of people into SAFe might say they don’t need IT4IT, that SAFe is solving their whole problem. But once you get to discuss it, you see that SAFe doesn’t give you any recommendation about how tools need to be connected to create the automated pipeline that SAFe relies on. So IT4IT actually complements SAFe very well. And that’s the same story again and again with the others.

The IT4IT framework can help bring those two things -- ITIL and SAFe -- together without changing the IT organizations using them. ITIL can still be relevant for the helpdesk, et cetera, and SAFe can still function -- and they can collaborate better.

Gardner: Varun, another important aspect to maturity and capability for IT organizations is to become more DevOps-oriented. How does DevOps benefit from IT4IT? What’s the relationship?

Go with the data flow

Vijaykumar: When we talk about DevOps, typically organizations focus on the entire service design lifecycle and how it moves into transition. But the relationship sometimes gets lost between how a service gets conceptualized and how it is translated into a design. We need to use IT4IT to establish traceability, to make sure that all the artifacts and all the information flow through the pipeline and across the IT value chain.

The way we position the IT4IT framework to organizations and customers is very important. A lot of times people ask me, “Is this going to replace ITIL?” Or, “How is it different from DevOps?”


The simplest way to answer those questions is to tell them that this is not something that provides narrative guidance. It’s not a process framework, but rather an information framework. We are essentially prescribing the way data needs to flow across the entire IT value chain, and how information needs to get exchanged.

It defines how those integrations are established. And that is vital to having an effective DevOps framework because you are essentially relying on traceability to ensure that people receive the right information to accept services, and then support those services once they are designed.
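
A minimal sketch can make the “information framework, not process framework” distinction concrete. The record types and field names below are hypothetical, invented for illustration -- they are not the actual IT4IT data model: each artifact simply carries a link to its upstream artifact, so traceability can be walked from a running service all the way back to the concept it came from.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Artifact:
    kind: str                              # e.g. "concept", "design", "release", "service"
    name: str
    upstream: Optional["Artifact"] = None  # link back along the value chain

def trace(artifact: Artifact) -> list[str]:
    """Walk the upstream links: running service -> release -> design -> concept."""
    chain = []
    node: Optional[Artifact] = artifact
    while node is not None:
        chain.append(f"{node.kind}:{node.name}")
        node = node.upstream
    return chain

# A toy chain from conceptualization through to an operated service.
concept = Artifact("concept", "mobile-payments")
design  = Artifact("design",  "mobile-payments-v1", upstream=concept)
release = Artifact("release", "1.0.0", upstream=design)
service = Artifact("service", "mobile-payments-prod", upstream=release)

print(" -> ".join(trace(service)))
```

The value is not in any single record but in the links: when every tool preserves them, the pipeline Vijaykumar describes stays traceable end-to-end.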

Gardner: Let’s think about successful adoption, of where IT4IT is compelling to the overall business. Jerrod, among your customers where does IT4IT help them?

Holistic strategy benefits business

Bennett: I will give an example. I hate the word, but “synergy” is all over this. Breaking down silos and having all this stuff in one place -- or at least in one process, one information framework -- helps the larger processes get better.

The classic example is Agile development. Development runs in a silo, they sit in a black box generally, in another building somewhere. Their entire methodology of getting more efficient is simply to work faster.

So, they implement sprints, or Agile, or scrum, or you name it. And what you recognize is they didn’t have a resource problem, they had a throughput problem. The throughput problem can be slightly solved using some of these methodologies, by squeezing a little bit more out of their cycles.

Credit: The Open Group

But what you find, really, is they are developing the wrong thing. They don’t have a strategic element to their businesses. They simply develop whatever the heck they decide is important. Only now they develop it really efficiently. But the output on the other side is still not very beneficial to the business.

If you input a little bit of strategy in front of that and get the business to decide what it is that they want you to develop – then all of a sudden your throughput goes through the roof. And that’s because you have broken down barriers and brought together the [major business elements], and it didn’t take a lot. A little bit of demand management with an approval process can make development 50 percent more efficient -- if you can simply get them working on what’s important.

It’s not enough to continue to stab at these small problems while no one has yet said, “Okay, timeout. There is a lot more to this information that we need.” You can take inspiration from the manufacturing crisis in the 1980s. Making an automobile engine conveyor line faster isn’t going to help if you are building the wrong engines or you can’t get the parts in. You have to view it holistically. Once you view it holistically, you can go back and make the assembly lines work faster. Do that and sky is the limit.

Gardner: So IT4IT helps foster “simultaneous IT operations,” a nice and modern follow-on to the simultaneous engineering innovations of the past.

Mark, you use IT4IT internally at ServiceNow. How does IT4IT help ServiceNow be a better IT services company?

IT to create and consume products

Bodman: A lot of the activities at ServiceNow are for creating the IT Service Management (ITSM) products that we sell on the market, but we also consume them. As a product manager, a lot of my job is interfacing with other product managers, dealing with integration points, and having data discussions.

As we make the product better, we automatically make our IT organization better because we are consuming it. Our customer is our IT shop, and we deploy our products to manage our products. It’s a very nice, natural, and recursive relationship. As the company gets better at product management, we can get more products out there. And that’s the goal for many IT shops. You are not creating IT for IT’s sake, you are creating IT to provide products to your customers.

Gardner: Rob, at Fruition Partners, a DXC Technology company, you have many clients that use IT4IT. Do you have a use case that demonstrates how powerful it can be?

Akershoek: Yes, I have a good example of an insurance organization where they have been forced to reduce significantly the cost to develop and maintain IT services.

Initially, they said, “Oh, we are going to automate and monitor DevOps.” When I showed them IT4IT they said, “Well, we are already doing that.” And I said, “Why don’t you have the results yet?” And they said, “Well, we are working on it, come back in three months.”

But after that period of time, they still were not succeeding with speed. We said, “Use IT4IT, take it to specific application teams, and then move to cloud, in this case, Azure Cloud. Show that you can do it end-to-end from strategy into an operation, end-to-end in three months’ time and demonstrate that it works.”

And that’s what has been done, it saved time and created transparency. With that outcome they realized, “Oh, we would have never been able to achieve that if we had continued the way we did it in the past.”

Gardner: John, at HPE Pointnext, you are involved with digital transformation, the highest order of strategic endeavors and among the most important for companies nowadays. When you are trying to transform an organization – to become more digital, data-driven, intelligent, and responsive -- how does IT4IT help?

Esler: When companies do big, strategic things to try and become a digital enterprise, they implement a lot of tools to help. That includes automation and orchestration tools to make things go faster and get more services out.

But they forget about the operating model underneath it all and they don’t see the value. A big drug company I worked with was expecting a 30 percent cost reduction after implementing such tools, and they didn’t get it. And they were scratching their heads, asking, “Why?”

We went in and used IT4IT as a foundation to help them understand where they needed change. Together with some tools that HPE has, that helped them to understand -- across different domains, depending on the level of service they want to provide to their customers -- what they needed to change. They were able to learn what that kind of organization looks like when it’s all said and done.

Gardner: Lars, Micro Focus has 4,000 to 5,000 developers and needs to put software out in a timely fashion. How has IT4IT helped you internally to become a better development organization?

Streamlining increases productivity

Rossen: We used what is by now a standard technique in IT4IT: rationalization. Over a year, we managed to consolidate our tooling into a single tool chain that 80 percent of the developers are on.

With that we are now much more agile in delivering products to market. Instead of taking a year, we can easily do the same every three months -- plus hot fixes and focused changes; we probably have 20 releases a day. On top of that, we can share a lot more across components. We can align much more to a common strategy for how all our products are developed and delivered to our customers. It’s been a massive change.

Gardner: Before we close out, I’d like to think about the future. We have established that IT4IT has backward compatibility, that if you are a legacy-oriented IT department, the reference architecture for IT management can be very powerful for alignment to newer services development and use.

But there are so many new things coming on, such as AIOps, AI, machine learning (ML), and data-driven and analytics-driven business applications. We are also finding increased hybrid cloud and multi-cloud complexity across deployment models. And better managing total costs to best operate across such a hybrid IT environment is also very important.

So, let’s take a pause and say, “Okay, how does IT4IT operate as a powerful influence two to three years from now?” Is IT4IT something that provides future-proofing benefits?

The future belongs to IT4IT

Bennett: Nothing is future-proof, but I would argue that we really needed IT4IT 20 years ago -- and we didn’t have it. And we are now in a pretty big mess.

There is nothing magical here. It’s been well thought-out and well-written, but there is nothing new in there. IT4IT is how it ought to have been for a while; it just took a group of people to get together, sit down, and architect it out, end-to-end.

Theoretically it could have been done in the 1980s and it would still be relevant, because they were doing the same thing. There isn’t anything fundamentally new in IT; there are lots of new-fangled toys, but that’s all just minutiae. The foundation hasn’t changed. I would argue that in 2040 IT4IT will still be relevant.

Gardner: Varun, do you feel that organizations that adopt IT4IT are in a better position to grow, adapt, and implement newer technologies and approaches?

Vijaykumar: Yes, definitely, because IT4IT – although it caters to the traditional IT operating models -- also introduces a lot of new concepts that were not in existence earlier. You should look at some of the concepts like service brokering, catalog aggregation, and bringing in the role of a service integrator. All of these are things that may have been in existence, but there was no real structure around them.

IT4IT provides a consolidated framework for us to embrace all of these capabilities and to drive improvements in the industry. Coupled with advances in computing -- where everything gets delivered on the fly – and where end users and consumers expect a lot more out of IT, I think IT4IT helps in that direction as well.

Gardner: Lars, looking to the future, how do you think IT4IT will be appreciated by a highly data-driven organization?

Rossen: Well, IT4IT was a data architecture to begin with. So, in that sense it was the first time that IT itself got a data architecture that was generic. Hopefully that gives it a long future.

I also like to think about it as being like roads we are building. We now have the roads to do whatever we want. Eventually you stop caring about it, it’s just there. I hope that 20 years from now nobody will be discussing this, they will just be doing it.

The data model advantage

Gardner: Another important aspect to running a well-greased IT organization -- despite the complexity and growing responsibility -- is to be better organized and to better understand yourself. That means having better data models about IT. Do you think that IT4IT-oriented shops have an advantage when it comes to better data models about IT?

Bodman: Yes, absolutely. One of the things we just produced within the [IT4IT reference architecture data model] is a reporting capability for key performance indicators (KPI) guidance. We are now able to show what kinds of KPIs you can get from the data model -- and be very prescriptive about it.

In the past there had been different camps and different ways of measuring and doing things. Of course, it’s hard to benchmark yourself comprehensively that way, so it’s really important to have consistency there in a way that allows you to really improve.

The second part -- and this is something new in IT4IT that is fundamental -- is the “request to fulfill (R2F)” value stream. It’s now possible to have a top-line, self-service way to engage with IT through a catalog that is easy to consume and focused on a specific experience. That’s an element that has been missing. It may have been out there in pockets, but now it’s baked in. It’s just fabric, taught in schools, and you basically just implement it.
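
As a small illustration of KPI guidance built on a shared data model -- the record shape and the metric below are hypothetical examples, not the published IT4IT KPI definitions: once every team records changes in one agreed form, a benchmarkable measure such as change success rate reduces to a simple aggregation.

```python
# Change records in one agreed, consistent shape (hypothetical fields).
changes = [
    {"id": "CHG-001", "team": "payments", "outcome": "success"},
    {"id": "CHG-002", "team": "payments", "outcome": "failed"},
    {"id": "CHG-003", "team": "web",      "outcome": "success"},
    {"id": "CHG-004", "team": "web",      "outcome": "success"},
]

def change_success_rate(records: list[dict]) -> float:
    """Share of changes with a successful outcome, as a percentage."""
    if not records:
        return 0.0
    ok = sum(1 for r in records if r["outcome"] == "success")
    return 100.0 * ok / len(records)

print(f"{change_success_rate(changes):.0f}%")
```

Because every team reports against the same model, the same number means the same thing everywhere -- which is what makes benchmarking and improvement possible.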

Rossen: The new R2F capability allows an IT organization to transform, from being a cost center that does what people ask, to becoming a service provider and eventually a service broker, which is where you really want to be.

Esler: I started in this industry in the mainframe days. The concept of shared services was prevalent, so time-sharing, right? It’s the same thing. It hasn’t really changed. It’s evolved and going through different changes, but the advent of the PC in the 1980s didn’t change the model that much.

Now with hyperconvergence, it’s moving back to that mainframe-like thing where you define a machine by software. You can define a data center by software.

Gardner: For those listening and reading who are intrigued by IT4IT and would like to learn more, where can they go to find out more about where the rubber meets the IT road?

Akershoek: The best way is going to The Open Group website. There’s a lot of information on the reference architecture itself, case studies, and video materials.

To get started, you can typically begin very small. Look at the materials, try to understand how you currently operate your IT organization, and map it to the reference architecture.

That provides an immediate sense of what you may be missing, where you are duplicating effort, or where too much is going on without governance. You can begin to create a picture of your IT organization. That’s the first step to create -- or co-create with your own organization -- a bigger picture and decide where you want to go next.

Gardner: I’m afraid we will have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on how the IT4IT™ Reference Architecture for IT management creates demonstrated business benefits -- in many ways across many types of organizations. And we’ve learned a variety of ways that IT4IT defines, sources, and manages services across the IT function’s value stream to its stakeholders.

So please join me in thanking our panelists:
  • Lars Rossen, Fellow at Micro Focus, in Copenhagen;
  • Mark Bodman, Senior Product Manager at ServiceNow, in Austin;
  • John Esler, Client Principal at Hewlett Packard Enterprise Pointnext, in Denver;
  • Rob Akershoek, IT Architect at Fruition Partners, a DXC Technology Company, in Amsterdam;
  • Varun Vijaykumar, Associate General Manager and ITSM Architect at HCL Technologies, in Raleigh-Durham, and
  • Jerrod Bennett, CEO and Co-Founder at Dreamtsoft, in San Diego.
And a big thank you as well to our audience for joining this BriefingsDirect modern digital business innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout the series of BriefingsDirect discussions sponsored by The Open Group.

Thanks again for listening. Please pass this on to your IT community and do come back next time.


Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Transcript of a panel discussion on how the IT4IT Reference Architecture for IT management works in many ways for many types of organizations and the demonstrated business benefits that are being realized as a result. Copyright Interarbor Solutions, LLC and The Open Group, 2005-2019. All rights reserved.
