Monday, December 02, 2019

How the ArchiMate Modeling Standard Helps Enterprise Architects Deliver Greater Business Agility and Successful Digital Transformation

Transcript of a discussion on how companies and governments can better produce rapid innovation and manage complexity across their IT and business operations.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next business trends discussion explores how the latest update to the ArchiMate® standard helps Enterprise Architects (EAs) make complex organizations more agile and productive.

Joining me is Marc Lankhorst, Managing Consultant and Chief Technology Evangelist at BiZZdesign in The Netherlands, and he also leads the development team within the ArchiMate Forum at The Open Group. Welcome, Marc.

Marc Lankhorst: Thank you.

Gardner: There are many big changes happening within IT, business, and the confluence of both. We are talking about Agile processes, lean development, DevOps, the ways that organizations are addressing rapidly changing business environments and requirements.


Companies today want to transform digitally to improve their business outcomes. How does Enterprise Architecture (EA) as a practice and specifically the ArchiMate standard support being more agile and lean?

Lankhorst: The key role of enterprise architecture in that context is to control and reduce complexity, because complexity is the enemy of change. If everything is connected to everything else, it’s too difficult to make any changes, because of all of the moving parts.

And one of the key tools is to have models of your architecture to create insights into how things are connected so you know what happens if you change something. You can design where you want to go by making something that is easier to change from your current state.

It’s a misunderstanding that if you have Agile development processes like Scrum or SAFe then eventually your company will also become an agile organization. It’s not enough. It’s important, but if you have an agile process and you are still pouring concrete, the end result will still be inflexibility.

Stay flexible, move with the times

So the key role of architecture is to ensure that you have flexibility in the short-term and in the long-term. Models are a great help in that. And that’s of course where the ArchiMate standard comes in. It lets you create models in standardized ways, where everybody understands them in the same way. It lets you analyze your architecture across many aspects, including identifying complexity bottlenecks, cost issues, and risks from outdated technology -- or any other kind of analysis you want to make.

Enterprise architecture is the key discipline in this new world of digital transformation and business agility. Although the discipline has to change to move with the times, it’s still very important to make sure that your organization is adaptive, can change with the times, and doesn’t get stuck in an overly complex, legacy world.
Gardner: Of course, Enterprise Architecture is always learning and improving, and so the ArchiMate standard is advancing, too. So please summarize for me the improvements in the new release of ArchiMate, version 3.1.

Lankhorst: The most obvious new addition to the standard is the concept of a value stream. That's inspired by business architecture, and those of you who follow things like TOGAF®, a standard of The Open Group, or the BIZBOK will know that value streams are a key concept there, next to things like capabilities. ArchiMate didn't yet have a value stream concept. Now it does, and it plays the same role as the value stream does in the TOGAF framework.

It lets you express how a company produces its value and what the stages in the value production are. So that helps describe how an organization realizes its business outcomes. That’s the most visible addition.

Next to that, there are some other, more minor changes, such as the ability to have a directed association relationship instead of only an undirected one. That can come in very handy in all kinds of modeling situations. And there are some technical improvements: various definitions have been clarified, and the specification of the metamodel has been improved.

One technical improvement specifically of interest to ArchiMate specialists is the way in which we deal with so-called derived relationships. A derived relationship is basically the conclusion you can draw from a whole chain of things connected together. You might want to see what the end-to-end connection between the things on that chain actually is, so there are rules for that. We have changed, improved, and formalized these rules. That allows, at a technical level, some extra capabilities in the language.
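To make the idea concrete, here is a minimal sketch of the common "weakest link" reading of derivation over a chain of structural relationships. The strength ordering and the derivation function below are simplifications assumed for illustration; the normative ArchiMate 3.1 rules are more formal and cover additional relationship types.

```python
# Illustrative sketch only: the "weakest link" idea behind derived structural
# relationships. The ordering below (weakest to strongest) is a simplification
# assumed for this example; the normative ArchiMate 3.1 derivation rules are
# more formal and cover additional relationship types.
STRENGTH = ["association", "serving", "realization",
            "assignment", "aggregation", "composition"]

def derive(chain):
    """Derived end-to-end relationship for a chain of structural relationships:
    the weakest relationship type occurring in the chain."""
    return min(chain, key=STRENGTH.index)

# Hypothetical example: Application Component --assignment--> Application Function
#                       --realization--> Business Process
print(derive(["assignment", "realization"]))  # -> "realization"
```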

And that’s really for the specialists. I would say the first two things, the value stream concept and this directed association -- those are the most visible for most end users.

Overall value of the value stream 

Gardner: It’s important to understand how value streams now are being applied holistically. We have seen them, of course, in the frameworks -- and now with ArchiMate. Value streams provide a common denominator for organizations to interpret and then act. That often cuts across different business units. Help us understand why value streams as a common denominator are so powerful.

Lankhorst: Value stream helps express the value that an organization produces for its stakeholders, the outcomes it produces, and the different stages needed to produce that value. It provides a concept that’s less detailed than looking at your individual business processes.

If you look at the process level, you might be standing too closely in front of the picture. You don’t see the overall perspective of how a company creates value for its customers. You only see the individual tasks that you perform, but how that actually adds value for your stakeholders -- that’s really the key.

The capability concept, and the mapping between capabilities and value streams, is also very important. That allows you to see what capabilities are needed for the stages in the value production. In that way, you have a great starting point for the rest of the development of your architecture. It tells you what you need to be able to do in order to add value in these different stages.

You can use that at a relatively high level, an economic perspective, where you look at classical value chains from, say, a supplier via internal production to marketing and sales and on to the consumer. You can also use it at a more fine-grained level. But the focus is always on the value you create -- rather than the tasks you perform.
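As a purely hypothetical illustration of the stage-to-capability mapping Lankhorst describes (all names invented, not taken from the specification), a value stream's stages can be listed alongside the capabilities each stage requires:

```python
# Purely hypothetical example: mapping the stages of an invented "Acquire
# Customer" value stream to the capabilities needed to realize each stage.
# All names are illustrative only.
value_stream = "Acquire Customer"
stage_to_capabilities = {
    "Identify Prospect": ["Market Analysis", "Lead Management"],
    "Qualify Prospect":  ["Credit Assessment", "Customer Data Management"],
    "Close Agreement":   ["Contract Management", "Pricing"],
    "Onboard Customer":  ["Account Provisioning", "Customer Communication"],
}

for stage, capabilities in stage_to_capabilities.items():
    print(f"{value_stream} / {stage}: requires {', '.join(capabilities)}")
```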

Gardner: For those who might not be familiar with ArchiMate, can you provide us with a brief history? It was first used in The Netherlands in 2004 and it’s been part of The Open Group since 2008. How far back is your connection with ArchiMate?

Lankhorst: Yes, it started as a research and development project in The Netherlands. At that time, I worked at an applied research institute in IT. We did joint collaborative projects with industry and academia. In the case of ArchiMate, there was a project in which we had, for example, a large bank, a pension fund, and the Dutch tax administration. A number of these large organizations needed a common way of describing architectures.

That began in 2002. I was the project manager of that project until 2004. Already during the project, the participating companies said, "We need this. We need a description technique for architecture. We also want you to make this a standard." And we promised to make it into a standard. We needed a separate organization for that.

So we were in touch with The Open Group in 2004 to 2005. It took a while, but eventually The Open Group adopted the standard, and the official version under the aegis of The Open Group came out in 2008, version 1. We had a number of iterations: in 2012, version 2.0, and in 2016, version 3.0. Now, we are at version 3.1.

Gardner: The vision for ArchiMate is to be a de facto modeling notation standard for Enterprise Architecture that helps improve communication between different stakeholders across an organization, a company, or even a country or a public agency. How do the new ArchiMate improvements help advance this vision, in your opinion?

Lankhorst: The value stream concept gives a broader perspective of how value is produced -- even across an ecosystem of organizations. That's broader than just a single company or a single government agency. This broad perspective is important. Of course it also works internally within an organization, as it always has, but increasingly we see this broader, ecosystem-level perspective.

Just to name two examples of that: The North Atlantic Treaty Organization (NATO), in its most recent NATO Architecture Framework, version 4, which came out early last year, now specifies ArchiMate as one of the two allowed metamodels for modeling architecture within NATO.


For these different countries and how they work together, this is one of the allowed standards. The British Ministry of Defence, for example, wants to use ArchiMate models and the ArchiMate Exchange format to communicate with industry. When it issues a request for proposal (RFP), it uses ArchiMate models to describe the context and then requires industry to provide ArchiMate models describing their proposed solutions.

Another example is in the European System of Central Banks. They have joint systems for doing transactions between central banks, and they have completely modeled those out in ArchiMate. So, all of these different central banks have the same understanding of the architecture -- across, between, and within organizations. Even within a single organization you can have the same problems of understanding what's actually happening and how the bits fit together, and of making sure everybody is on the same page.

A manifesto to control complexity 

Gardner: It's very impressive, the extent to which ArchiMate is now being used and applied. Also impressive is that ArchiMate, whose goal is to corral complexity, hasn't fallen into the trap of becoming too complex itself. One of its goals was to remain as small as possible, not to cover every single scenario.

How do you manage not to become too complex? How has that worked for ArchiMate?

Lankhorst: One of the key principles behind the language is that we want to keep it as small and simple as possible. We drew up our own ArchiMate manifesto -- some might know the Agile manifesto, and the ArchiMate manifesto is somewhat similar in spirit.

One of the key principles is that we want to cover 80 percent of the cases for 80 percent of the common users, rather than try to cover 100 percent of the cases for 100 percent of the users. The latter would mean supporting exotic use cases that require very specific language features hardly anybody uses, which would clutter the picture for all users and make the language much more complicated.
So, we have been vigilant to avoid that feature-creep, where we keep adding and adding all sorts of things to the language. We want to keep it as simple as possible. Of course, if you are in a complex world, you can’t always keep it completely straightforward. You have to be able to address that complexity. But keeping the language as easy to use and as easy to understand as possible has and will remain the goal.

Gardner: The Open Group has been adamant about having executable standards as a key principle, not too abstract but highly applicable. How is the ArchiMate standard supporting this principle of being executable and applicable?

Lankhorst: In two major ways. First, because it is implemented by most major architecture tools in the market. If you look at the Gartner Magic Quadrant and the EA tools in there, pretty much all of them have an implementation of the ArchiMate language. It is just the standard for EA.

In that sense, it becomes the one standard that rules them all in the architecture field. At a more detailed level, on the executable side, the ArchiMate Exchange format has played an important role. It makes it possible to exchange models between different tools for different applications. I mentioned the example of the UK Ministry of Defence, which wants to exchange models with industry, specify its requirements, and get back specifications and solutions as ArchiMate models. It's really important to make these kinds of models and this kind of information available in ways that the different tools can use, manipulate, and analyze.
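Because the exchange format is plain XML, even simple scripts can read a model exported from one tool. The snippet below is a minimal sketch, assuming the 3.x exchange-format namespace and the element/name structure; the file name is hypothetical, and the exact schema should be checked against the version a given tool exports.

```python
# Minimal sketch of listing the elements in an ArchiMate Model Exchange Format
# file using only the Python standard library. The namespace URI and structure
# are assumptions based on the 3.x exchange format; verify them against the
# schema version your tool actually exports. "model.xml" is a hypothetical file.
import xml.etree.ElementTree as ET

NS = {
    "a":   "http://www.opengroup.org/xsd/archimate/3.0/",
    "xsi": "http://www.w3.org/2001/XMLSchema-instance",
}

tree = ET.parse("model.xml")
for element in tree.getroot().iter(f"{{{NS['a']}}}element"):
    el_type = element.get(f"{{{NS['xsi']}}}type")           # e.g. "ValueStream"
    name = element.findtext(f"{{{NS['a']}}}name", default="(unnamed)")
    print(f"{el_type}: {name}")
```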

Gardner: That’s ArchiMate 3.1. When did that become available?

Lankhorst: The first week of November 2019.

Gardner: What are the next steps? What does the future hold? Where do you take ArchiMate next?

Lankhorst: We haven't made any concrete plans yet for possible improvements. But some things to think about are simplifying the language further so that it is even easier to use, perhaps offering a simplified notation for certain use cases where you don't need the precision of the current notation, or maybe an alternative notation that is easier on the eye.

There are some other things that we might want to look at. For example, ArchiMate currently assumes that you already have a fair idea about what kind of solution you are developing. But maybe the language should move upstream, toward the brainstorming phase of architecture, and support the initial stages of design. That might be something we want to look into.

There are various potential directions, but it's our aim to keep things simple and help architects express what they want to do -- not to make the language overly complicated and more difficult to learn.

So: simplicity, communication, and maybe expanding a bit toward early-stage design. Those are the ideas that I currently have. Of course, there is a community, the ArchiMate Forum within The Open Group, and all of the members have a say. There are outside influences as well, with various ideas of where we could take this.

Gardner: It’s also important to note that the certification program around ArchiMate is very active. How can people learn more about certification in ArchiMate?

Certification basics 

Lankhorst: You can find more details on The Open Group website; it's all laid out there. Basically, there are two levels of certification, and you can take the exams for those. You can take courses with various course providers, BiZZdesign being one of them, and then prepare for the exam.

Increasingly, I see in practice that certification is a requirement when architects are hired, so that the company hiring, say, consultants knows they at least know the basics. So, I would certainly recommend taking an exam if you are into Enterprise Architecture.

Gardner: And of course there are also the events around the world. These topics come up and are dealt with extensively at The Open Group events, so people should look for those on the website as well.

I’m afraid we’ll have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on how the latest update to the ArchiMate standard helps Enterprise Architects make complex organizations more agile and productive.


Please join me in thanking our guest, Marc Lankhorst, Managing Consultant and Chief Technology Evangelist at BiZZdesign in The Netherlands. Thank you so much, Marc.

Lankhorst: You’re welcome.

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect agile business innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect discussions sponsored by The Open Group.

Thanks again for listening, please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Transcript of a discussion on how companies and governments can better produce rapid innovation and manage complexity across their IT and business operations. Copyright Interarbor Solutions, LLC and The Open Group, 2005-2019. All rights reserved.


Friday, November 08, 2019

The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Data Center-as-a-Service


A discussion on how intelligent data center designs and components are delivering what amounts to data centers-as-a-service to SMBs, enterprises, and public sector agencies.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into data center strategies.

There has never been a better time to build an efficient, protected, powerful, contained, and modular data center -- yet many enterprises and public sector agencies cling to aging, vulnerable, and chaotic legacy IT infrastructure.

Stay with us now as we examine how automation, self-healing, and increasingly intelligent data center designs and components are delivering what amounts to data centers-as-a-service.

Here to help us learn more about a modern data center strategy that extends to the computing edge -- and beyond -- is Steve Lalla, Executive Vice President of Global Services at Vertiv. Welcome, Steve.


Steve Lalla: Thank you, Dana.

Gardner: Steve, when we look at the evolution of data center infrastructure, monitoring, and management software and services, they have come a long way. What’s driving the need for change now? What’s making new technology more pressing and needed than ever?

Lalla: There are a number of trends taking place. The first is the products we are building and the capabilities of those products. They are getting smarter. They are getting more enabled. Moore’s Law continues. What we are able to do with our individual products is improving as we progress as an industry.

The other piece that’s very interesting is it’s not only how the individual products are improving, but how we connect those products together. The connective tissue of the ecosystem and how those products increasingly operate as a subsystem is helping us deliver differentiated capabilities and differentiated performance.

So, data center infrastructure products are becoming smarter and they are becoming more interconnected.

Interconnectivity across ecosystems 

The second piece that’s incredibly important is broader network connectivity -- whether it’s wide area connectivity or local area connectivity. Over time, all of these products need to be more connected, both inside and outside of the ecosystem. That connectivity is going to enable new services and new capabilities that don’t exist today. Connectivity is a second important element.

Third, data is exploding. As these products get smarter, work more holistically together, and are more connected, they provide manufacturers and customers more access to data. That data allows us to move from a break/fix type of environment into a predictive environment. It’s going to allow us to offer more just-in-time and proactive service versus reactive and timed-based services.

And when we look at the ecosystems themselves, we know that over time these centralized data centers -- whether they be enterprise data centers, colocation data centers, or cloud data centers -- are going to be more edge-based and module-based data centers.

And as that occurs, all the things we talked about -- smarter products, more connectivity, data and data enablement -- are going to be more important as those modular data centers become increasingly populated in a distributed way. To manage them, to service them, is going to be increasingly difficult and more important.

And one final cultural piece is happening. A lot of the folks who interact with these products and services will face what I call knowledge thinning. The highly trained professionals -- especially on the power side of our ecosystem -- that talent is reaching retirement age and there is a high demand for their skills. As data center growth continues to be robust, that knowledge thinning needs to be offset with what I talked about earlier.

So there are a lot of really interesting trends under way right now that impact the industry and are things that we at Vertiv are looking to respond to.

Gardner: Steve, these things when they come together form, in my thinking, a whole greater than the sum of the parts. When you put this together -- the intelligence, efficiency, more automation, the culture of skills -- how does that lead to the notion of data center-as-a-service?

Lalla: As with all things, Dana, one size does not fit all. I'm always cautious about generalizing because our customer base is so diverse. But there is no question that in areas where customers would like us to be operating their products and their equipment instead of doing it themselves, data center-as-a-service reduces the challenges of knowledge thinning and eases the burden of keeping products optimized. We have our eyes on all those products on their behalf.

And so, through the connectivity of the product data and the data lakes we are building, we are better at predicting what should be done. Increasingly, our customers can partner with us to deliver a better performing data center.

Gardner: It seems quite compelling. Modernizing data centers means a lot of return on investment (ROI), of doing more with less, and becoming more predictive about understanding requirements and then fulfilling them.

Why are people still stuck? What holds organizations back? I know it will vary from site to site, but why the inertia? Why don’t people run to improve their data centers seeing as they are so integral to every business? 

Adoption takes time

Lalla: Well, these are big, complex pieces of equipment. They are not the kind of equipment you decide to change every year. One of the key factors affecting the rate at which connectivity, technology, processing capability, and data liberation get adopted is the speed at which customers are able to change out the equipment they currently have in their data centers.

Now, I think that we, as a manufacturer, have a responsibility to do what we can to improve those products over time and make new technology solutions backward compatible. That can be through updating communication cards, building adjunct solutions like we do with Liebert® ICOM™-S and gateways, and figuring out how to make equipment that is going to be in place for 15 or 20 years as productive and as modern as you can over that lifespan.

So number one, the duration of product in the environment is certainly one of the headwinds, if you will.


Another is the concept of connectivity. And again, different customers have different comfort levels with connectivity inside and outside of the firewall. Clearly the more connected we can be with the equipment, the more we can update the equipment and assess its performance. Importantly, we can assess that performance against a big data lake of other products operating in an ecosystem. So, I think connectivity, and having the right solutions to provide for great connectivity, is important.

And there are cultural elements to our business in that, "Hey, if it works, why change it, right?" If it's performing the way you need it to perform and it's delivering on the power and cooling needs of the business, why make a change? Again, it's our responsibility to work with our customers to help them best understand that when new technology gets added -- when new cards get added and when new assistants, I call them digital assistants, get added -- that technology will have a differential effect on the business.

So I think there is a bit of reality that gets in the way of that sometimes.

Gardner: I suppose it’s imperative for organizations like Vertiv to help organizations move over that hump to get to the higher-level solutions and overcome the obstacles because there are significant payoffs. It also sets them up to be much more able to adapt to the future when it comes to edge computing, which you mentioned, and also being a data-driven organization.

How is Vertiv differentiating itself in the industry? How does combining services and products amount to a solution approach that helps organizations modernize?

Three steps that make a difference

Lalla: I think we have a differentiated perspective on this. When we think about service, and we think about technology and product, we don't think about them as separate. We think about them altogether. My responsibility is to combine those software and service ecosystems into something more efficient that helps our customers have more uptime and moves from break/fix toward more predictive, just-in-time types of service.

And the way we do that is through three steps. Number one, we have to continue to work closely with our product teams to determine, early in the product definition cycle, which products need to be interconnected into an as-a-service or a self-service ecosystem.

We spend quite a bit of time impacting the roadmaps and putting requirements into the product teams so that they have a better understanding of what, in fact, we can do once data and information get liberated. A great strategy always starts with great product, and that's core to our solution.

The next step is a clear understanding that some of our customers want to service equipment themselves. But many of our customers want us to do that for them, whether it's physically servicing equipment or monitoring and managing the equipment remotely, such as with our LIFE™ management solution.

We are increasingly looking at that as a continuum. Where does self-service end, and where do delivered services begin? In the past, what we do from a self-service perspective and what we deliver as a service have been relatively separate. But increasingly, you see those being blended together because customers want a seamless handover. When they discover something needs to be done, we at Vertiv can pick up from there and perform that service.

So the connective tissue between self-service and Vertiv-delivered service is something we are bringing increasing clarity to.

And then finally -- we talked about this earlier -- we are very actively building a data lake that draws from all the ecosystems I just talked about. We have billions of rows of normalized data in our data lake benefiting our customers as we speak.
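What a "normalized row" actually looks like is, of course, Vertiv's own design; the sketch below is purely hypothetical, with invented field names, and only illustrates the general idea of mapping vendor-specific readings onto a common record shape before they land in a data lake.

```python
# Purely hypothetical sketch: normalizing readings from diverse equipment into
# a common record shape before loading them into a data lake. All field names,
# metric names, and values are invented for illustration only.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Reading:
    asset_id: str     # which UPS, cooling unit, PDU, etc.
    asset_type: str   # "ups", "crac", "pdu", ...
    metric: str       # normalized metric name
    value: float
    unit: str
    timestamp: str    # ISO-8601, UTC

def normalize_ups(raw: dict) -> Reading:
    """Map a vendor-specific UPS payload onto the common record shape."""
    return Reading(
        asset_id=raw["serial"], asset_type="ups",
        metric="battery_minutes_remaining",
        value=float(raw["batt_min"]), unit="min",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

print(asdict(normalize_ups({"serial": "UPS-042", "batt_min": "27"})))
```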

Gardner: Steve, when you service a data center at that solution-level through an ecosystem of players, it reminds me of when IT organizations started to manage their personal computers (PCs) remotely. They didn’t have to be on-site. You could bring the best minds and the best solutions to bear on a problem regardless of where the problem was -- and regardless of where the expertise was. Is that what we are seeing at the data center level?

Self-awareness remotely and in-person

Lalla: Let's be super clear: upgrading the software on an uninterruptible power supply (UPS) is a lot harder than upgrading software on a PC. But the analogy of understanding what must be done in person and what can be done remotely is a good one. And you are correct. Over years and years of improvement in IT ecosystems, we went from a very much in-person type of experience, fixing PCs, to one where, much like mobile phones, they are self-aware and self-healing.

This is why I talked about the connectivity imperative earlier, because if they are not connected then they are not aware. And if they are not aware, they don’t know what they need to do. And so connectivity is a super important trend. It will allow us to do more things remotely versus always having to do things in-person, which will reduce the amount of interference we, as a provider of services, have on our customers. It will allow them to have better uptime, better ongoing performance, and even over time allow tuning of their equipment.

We are at the early stages of that journey. You could argue the mobile phone and PC worlds are at the very late stages of their automation journey; we are in the very early stages of ours. But the things we talked about earlier -- smarter products, connectivity, and data -- are all important factors influencing that.

Gardner: Another evolution in all of this is that there is more standardization, even at the data center level. We saw standardization as a necessary step at the server and storage level -- when things became too chaotic, too complex. We saw standardization as a result of virtualization as well. Is there a standardization taking place within the ecosystem and at that infrastructure foundation of data centers?

Standards and special sauce

Lalla: There has been a level of standardization in what I call the self-service layer, with protocols like BACnet, Modbus, and SNMP. Those at least allow a monitoring system to ingest information and data from a variety of diverse devices, at a minimum to monitor how that equipment is performing.
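As an example of what that protocol-level standardization makes possible, here is a small sketch of polling a UPS over SNMP. It assumes the pysnmp library and a device exposing the standard UPS-MIB (RFC 1628); the OIDs and the device address are illustrative and should be verified against the actual equipment's MIB.

```python
# Illustrative sketch: polling a UPS over SNMP, one of the open protocols named
# above. Assumes the pysnmp library and a device exposing the standard UPS-MIB
# (RFC 1628). The OIDs and address are examples; verify against the device's MIB.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

UPS_OIDS = {
    "upsBatteryStatus":             "1.3.6.1.2.1.33.1.2.1.0",
    "upsSecondsOnBattery":          "1.3.6.1.2.1.33.1.2.2.0",
    "upsEstimatedMinutesRemaining": "1.3.6.1.2.1.33.1.2.3.0",
}

def poll_ups(host: str, community: str = "public") -> dict:
    """Read a few UPS health metrics and return them keyed by metric name."""
    readings = {}
    for name, oid in UPS_OIDS.items():
        error_indication, error_status, _, var_binds = next(getCmd(
            SnmpEngine(), CommunityData(community),
            UdpTransportTarget((host, 161)), ContextData(),
            ObjectType(ObjectIdentity(oid))))
        if not error_indication and not error_status:
            readings[name] = int(var_binds[0][1])
    return readings

print(poll_ups("10.0.0.50"))  # hypothetical UPS management address
```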

I don't disagree that there is an opportunity for even more standardization, because that will make that whole self-service, delivered-as-a-service ecosystem more efficient. But what we see in that control plane is really Vertiv's unique special sauce. We are able to do things between our products with solutions -- like Liebert ICOM-S -- that allow our thermal products to work better together than if they were operating independently.


You are going to see an evolution of continued innovation in peer-to-peer networking in the control plane that probably will not be open and standard. But it will provide advances in how our products work together. You will see in that self-service, as-a-service, and delivered-service plane continued support for open standards and protocols so that we can manage more than just our own equipment. Then our customers can manage and monitor more of their own equipment.

And this special sauce, which includes the data lakes and algorithms -- there is a lot of intellectual property and capital in building those algorithms and those outcomes -- helps customers operate better. We will probably keep that close to the vest in the short term, and then we'll see where it goes over time.

Gardner: You earlier mentioned moving data centers to the edge. We are hearing an awful lot architecturally about the rationale for not moving the edge data to the cloud or the data center, but instead moving the computational capabilities right out to the edge where that data is. The edge is where the data streams in, in massive quantities, and needs to be analyzed in real-time. That used to be the domain of the operational technology (OT) people.

As we think about data centers moving out to the edge, it seems like there’s a bit of an encroachment or even a cultural clash between the IT way of doing things and the OT way of doing things. How does Vertiv fit into that, and how does making data center-as-a-service help bring the OT and IT together -- to create a whole greater than the sum of the parts?

OT and IT better together 

Lalla: I think maybe there was a clash. But with modular data centers and things like the SmartAisle and SmartRow solutions we offer today, these can be fully contained, standalone systems. Increasingly, we are working with strategic IT partners on understanding how that ecosystem has to work as a complete solution -- not with power and cooling separate from IT performance, but taking the best of the OT world, power and cooling, and the best of the IT world, and combining that with things like alarms and fire suppression. We can build a remote management and monitoring solution that can be outsourced, if you want to consume it as a service, or insourced if you want to do it yourself.


And there’s a lot of work to do in that space. As an industry, we are in the early stages, but I don’t think it’s hard to foresee a modular data center that should operate holistically as opposed to just the sum of its parts.

Gardner: I was thinking that the OT-IT thing was just an issue at the edge. But it sounds like you’re also referring to it within the data center itself. So flesh that out a bit. How do OT and IT together -- managing all the IT systems, components, complexity, infrastructure, support elements -- work in the intelligent, data center-as-a-service approach?

Lalla: There is the data center infrastructure management (DCIM) approach, which says, “Let’s bring it all together and manage it.” I think that’s one way of thinking about OT and IT, and certainly Vertiv has solutions in that space with products like Trellis™.

But I actually think about it as: Once the data is liberated, how do we take the best of computing solutions, data analytics solutions, and stuff that was born in other industries and apply that to how we think about managing, monitoring, and servicing all of the equipment in our industrial OT space?

It’s not necessarily that OT and IT are one thing, but how do we apply the best of all of technology solutions? Things like security. There is a lot of great stuff that’s emerged for security. How do we take a security-solutions perspective in the IT space if we are going to get more connected in the OT space? Well, let’s learn from what’s going on in IT and see how we can apply it to OT.

Just because DCIM has been tackled for years doesn’t mean we can’t take more of the best of each world and see how you can put those together to provide a solution that’s differentiated.

I go back to the Liebert ICOM-S solution, which uses desktop computing and gateway technology, and application development running on a high-performance IT piece of gear, connected to OT gear to get those products that normally would work separately to actually work more seamlessly together. That provides better performance and efficiency than if those products operated separately.

Liebert ICOM-S is a great example of where we have taken the best of the IT world -- compute technology and connectivity -- and the best of the OT world -- power and cooling -- and built a solution whose interaction is differentiated in the marketplace.

Gardner: I’m glad you raised an example because we have been talking at an abstract level of solutions. Do you have any other use cases or concrete examples where your concept for infrastructure data center-as-a-service brings benefits? When the rubber hits the road, what do you get? Are there some use cases that illustrate that? 

Real LIFE solutions

Lalla: I don’t have to point much further than our Vertiv LIFE Services remote monitoring solution. This solution came out a couple years ago, partly from our Chloride® Group acquisition many years ago. LIFE Services allows customers to subscribe to have us do the remote monitoring, remote management, and analytics of what’s happening -- and whenever possible do the preventative care of their networks.

And so, LIFE is a great example of a solution with connectivity, with the right data flowing from the products, and with the right IT gear so our personnel take the workload away from the customer and allow us to deliver a solution. That’s one example of where we are delivering as-a-service for our customers.

We are also working with customers -- and we can’t expose who they are -- to bring their data into our large data lake so we can help them better predict how various elements of their ecosystem will perform. This helps them better understand when they need just-in-time service and maintenance versus break/fix service and maintenance.

These are two different examples where Vertiv provides services back to our customers. One is running a network operations center (NOC) on their behalf. Another uses the data lake that we’ve assimilated from billions of records to help customers who want to predict things and use the broad knowledge set to do that.

Gardner: We began our conversation with all the great things going on in modern data center infrastructure and solutions to overcome obstacles to get there, but economics plays a big role, too. It’s always important to be able to go to the top echelon of your company and say, “Here is the math, here’s why we think doing data center modernization is worth the investment.”

What is there about creating that data lake, the intellectual property, and the insights that help with data center economics? What’s the total cost of ownership (TCO) impact? How do you know when you’re doing this right, in terms of dollars and cents?

Uptime is money

Lalla: It’s difficult to generalize too much but let me give you some metrics we care about. Stuff is going to break, but if we know when it’s going to break -- or even if it does break -- we can understand exactly what happened. Then we can have a much higher first-time fix rate. What does that mean? That means I don’t have to come out twice, I don’t have to take the system out of commission more than once, and we can have better uptime. So that’s one.

Number two, by getting the data we can understand what's going on with time to repair across the network -- how long it takes from when we get on-site to when we can fix something. Certainly it's better if you do it the first time, and it's also better if you know exactly what you need when you're there to perform the service exactly the way it needs to be done. Then you can get in and out with minimal disruption.

A third one that's important -- and one that I think will grow in importance -- is we're beginning to measure what we call service avoidance. The way we measure service avoidance is we call up a customer and say, "Hey, you know, based on all this information, based on these predictions, based on what we see from your network or your systems, we think these four things need to be addressed in the next 30 days. If not, our data tells us that we will be coming out there to fix something that has broken, as opposed to fixing it before it breaks." So service avoidance or service simplification is another area that we're looking at.
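To make the first two of those metrics concrete, here is a simple illustrative calculation over invented service-visit records; the numbers and record format are examples only, not Vertiv data, and just show how first-time fix rate and mean time to repair are computed.

```python
# Illustrative calculation of two of the service metrics discussed above:
# first-time fix rate and mean time to repair. The records and numbers below
# are invented example data.
from statistics import mean

# Each record: (fixed_on_first_visit, hours_from_arrival_to_repair)
visits = [(True, 2.5), (True, 1.0), (False, 4.0), (True, 3.0), (False, 6.5)]

first_time_fix_rate = sum(1 for fixed, _ in visits if fixed) / len(visits)
mean_time_to_repair = mean(hours for _, hours in visits)

print(f"First-time fix rate: {first_time_fix_rate:.0%}")        # 60%
print(f"Mean time to repair: {mean_time_to_repair:.1f} hours")  # 3.4 hours
```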

There are many more -- I mean, meeting service level agreements (SLAs), uptime, and all of those -- but when it comes to the tactical benefits of having smarter products, of being more connected, liberating data, and consuming that data and using it to make better decisions as a service -- those are the things that customers should expect differently.

Gardner: And in order to enjoy those economic benefits through the Vertiv approach and through data center-as-a-service, does this scale down and up? It certainly makes sense for the larger data center installations, but what about a small- to medium-sized business (SMB)? What about a remote office, or a closet and a couple of racks? Does that make sense, too? Do the economic and productivity benefits scale down as well as up?

Lalla: Actually, when we look at our data, the customers who don't have all the expertise to manage and monitor their single-phase, small three-phase, or Liebert CRV [cooling] units -- who don't have the skill set -- those are the customers who really appreciate what we can do to help them. It doesn't mean that customers further up the stack don't appreciate it; as you go up the stack, what those customers value isn't that they can do some of the services themselves. They may be more self-service-oriented, but what they are increasingly interested in is how we're using data in our data lake to better predict things they can't predict by just looking at their own equipment.

So, the value shifts depending on where you are in the stack of complexity, maturity, and competency. It also varies based on hyperscale, colocation, enterprise, small enterprise, and point-of-sale. There are a number of variables so that’s why it’s difficult to generalize. But this is why the themes of productivity, smarter products, edge ecosystems, and data liberation are common across all those segments. How they apply the value that’s extracted in each segment can be slightly different.

Gardner: Suffice it to say data center-as-a-service is highly customizable to whatever organization you are and wherever you are on that value chain.

Lalla: That’s absolutely right. Not everybody needs everything. Self-service is on one side and as-a-service is on the other. But it’s not a binary conversation.

Customers who want to do most of the stuff themselves with technology, they may need only a little information or help from Vertiv. Customers who want most of their stuff to be managed by us -- whether it’s storage systems or large systems -- we have the capability of providing that as well. This is a continuum, not an either-or.

Gardner: Steve, before we close out, let’s take a look to the future. As you build data lakes and get more data, machine learning (ML) and artificial intelligence (AI) are right around the corner. They allow you to have better prediction capabilities, do things that you just simply couldn’t have ever done in the past.

So what happens as these products get smarter, as we are collecting and analyzing that data with more powerful tools? What do you expect in the next several years when it comes to the smarter data center-as-a-service?

Circle of knowledge gets smart 

Lalla: We are in the early stages, but it's a great question, Dana. There are two outcomes that will benefit all of us. One, that data, with the right algorithms and analysis, is going to allow us to build products that are increasingly smarter.

There is a circle of knowledge: products produce information that goes to the data lake; we run the right algorithms, look for the right pieces of information, feed that back into our products, and continually evolve the capability of our products as time goes on. Those products will break less, need less service, and be more reliable. We should just expect that, just as we have seen in other industries. So that's number one.

Number two, my hope and belief is that we move from a break/fix mentality or environment, where we wait for something to show up on a screen as an alarm or an alert, to being highly predictive and just-in-time.

As an industry -- and certainly at Vertiv -- first-time fix, service avoidance, and time to repair are all going to get much better, which means one simple thing for our customers. They are going to have more efficient and well-tuned data centers. They are going to be able to operate with higher rates of uptime. All of those things are going to result in goodness for them -- and for us.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring how automation, self-healing, and increasingly intelligent data center designs are delivering what amounts to data centers-as-a-service. And we’ve learned how modern data center strategies will extend to the computing edge and beyond.

So please join me in thanking our guest, Steve Lalla, Executive Vice-President of Global Services at Vertiv. Thank you so much, Steve.


Lalla: Thanks, Dana.

Gardner: And a big thank you as well to our audience for joining us for this sponsored BriefingsDirect data center strategies interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Vertiv-sponsored discussions.

Thanks again for listening. Please pass this along to your community and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.

A discussion on how intelligent data center designs and components are delivering what amounts to data centers-as-a-service to SMBs, enterprises, and public sector agencies. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.
