Tuesday, June 14, 2016

How IT4IT Helps Turn IT into a Transformational Service for Digital Business Innovation

Transcript of a discussion on the business benefits of transforming IT organizations into agents of change for businesses.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership panel discussion coming to you in conjunction with The Open Group 2016 San Francisco event in January. We'll now delve into the business benefits of transforming IT organizations into agents of change for businesses.

The Open Group IT4IT initiative, a new reference architecture for managing IT as a business, grew out of a need at some of the world's biggest organizations to make their IT departments more responsive, more agile. We’ll learn now how those IT departments within an enterprise and the vendors that support them have reshaped themselves, and how others can follow their lead.

And so to learn more about how IT4IT fosters change, we're joined by Michael Fulton, Principal Architect at CC&C Solutions; Philippe Geneste, a Partner at Accenture; Sue Desiderio, a Director at PricewaterhouseCoopers; Dwight David, Enterprise Architect at Hewlett Packard Enterprise (HPE); and Rob Akershoek, Solution Architect IT4IT at Shell IT International.

So let me start with our panel and just go down the line. Philippe, tell me where you think the most progress has been made in terms of making IT4IT a mature reference architecture.

Philippe Geneste: The two innovations in the IT4IT Reference Architecture -- the Service Backbone and the Request to Fulfill (R2F) value stream -- are its greatest novelties.

Are they mature? They're mature enough, and they'll probably evolve in their level of maturity. There are a number of areas that are maturing, and some that are still in design. IT Financial Management, for instance, is one that I'm working on, along with the service costing within it, which I think we'll have ready by version 2.1. The idea is to include it as guidance in version 2.1.

The value streams by themselves are also mature and almost complete. There are a number of improvements we can make to all of them, but overall the reference architecture is usable today as an architecture to start with. It's not quite ready for vendor certification, although that's upcoming, but there are a number of good things, and a number of implementations would benefit from using the current IT4IT Reference Architecture 2.0.

Gardner: Sue, where do you see the most traction and growth, and what would you like to see improved?

Sue Desiderio: I agree with Philippe’s statements. Also picking up on what Lars said earlier, it's an easy entry point to start with Detect to Correct, which is often where we see it, because it’s one of the value streams that’s a little bit more known and understood. So that’s an easier point of entry for the whole IT4IT Value Chain, compared to some of the other value streams.

The service model, as we've stated all along, is definitely the backbone to the whole IT value chain. Although it's well-formed and in a good, mature state, there's still plenty of work to do to make that consumable to the IT organizations to understand all the different phases of the life cycle and all the different data objects that make up the Service Backbone. That's something that we're currently working on for the 2.1 version, so that we have better examples. We can show how it applies in a real IT organization, and it’s not just what’s in the documentation today.

More detail

Rob Akershoek: I don't think it's about positive and negative in this case, but more about areas that we need to work on in more detail, like defining the service-broker role that you see in the new IT organization and how you interface with your external service providers. We've identified a number of areas where the IT organization has key touch points with these vendors, like the service catalog: you need to synchronize catalog information with the external vendors and aggregate it into your own catalog.

But there's also the fulfillment API -- how do you communicate a request to your suppliers or different technology stacks and get the consumption and cost data back in? I think we define that today in the IT4IT standard, but we need to go to a lower level of detail -- how do we actually integrate with vendors and our service providers?

So interfacing with the vendors in the ecosystem happens on many different levels: the catalog level, request fulfillment (where you actually provision), the cost and consumption data, and those kinds of aspects.
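[Editor's note: To make the kind of fulfillment interface being described concrete, here is a minimal sketch in Python. Every class, field, and method name below is hypothetical -- the IT4IT standard does not define this API; the sketch only illustrates a catalog-item order going out to a provider and consumption and cost data coming back against the same subscription.]

# Hypothetical sketch of a vendor-neutral fulfillment interface.
# These names are illustrative, not part of the IT4IT standard.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FulfillmentRequest:
    catalog_item_id: str    # item aggregated into our own catalog
    subscription_id: str    # ties the order to a service subscription
    requested_by: str
    parameters: Dict[str, str] = field(default_factory=dict)  # sizing, region, options

@dataclass
class ConsumptionRecord:
    subscription_id: str    # lets cost data flow back to the same service
    period: str             # e.g. "2016-01"
    quantity: float
    unit: str               # e.g. "vCPU-hours"
    cost: float
    currency: str

class ProviderAdapter:
    """One adapter per external provider or technology stack."""

    def submit(self, request: FulfillmentRequest) -> str:
        """Communicate the request to the supplier; return the provider's order ID."""
        raise NotImplementedError

    def consumption(self, subscription_id: str) -> List[ConsumptionRecord]:
        """Get the consumption and cost data back in."""
        raise NotImplementedError

Each provider-specific adapter would implement these two methods, so the rest of the toolchain never has to deal with vendor-proprietary formats directly.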

Another topic is the link to security and identity and access management. It's an area where we still need to clarify how all the subscriptions to a service link in to that access-management capability, which is part of the subscription and, of course, the fulfillment. We didn't identify it as a separate functional component.

Gardner: Dwight, where are you most optimistic and where would you put more emphasis?

Dwight David: I'll start with the latter. More emphasis needs to be on our approach to Detect to Correct. Oftentimes, I see people thinking about Detect to Correct in the traditional mode of being reactive, as opposed to understanding that this model can be applied even to the new, fast-changing, user-centric economy and within hybrid IT. A change in thinking in the application of the value streams would also help us.

Many of us have a lot of gray hairs, including myself, and we revert to the old way of thinking, as opposed to the way we should be moving forward. That’s the area where we can do the most.

What's really good, though, is that a lot of people understand Detect to Correct. So it’s an easy adoption in terms of understanding the Reference Architecture. It’s a good entry point to the IT4IT Reference Architecture. That’s where I see the actual benefit. I would encourage us to make it useful, use it, and try it. The most benefit happens then.

Gardner: And Michael, room for optimism and room for improvement?


Management Guide

Michael Fulton: I want to build on Dwight’s point around trying it by sharing. The one thing I'm most excited about, particularly this week, is the Management Guide -- very specifically, chapter 5 of the Management Guide. I hope all of you got a chance to grab your copy of that. If you haven’t, I recommend downloading it from The Open Group website. That chapter is absolutely rich in content about how to actually implement IT4IT.

And I tip my hat to Rob, who did a great piece of work, along with several other people. If you want to pick up the standard and use it, start there, start with chapter 5 of the Management Guide. You may not need to go much further, because that’s just great content to work with. I'm very excited about that.

From the standpoint of where we need to continue to evolve and grow as a standard, we've referenced some of the individual pieces, but at a higher level. The supporting activities in general all still need to evolve and get to the level of detail that we have with the value streams. That’s a key area for me.

The next area that I would highlight, and I know we're actively starting work on this, is getting down to the level of detail where we can do data interoperability, where we can start to outline the specifics needed to define APIs between the functional components in such a way that we can ultimately get back to that Open Group vision of Boundaryless Information Flow.
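[Editor's note: A toy illustration in Python of the data-interoperability goal Fulton describes. The schema and field names are invented for this sketch; the point is only that two functional components agree on a shared contract and validate records against it at the boundary.]

# Two functional components exchanging a service record against a shared,
# agreed schema. The schema here is invented for illustration.
import json

SERVICE_RECORD_SCHEMA = {"service_id", "lifecycle_phase", "owner", "version"}

def export_service_record(record: dict) -> str:
    missing = SERVICE_RECORD_SCHEMA - record.keys()
    if missing:
        raise ValueError(f"record incomplete, missing: {sorted(missing)}")
    return json.dumps(record)

def import_service_record(payload: str) -> dict:
    record = json.loads(payload)
    missing = SERVICE_RECORD_SCHEMA - record.keys()
    if missing:
        raise ValueError(f"payload not interoperable, missing: {sorted(missing)}")
    return record

# Example: a portfolio tool hands a conceptual service to a catalog tool.
payload = export_service_record({
    "service_id": "svc-001",
    "lifecycle_phase": "conceptual",
    "owner": "app-team-a",
    "version": "1.0",
})
print(import_service_record(payload))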

Gardner: How do we bridge the divide between a cloud provider, or a series of providers, and have IT take on a brokering role within the organization? As the broker, they're going to be held responsible for the performance, regardless of where those services originate and how they interoperate or not.

What do we see as needed in order to make that boundarylessness extend to this idea of a brokered IT organization, a hybrid organization, but still able to produce a common approach to support quality of service across IT in that particular organization? How do we get to that hybrid vision, Philippe?

Geneste: We'll get there step-by-step. There's a practical step that’s implementable today. My suggestion would be that every customer or company that selects an outsourcer, that selects a cloud vendor, that selects a product, uses the IT4IT Reference Architecture in the request for proposal (RFP), putting a strong emphasis on the integration.

We see a lot of RFPs that are still silo-based -- which is the best product for project and portfolio management, which is the best service management tool -- but it's not very often that we see integration as the top value measured in the RFP. That would be one point.

The discussions with the vendors -- again, cloud vendors, outsourcers, or consulting firms -- should start from this: use it as an integration architecture, and tell us how you would do things based on these standardized concepts. That's a practical step that can be employed today.

In a second step, when we go further into the vendor specification, there are vendors today whose products and cloud offerings are closer to the concepts we have in the reference architecture. They're maybe not certified, maybe not using the same terminology, but the concepts are there, or the path to the concepts is shorter.

And then ultimately, steps 3 and 3.5 will be certified product vendors and certified cloud service offerings, hopefully with full integration according to the reference architecture and, eventually, even plug-and-play. We're doing a little bit about plug-and-play, but at least integration.

Gardner: What sort of time frame would you put on those steps? Is this a two-year process, a four-year process, to soon to tell?

Achievable goals

Geneste: That's a tough one. I suppose the vendors should be responding to this one. For the cloud service providers, it's a little bit trickier, but for the consulting firms and the service providers, it should take what it takes to get the workforce trained and the concepts spread inside the organization. So within six to 12 months, the critical mass should be there in these organizations. It's tough, but project by project, customer by customer, it's achievable.

Some vendors are on the way, and we've seen several vendors talk about IT4IT in this conference. I know that those have significant efforts on the way and are preparing for vendor certification. It will be probably a multiyear process to get the full suite of products certified, because there is quite a lot to change in the underlying software, but progressively, we should get there.

So, first levels of certification should come within one to two years, possibly even sooner. I would be interested in knowing what the vendor responses will be.

Gardner: Sue, along the same lines, what do you see needed in order to make the IT department able to exercise the responsibility of delivering IT across multiple players and multiple boundaries?

Desiderio: Again, it’s starting with the awareness and the open communication about IT4IT and, on a specific instance, where that fits in. Depending on the services we're getting from vendors, or whether it's even internal services that we are getting, where do they fit into the whole IT4IT framework, what functions are we getting, what are the key components, and where are our interface points?

Have those conversations upfront in the contract conversations, so that everyone is aware of what we're trying to accomplish and that we're trying to seek that seamless integration between those suppliers and us.

Gardner: Rob, this would appear to be a buyer’s market in terms of their ability to exercise some influence. If they go seeking RFPs, if there are fewer cloud providers than there were general vendors in a traditional IT environment, they should be able to dictate this, don’t you think?

Akershoek: In the cloud world, the consumer would not dictate at all. That’s the traditional way that we dictate how an operator should provide us data. That’s the problem with the cloud. We want to consume a standard service. So we can't tell the cloud vendor, send me your cost data in this format. That won't work, because we don’t want the cloud vendor to make something proprietary for us.

That’s the first challenge. The cloud vendors are out there and we don’t want to dictate; we want to consume a standard service. So if they set up a catalog in their way, we have to adopt that. If they do the billing their way, we have to adopt it or select another cloud vendor. That’s the only option you have, select another vendor or adopt the management practices of the cloud vendor. Otherwise, we will continuously have to update it according to our policy. That’s a key challenge.

That's why managing your cloud vendor is really about the entire value chain. You start with making your portfolio, thinking about what cloud services you put in your offerings, or your portfolio. So for PaaS platforms, we use vendor A, and for infrastructure as a service, vendor B. That's where it starts. Which vendors do I engage with?

And then, going down to the Request to Fulfill, it’s more like what are the products that we're allowed to order and how do we provision those? Unfortunately, the cloud vendors don’t have IT4IT yet, meaning we have to do some work. Let’s say we want to provision the cloud environment. We make sure that all the cloud resources we provision are linked to that subscription, linked to that service, so at least we know the components that a cloud vendor is managing, where it belongs, and which service is consuming that.
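[Editor's note: A minimal Python sketch of the linkage Akershoek describes. The cloud object, its methods, and the tag keys are hypothetical stand-ins for whatever provisioning API a given vendor exposes.]

# Illustrative only: every provisioned resource is tagged with the
# subscription and service it belongs to, so we always know which
# vendor-managed component belongs where and which service consumes it.
def provision_with_lineage(cloud, resource_spec, subscription_id, service_id):
    resource = cloud.provision(resource_spec)    # vendor-specific call
    cloud.tag(resource.id, {
        "subscription_id": subscription_id,      # who ordered it
        "service_id": service_id,                # which service consumes it
    })
    return resource

def resources_for_service(cloud, service_id):
    """Answer 'which components make up this service?' from tags alone."""
    return [r for r in cloud.list_resources()
            if r.tags.get("service_id") == service_id]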

Different expectations

Fulton: Rob has a key point here around the expectations being different around cloud vendors, and that’s why IT4IT is actually so powerful. A cloud vendor is not going to customize their interfaces for every single individual company, but we can hold cloud vendors accountable to an open industry standard like IT4IT, if we have detailed out the right levels of interoperability.

To me, the way this thing comes together long term is through this open standard, and then through that RFP process, customer organizations holding their vendors accountable to delivering inside that open standard. In the world of cloud, that’s actually to the benefit of the cloud providers as well.

Akershoek: That’s a key point you make there, indeed.

David: And just to piggyback on what we're saying, it goes back to the value proposition. Why am I doing this? If we have something that's an open standard, it enables velocity. You can identify costs much more easily. It's simpler, and it goes back again to the value proposition and showing these cloud vendors that because of a standard, I'm able to consume more of your services, I'm able to consume your services more easily, and because it's a standard, I'm guaranteed to get my value. Again, it's back to the value proposition that the open standard offers.

Gardner: We've heard that this standard makes more sense, at least now, for large organizations than say startups or small and medium-sized businesses (SMBs), and that automation is essential and that manual processes are a bug. How do you react to that? Is that wisdom and truth for you that automation is king and that this is designed for large, complex organizations?

Geneste: It will take more work for a large organization to deploy the full reference architecture than it will be for a smaller organization. Those organizations that today use a lot of DevOps or Agile practices that are close to the concepts that we have -- Service Catalog, Service Backbone -- should be able to get it straight very quickly, including with vendor products and things that aren’t necessarily fully certified, but at least for which the concepts are quite close to what we are recommending here.

So, the large organizations will have a different set of challenges, which is that there is a massive legacy that we need to transform along the concepts. It's possible. We've heard recommendations to start with Detect to Correct or to do it one way or another, but the smaller organizations can get it right by simplifying some of the implementations, getting the right tools, and moving faster onto new tools.

I view it as two different challenges, but not necessarily inappropriate for the smaller organizations. It’s a set of very good practices that are implementable and the description is in the books that have just been published.

The ones for which I see an extra challenge, or at least that I see today in the industry, are the big outsourcers -- those long-term contracts: data-center outsourcing, five-year network deals, and so forth. Companies that use those should plan, before their contract renegotiation, exactly what I said for the RFPs for vendor products.

When you're planning a year ahead, two years ahead, even three years ahead, to renegotiate your outsourcing contracts, introduce the reference architecture and use it as a benchmark for who you're going to contract with. If you know that you're going to re-negotiate with the same vendor, try to influence them enough that you can have these concepts and these architectures in place.

Gardner: Sue, how about this issue of automation? Is it essential to be largely automated to realize the full benefits of IT4IT or is that more of a nice-to-have goal? What's the relationship between a high degree of automation in your IT organization for the support of these activities and the standard and Reference Architecture?

Automation is key

Desiderio: I'm a believer that automation is key, so we definitely have to get automation throughout the whole end-to-end value chain no matter what. That’s really part of the whole transformation going into this new model.

You see that throughout the whole value chain. We talked about it individually on the different value streams and how it comes back.

I also want to touch on what's the right size of company or firm to pick up IT4IT. I agree with where Philippe was coming from. Smaller shops can pick it up and start leveraging it more quickly, because they don't have that legacy IT, where nothing is built on composite services: every system points directly at specific servers and networks, instead of being built on services, like a hosting service or a monitoring-and-response service.

For larger IT organizations, there's a lot more change, but it's critical for us to survive and be viable in the future for those IT shops, the larger ones in large organizations, to start adopting and moving forward.

It's not a big bang. We, in a larger IT shop, are going to be running in a mixed mode for a long time to come. It's looking at where to start seeing that business value as you look at new initiatives and things within your organization. How do you start moving into the new model with the new things? How do you start transitioning your legacy systems and whatnot into more of the new way of thinking and looking at that consumption model and what we're trying to do, which is focus on that business outcome.

So it's much harder for the larger IT shops, but the concepts apply to all sizes.

Gardner: Rob, the subject of the moment is size and automation.

Akershoek: I think the principle we just discussed, automation, is a good principle, but if you look at the legacy, as you mentioned, you're not going to automate your legacy, unless you have a good business case for that. You need to standardize your services on many different layers, and that's what you see in the cloud.

Cloud vendors are standardizing to an extreme degree, defining standard component services. You have to do the same: define your standard services and then automate all of those. The legacy ones you can't automate, or probably don't want to automate.

So it's more standardization, more standard configurations, and then you can automate delivery and Detect to Correct as well. You can't do that if you have a very complex configuration that changes all the time without any standards.

The size of the organization doesn’t matter. Both for large and smaller organizations you need to adopt standard cloud practices from the vendors and automate the delivery to make things repeatable.

Desire to grow

David: Small organizations don't want to remain small all the time; they actually want to grow. Growth starts with a mindset. By applying the Reference Architecture, even though you don't apply every single point in a one-person or two-person shop, it helps me, it positions me, and it gives me the frame of reference, the thinking, to enable growth.

It grows organically, so you don't end up with the legacy baggage that most of the large companies have. And small companies may get acquired, but at least they have good discipline, or they may acquire others as they grow. The application of the IT4IT Reference Architecture is not just for large companies; it's also for small companies, and I'm saying that as a small-business owner myself.

Akershoek: Can I add to that? If you're starting out deploying to the cloud, maybe the best way is to start with automation first, or at least design for automation. If you have a few thousand servers running in the cloud and you didn't start with that concept, then you already have legacy after a few years running in the cloud. So you should think about automation from the start -- not with your legacy, of course, but if you're now moving to the cloud, design and build that in immediately.

Fulton: On this point, if you were with us yesterday, you might have participated in a maturity model conversation. If you were here this morning for Ryan's plenary speech, he referenced an emergence model. We've just started work within the forum on this topic. Potentially one of the directions we're heading is to figure out this very issue: which parts of the reference architecture apply at which size and stage in a company's growth.

As I think I mentioned earlier, the entire reference architecture applies from day one for companies of any size; it's just a question of whether it's explicit or implicit.

If it's implicit, it's in the head of the founder. You're still doing the elements, or you can be still doing the elements, of the reference architecture in your mind and your thought process, but there are pieces you need to make explicit even when you are, as Charlie likes to say, two people in a garage.

On the automation piece, the key thing that has been happening throughout our industry related to automation, at least from my perspective, is that we've been automating within functional components. What the IT4IT Reference Architecture and its vision of value streams allow us to do is rethink automation along the lines of value streams, across functional components. That's where it starts to really add considerable value, especially when we can start to put together interoperability between tooling. That's where we're going to see automation take us to that next level as IT organizations.

Gardner: As IT4IT matures and becomes adopted and serves both consumers and providers of services, it seems to me that there will be a similar track with digital business in how you run your business, which is going to be more of a brokering activity at the business level -- a business is really a constituency of different providers across supply chains and, increasingly, across service providers.

Is there a dual track for IT4IT on the IT side and for business management of services through a portal or a dashboard, something that your business analysts and on up would be involved with? Should we let them happen separately? How can we make them more aligned, and even highly integrated and synergistic?

Best practices

Geneste: We have best practices in IT4IT that the businesses themselves can replicate and use for themselves. I suppose certain companies do that a little bit today; the Ubers and the Airbnbs, with this disintermediation, connect with private individuals a lot of the time, and they effectively have some of these service-oriented concepts today, even though they don't use IT4IT.

Just as much, we see cases today where businesses, for their help desks or for their request management, turn to the likes of HPE for service-management software to help them with their business help desk. We're likely to see those best practices applied in terms of the specification of individual conceptual services, service catalogs, or subscription mechanisms. You're right; the concepts could very easily apply to businesses. As to how that would turn out, I would need to do a little bit more thinking, but from a concepts standpoint, it truly should be useful.

Desiderio: We're trying to move ourselves up the stack to help the business with the services they're providing, so it's very relevant as we look at IT4IT and how we manage the IT services. It's also those business services; it's concurrent, and it's about evolving and training and making the business aware of where we're trying to go and how they can leverage that in the services they provide outward.

When you look at adopting this, even when you go back down to your IT in your organization where you have your different typical organizational teams, there's a challenge for each IT team to look at the services they're providing and how they start looking at what they do in terms of services, instead of just the functions.

That goes all the way up the stack, including the business, the business services, and IT's job. When we start talking about transformation, we must be aligned with the business so we understand their business processes and the services that they're trying to deliver, and then how we are truly that business enabler.

Akershoek: I interpret your question as being about shadow IT -- that there is no shadow IT. Some IT management activity is performed by the business, and, as you mentioned, the business needs to apply IT4IT practices as well. As soon as IT activities are done by the business -- say, they select and manage their own software-as-a-service (SaaS) application -- they need to perform the IT4IT-related activities themselves. They're even starting to configure SaaS services themselves. The business can do the configuration, and they might even provide the end-user support. In these cases, too, these management activities fit in the IT4IT Reference Architecture model.

Gardner: Dwight, we have a business scorecard, we have an IT scorecard, why shouldn’t they be the same scorecard?

David: I'm always reminded that IT is in place to help the business, right? The business is the function, and IT should be the visible enabler of business success. I would classify that as catching up to business expectations. Could some of the principles that we apply in IT be used for the business? Yes, they could be, but I see it more the other way around. If you look at the whole value chain, it's an approach that came from the business perspective and is being applied to IT. I still see it as business-driven, but IT is becoming more seamless in enabling the business to achieve its particular goals.

Application of IT

Fulton: The whole concept of digital business is actually a complete misnomer. I hate it; I think it’s wrong. It’s all about the application of information technology. In the context of what we typically talk about with IT4IT, we're talking about the application of information technology to the management of the IT department.

We also talk about the application of information technology to the transformation of business processes. Most of the time, that happens inside companies, and we're using the principles of IT4IT to do that. When we talk about digital business, usually we're talking about the application of information technology into the transformation of business models of companies. Again, it’s still all about applying information technology to make the company work in a different way. For me, the IT4IT principles, the Reference Architecture, the value streams, will still hold for all of that.

Gardner: We have time for one or two questions from our audience ...

Speaker: A comment was made that you can start with Detect to Correct as an entry point into the value chain, and then I also heard that you don’t have to implement all the functional components. Does that imply that you can do just some of the value streams and not all, and people can kind of pick and choose what they think will help their organization the most?

David: At a certain point, it can be Detect to Correct, but as Sue mentioned earlier, it's about where in your business the pain point is. Evaluate the entire value chain, because all of the value streams map to your business activity; identify exactly where one of your main pain points is, and start there.

Certainly, if you go to Detect to Correct and maybe your shop doesn't have a problem-management type of practice, there are certainly options that you can leave out, if that's not a particular pain point for you. Again, the size of the company and level of maturity will determine where you actually start and what you use. But what we do have in the Reference Architecture will help a company of any size, across the breadth of that particular organization, to use and apply the architecture.

Akershoek: My opinion is that you don't start with a specific value stream, because then you focus too much on a single value stream. You still look at the overall picture first. So, even if you can optimize Detect to Correct, it doesn't make sense if you don't have Requirement to Deploy very well organized, or you don't even have your service portfolio management in order.

In that sense, you shouldn't try to optimize a value stream by itself. Most organizations have something in place in all value streams. If you don't have any capability in Strategy to Portfolio, or have a very immature one, you probably should start with Strategy to Portfolio: defining what services we start to offer and what investments are needed.

But of course, most often you are not in a green-field situation. So you hope that you have some portfolio management capability in place. If not, maybe you need to start there, because otherwise, you can't link your CIs to services; you have no concept of what a service is. So you look at the entire value chain and you select the things that you need to mature first.

Train your workforce

Geneste: One suggestion that we're testing at the moment with global clients is starting with the PaaS -- the platform-as-a-service -- piece. Train your IT workforce first. They need to understand all these concepts before they move it to the business. IT will source its own services: my development service, my test service, and so on.

Once you have piloted that, move on to everything that you develop among those digital solutions, which typically are newer, are based on these concepts, and have tools that are easier to make work along those lines. Then, progressively -- because these will not work in isolation -- they will need to work with some of your legacy, some of the data you have in your existing data sources, and so on. You can bring those on, designing them as services with standardized APIs, progressively bring in more and more business services in that fashion, and try to take over the IT legacy progressively like this.

Gardner: I am afraid we will have to leave it there. We’ve been talking about the business benefits of transforming IT organizations into agents of change for businesses.

And we’ve heard how The Open Group IT4IT initiative, a new reference architecture for managing IT as a business, grew out of a need at major IT vendors themselves to make their IT departments more responsive, more agile.

I'd like to thank our panelists, Michael Fulton, Principal Architect at CC&C Solutions; Philippe Geneste, a Partner at Accenture; Sue Desiderio, a Director at PricewaterhouseCoopers; Dwight David, Enterprise Architect at HPE; and Rob Akershoek, Solution Architect IT4IT at Shell IT International.

Also, a big thank you to The Open Group for sponsoring this discussion. And lastly, a big thank you to our audience for joining us.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout these Enterprise IT Thought Leadership panel discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: The Open Group.

Transcript of a discussion on the business benefits of transforming IT organizations into agents of change for businesses. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Thursday, June 09, 2016

Alation Centralizes Enterprise Data Knowledge by Employing Machine Learning and Crowdsourcing

Transcript of a discussion on how Alation makes data actionable by keeping it up-to-date and accessible using innovative means.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation -- and how it’s making an impact on people’s lives.

Our next big-data case study discussion focuses on the Tower of Babel problem for disparate data, and explores how Alation manages multiple data types by employing machine learning and crowdsourcing.

We'll explore how Alation makes data more actionable via such innovative means as combining human experts and technology systems.

To learn more about how enterprises and small companies alike can access more data for better analytics, please join me in welcoming Stephanie McReynolds, Vice-President of Marketing at Alation in Redwood City, California. Welcome.
Stephanie McReynolds: Thank you, Dana. Glad to be here.

Gardner: I've heard of crowdsourcing for many things, and machine learning is more-and-more prominent with big-data activities, but I haven't necessarily seen them together. How did that come about? How do you, and why do you need to, employ both machine learning and experts in crowdsourcing?

McReynolds: Traditionally, we've looked at data as a technology problem. At least over the last 5-10 years, we’ve been pretty focused on new systems like Hadoop for storing and processing larger volumes of data at a lower cost than databases could traditionally support. But what we’ve overlooked in the focus on technology is the real challenge of how to help organizations use the data that they have to make decisions. If you look at what happens when organizations go to apply data, there's often a gap between the data we have available and what decision-makers are actually using to make their decisions.

There was a study that came out within the last couple of years showing that about 56 percent of managers have data available to them, but they're not using it. So there's a human gap there. Data is available, but managers aren't successfully applying data to business decisions, and that's where real return on investment (ROI) always comes from. Storing the data is just an insurance policy for future use.

The concept of crowdsourcing data, or tapping into experts around the data, gives us an opportunity to bring humans into the equation of establishing trust in data. Machine-learning techniques can be used to find patterns and clean the data. But to really trust data as a foundation for decision-making, human experts are needed to add business context and show how data can be used and applied to solving real business problems.

Gardner: Usually, when you're employing people like that, it can be expensive and doesn't scale very well. How do you manage the fit-for-purpose approach to crowdsourcing where you're doing a service for them in terms of getting the information that they need and you want to evaluate that sort of thing? How do you balance that?

Using human experts

McReynolds: The term "crowdsourcing" can be interpreted in many ways. The approach that we’ve taken at Alation is that machine learning actually provides a foundation for tapping into human experts.

We go out and look at all of the log data in an organization -- in particular, what queries are being used to access data in databases or Hadoop file structures. That creates a foundation of knowledge, so the machine can learn to identify what data would be useful to catalog or to enrich with human experts in the organization. That's essentially a way to prioritize how to tap into the number of humans you have available to help create context around that data.

That’s a great way to partner with machines, to use humans for what they're good for, which is establishing a lot of context and business perspective, and use machines for what they're good for, which is cataloging the raw bits and bytes and showing folks where to add value.
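[Editor's note: A toy Python version of the prioritization idea McReynolds describes. A production system would parse SQL properly and weight by user and recency; the regex and the sample log here are stand-ins.]

# Mine query logs for table usage, then rank tables so human experts
# enrich the most-used ones first.
import re
from collections import Counter

def rank_tables_by_usage(query_log):
    usage = Counter()
    for query in query_log:
        for table in re.findall(r"(?:from|join)\s+([\w.]+)", query, re.IGNORECASE):
            usage[table] += 1
    return usage.most_common()

sample_log = [
    "SELECT * FROM sales.orders o JOIN sales.customers c ON o.cid = c.id",
    "SELECT count(*) FROM sales.orders",
    "SELECT * FROM hr.payroll",
]
for table, hits in rank_tables_by_usage(sample_log):
    print(table, hits)   # human experts annotate the top of this list first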

Gardner: What are some of the business trends that are driving your customers to seek you out to accomplish this? What's happening in their environments that requires this unique approach of the best of machine and crowdsourcing and experts?

McReynolds: There are two broader industry trends that have converged and created a space for a company like Alation. The first is just the immense volume and variety of data that we have in our organizations. If we weren't adding additional data-storage systems to our enterprises, there wouldn't be good groundwork laid for Alation. But perhaps more interesting is a second trend, around self-service business intelligence (BI).

So as we're increasing the number of systems that we're using to store and access data, we're also putting more weight on typical business users to find value in that data and trying to make that as self-service a process as possible. That’s created this perfect storm for a system like Alation which helps catalog all the data in the organization and make it more accessible for humans to interpret in accurate ways.

Gardner: And we often hear in the big data space the need to scale up to massive amounts, but it appears that Alation is able to scale down. You can apply these benefits to quite small companies. How does that work when you're able to help a very small organization with some typical use cases in that size organization?

McReynolds: Even smaller organizations, or younger organizations, are beginning to drive their business based on data. Take an organization like Square, which is a great brand name in the financial services industry, but it’s not a huge organization in and of itself, or Inflection or Invoice2go, which are also Alation customers.

We have many customers that have data analyst teams that maybe start with five people or 20 people. We also have customers like eBay that have closer to a thousand analysts on staff. What Alation provides to both of those very different sizes of organizations is a centralized place, where all of the information around their data is stored and made accessible.

Even if you're only collaborating with three to five analysts, you need that ability to share your queries, to communicate on which queries addressed which business problems, which tables from your HPE Vertica database were appropriate for that, and maybe what Hive tables on your Hadoop implementation you could easily join to those Vertica tables. That type of conversation is just as relevant in a 5-person analytics team as it is in a 1000-person analytics team.

Gardner: Stephanie, if I understand it correctly, you have a fairly horizontal capability that could apply to almost any company and almost any industry. Is that fair, or is there more specialization or customization that you apply to make it more valuable, given the type of company or type of industry?

Generalized technology

McReynolds: The technology itself is a generalized technology. Our founders come from backgrounds at Google and Apple, companies that have developed very generalized computing platforms to address big problems. So the way the technology is structured is general.

The organizations that are going to get the most value out of an Alation implementation are those that are data-driven organizations that have made a strategic investment to use analytics to make business decisions and incorporate that in the strategic vision for the company.

So even if we're working with very small organizations, they are organizations that make data and the analysis of data a priority. Today, it’s not every organization out there. Not every mom-and-pop shop is going to have an Alation instance in their IT organization.

Gardner: Fair enough. Given those organizations that are data-driven, have a real benefit to gain by doing this well, they also, as I understand it, want to get as much data involved as possible, regardless of its repository, its type, the silo, the platform, and so forth. What is it that you've had to do to be able to satisfy that need for disparity and variety across these data types? What was the challenge for being able to get to all the types of data that you can then apply your value to?
McReynolds: At Alation, we see the variety of data as a huge asset, rather than a challenge. If you're going to segment the customers in your organization, every event and every interaction with those customers becomes relevant to understanding who that individual is and how you might be able to personalize offerings, marketing campaigns, or product development to those individuals.

That does put some burden on our organization, as a technology organization, to be able to connect to lots of different types of databases, file structures, and places where data sits in an organization.

So we focus on being able to crawl those source systems, whether they're places where data is stored or whether they're BI applications that use that data to execute queries. A third important data source for us that may be a bit hidden in some organizations is all the human information that’s created, the metadata that’s often stored in Wiki pages, business glossaries, or other documents that describe the data that’s being stored in various locations.

We actually crawl all of those sources and provide an easy way for individuals to use that information on data within their daily interactions. Typically, our customers are analysts who are writing SQL queries. All of that context about how to use the data is surfaced to them automatically by Alation within their query-writing interface so that they can save anywhere from 20 percent to 50 percent of the time it takes them to write a new query during their day-to-day jobs.

Gardner: How is your solution architected? Do you take advantage of cloud when appropriate? Are you mostly on-premises, using your own data centers, some combination, and where might that head to in the future?

Agnostic system

McReynolds: We're a young company. We were founded about three years ago and we designed the system to be agnostic as to where you want to run Alation. We have customers who are running Alation in concert with Redshift in the public cloud. We have customers that are financial services organizations that have a lot of personally identifiable information (PII) data and privacy and security concerns, and they are typically running an on-premise Alation instance.

We architected the system to be able to operate in different environments and have an ability to catalog data that is both in the cloud and on-premise at the same time.

The way that we do that from an architectural perspective is that we don’t replicate or store data within Alation systems. We use metadata to point to the location of that data. For any analyst who's going to run a query from our recommendations, that query is getting pushed down to the source systems to run on-premise or on the cloud, wherever that data is stored.
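[Editor's note: A small Python sketch of the pushdown pattern McReynolds describes. The catalog entries and the connect factory are invented; the point is that the catalog holds only pointers, and the query executes on the source system.]

# The catalog stores metadata about where data lives; a recommended query
# is pushed down to the source system, never run inside the catalog itself.
CATALOG = {
    "orders": {"dsn": "vertica://warehouse/prod", "table": "sales.orders"},
    "clicks": {"dsn": "hive://hadoop/logs", "table": "web.clicks"},
}

def run_recommended_query(dataset, sql_template, connect):
    """Look up where the data lives, then execute the query there."""
    location = CATALOG[dataset]          # metadata only -- no rows stored here
    conn = connect(location["dsn"])      # caller supplies a driver factory
    return conn.execute(sql_template.format(table=location["table"]))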

Gardner: And how did HPE Vertica come to play in that architecture? Did it play a role in the ability to be agnostic as you describe it?

McReynolds: We use HPE Vertica in one portion of our product that allows us to provide essentially BI on the BI that's happening. Vertica is a fundamental component of our reporting capability, called Alation Forensics, which is used by IT teams to find out how queries are actually being run on data-source systems, which backend database tables are being hit most often, and what that says about the organization and those physical systems.

It gives the IT department insight. Day-to-day, Alation is typically more of a business person’s tool for interacting with data.

Gardner: We've heard from HPE that they expect that kind of IT-department-specific, ops-efficiency use case to grow. Do you have any sense of what some of the benefits have been for IT organizations that get that sort of analysis? What's the ROI?

McReynolds: The benefits of an approach like Alation include getting insight into the behaviors of individuals in the organization. What we’ve seen at some of our larger customers is that they may have dedicated themselves to a data-governance program where they want to document every database and every table in their system, hundreds of millions of data elements.

Using the Alation system, they were able to identify within days the rank-order priority list of what they actually need to document, versus what they thought they had to document. The cost savings comes from taking a very data-driven realistic look at which projects are going to produce value to a majority of the business audience, and which projects maybe we could hold off on or spend our resources more wisely.

One team that we were working with found that about 80 percent of their tables hadn't been used by more than one person in the last two years. In that case, if only one or two people are using those systems, you don't really need to document those systems. That individual or those two individuals probably know what's there. Spend your time documenting the 10 percent of the system that everybody's using and that everyone is going to receive value from.

Where to go next

Gardner: Before we close out, any sense of where Alation could go next? Is there another use case or application for this combination of crowdsourcing and machine learning, tapping into all the disparate data that you can and information including the human and tribal knowledge? Where might you go next in terms of where this is applicable and useful?

McReynolds: If you look at what Alation is doing, it's very similar to what Google did for the Internet in terms of being available to catalog all of the webpages that were available to individuals and service them in meaningful ways. That's a huge vision for Alation, and we're just in the early part of that journey to be honest. We'll continue to move in that direction of being able to catalog data for an enterprise and make easily searchable, findable, and usable all of the information that is stored in that organization.

Gardner: Well, very good. I'm afraid we will have to leave it there. We've been examining how Alation maps across disparate data while employing machine learning and crowdsourcing to help centralize and identify data knowledge. And we've learned how Alation makes data actionable by keeping it up-to-date and accessible using innovative means.
So a big thank you to our guest, Stephanie McReynolds, Vice-President of Marketing at Alation in Redwood City, California. Thank you so much, Stephanie.

McReynolds: Thank you. It was a pleasure to be here.

Gardner: And a big thank you as well to our audience for joining us for this big data innovation case study discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a sponsored discussion on how Alation makes data actionable by keeping it up-to-date and accessible using innovative means. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Friday, June 03, 2016

Catbird CTO on Why New Security Models are Essential for Highly Virtualized Data Centers

Transcript of a BriefingsDirect discussion on how increased virtualization across data centers translates into the need for new approaches to security, compliance, and governance.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer interview series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT transformation and innovation -- and how that's making an impact on people's lives.

Our next hybrid-computing case study discussion explores how increased virtualization across data centers translates into the need for new approaches to security, compliance, and governance. Just as next-generation data centers and private clouds are gaining traction, security threats are on the rise -- and attack techniques are becoming more sophisticated.

Are yesterday’s perimeter-based security infrastructure methods up to the task? Or are new approaches needed to gain policy-based control over all virtual assets at all times?

Here to explore the future of security for virtual workloads is Holland Barry, CTO at Catbird in Scotts Valley, California. Welcome, Holland.

Holland Barry: Thank you. Good to be here.
Gardner: Tell us why it’s a different picture nowadays when we look at data centers and private clouds. Oftentimes, people think similarly about security -- just wrap a firewall around it and you're okay. Why isn’t that the case? What’s new?

Barry: As we've introduced many layers of abstraction into the data center, it has become an issue trying to adapt physical appliances that don't move around as fluidly as the workloads they're protecting. And as people virtualize more, and we move more to this notion of a software-defined data center (SDDC), it has proven a challenge to keep up, and we know that that layer on the perimeter is probably not sufficient anymore.

Gardner: It also strikes me that it’s a moving target, virtual workloads come and go. You want elasticity. You want to be able to have fit-for-purpose infrastructure, but that's also a challenge when you can’t keep track of things and therefore secure them. 

Barry: That's absolutely right. The transient nature of the workloads themselves makes any type of rigid enforcement from a single device pretty tough to deal with. So you need something that was built to be fluid alongside those dynamic workloads.

Gardner: And I suppose, too, that enterprise architects that are putting more virtualization together across the data center, the SDDC, aren’t always culturally aligned with the security folks. So you have more than just a technology issue here. Tell us what Catbird does that goes beyond just the technology, and perhaps works toward a cultural and organizational benefit?

Greater skill set

Barry: Even just from an interface standpoint or trying to create a tool that can cater to those different administrative silos, you have people who have virtualization expertise, compute expertise, and then different security practice expertise. There are many slim lanes within that security category, and the next generation set of workloads in the hybrid IT environment is going to demand more of a skill set that can span all those domains. 

Gardner: We talk a lot about DevOps and SecOps combining. There's also this need for automation and orchestration. So policy-based seems to be really the only option to keep up with the speed on security. 

Barry: That’s exactly right. There has to be an application-centric approach to how you're applying security to your workloads. Ideally that would be something that could be templatized or defined up front. So as new workloads present themselves in the network, there's already a predetermined way that they're going to be secured and that security will take place right up against the edge of that workload.
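[Editor's note: A minimal Python sketch of the "templatized," application-centric policy Barry describes, with invented template names and rule fields -- this is not Catbird's actual model. A new workload is matched to a predefined template and secured at its edge the moment it appears.]

# Security policy is defined up front as templates; any workload that
# presents itself on the network is matched to one and enforced immediately.
POLICY_TEMPLATES = {
    "web-tier": {"allow_ports": [80, 443], "monitor": True, "quarantine_on_violation": True},
    "db-tier":  {"allow_ports": [5433],    "monitor": True, "quarantine_on_violation": True},
    "default":  {"allow_ports": [],        "monitor": True, "quarantine_on_violation": False},
}

def on_new_workload(workload, enforcement_point):
    """Called when a new VM or vNIC shows up on the hypervisor."""
    template = POLICY_TEMPLATES.get(workload.role, POLICY_TEMPLATES["default"])
    enforcement_point.apply(workload.vnic_id, template)  # enforce at the workload's edge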

Gardner: Holland, tell us about Catbird, what you do, how you're deployed, and how you go about solving some of these challenges.

Barry: Catbird was born and raised in virtualized environments. We've been around for a number of years. It was this notion of bringing the perimeter and the control landscape closer to the workload, and that’s via hypervisor integration and also via the virtual data-path integration. So it's having a couple of different vantage points from within the fabric and applying security with a purpose-built solution that can span multiple platforms.

So that hybrid IT environment, which is becoming a reality, may have a little bit of OpenStack, it may have a little bit of VMware. Having that single point of policy definition and enforcement is going to be critical to people adopting and really taking the next leap to put a layer of defense in their data center.

Gardner: How are you deployed, you are a software appliance yourself, virtualized software?

Barry: Exactly right. Our solutions are comprised of two components, and it’s a very basic hub-and-spoke architecture. We have a policy enforcement point, a virtual machine (VM) appliance that installs out on each hypervisor, and we have a management node that we call the Control Center. That’s another VM, and those two components talk together in a secure manner. 

Gardner: What's a typical scenario? Where, in this type of east-west-traffic virtualized environment, does security work better, and how does it protect? Are there some examples that would demonstrate where the perimeter approach would break down but your model got the task done?

Doing enforcement

Barry: Anytime you need the granularity of not only visibility but enforcement -- I'm going to get a little technical here -- down to the UUID of the vNIC, the smallest unit of measure as it relates to a workload, that’s really where we shine, because that’s where we do our enforcement.

Gardner: Okay. How about partnerships? Obviously you're working in an environment where there are a lot of different technologies, lots of moving parts. What’s going on with you and HPE in terms of deployment, working with private cloud, operating systems, and then perhaps even moving toward modeling and some of the HPE ArcSight technology?

Barry: We have a number of different integration points inside HPE’s portfolio. We're a Helion-ready certified partner. We just announced our support for the 2.0 Helion OpenStack release.

We're doing a lot of work with the ArcSight team in terms of getting very detailed event feeds and visibility into the virtualized workloads.

And we just announced some work that we are doing with HPE’s HPN team around their software-defined networking (SDN) VAN Controller as well, extending Catbird’s east-west visibility into the physical domain, leveraging the placement of the SDN controller and its command over the switches. So it’s pretty exciting work there.

Gardner: Let’s dig into that a bit: the SDN advances that are going on and how they're changing the way people think about deployment and management of infrastructure and data centers. Doesn’t this give you a significant boost in the way that you can engage with security, and intercept and stop issues before they propagate? What is it about SDN that is good for security?

Barry: As the edges of what have traditionally been rigid network boundaries become fluid as well, knowing the state of the network and the state of the workload is going to be critical to applying those traditional security controls. So we're really trying to tie all this together -- not only with our integration with Helion, but also by utilizing the knowledge that the SDN Controller has of the data path. We can surface indications of compromise and maybe get you to a problem a little bit quicker than traditional methods.
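
A sketch of that idea: compare the flows actually observed on the data path against the set of flows any policy permits, and surface the rest as possible indications of compromise. The data and function names are invented for illustration; this is not an actual controller API.

    # Flows any policy allows, as (source tier, destination tier, port).
    ALLOWED = {("web", "db", 5432), ("web", "cache", 6379)}

    def surface_indicators(observed_flows):
        """Yield observed east-west flows that no policy permits."""
        for src, dst, port in observed_flows:
            if (src, dst, port) not in ALLOWED:
                yield {"src": src, "dst": dst, "port": port,
                       "reason": "no policy permits this east-west flow"}

    # SSH from a web node to a database node would stand out immediately.
    observed = [("web", "db", 5432), ("web", "db", 22)]
    for alert in surface_indicators(observed):
        print(alert)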

Gardner: I always like to try to show and not just tell. Do you have any examples of organizations that are doing this, what it has done for them, and why it’s a path to even greater future benefits as they further virtualize and go to even larger hybrid environments?

Barry: Absolutely. I can’t name them by name, but one of the largest US telcos is one of our customers. They came to us to solve the problem of consistent policy definition and enforcement across hybrid platforms -- in their case, across VMware and OpenStack workloads.

That's not only for the application of the security controls and the visibility of the traffic, but also for evidence and assurance of compliance -- being able to map back to regulatory frameworks and things like that.

Agentless fashion

Barry: There are a couple of different use cases in there, but it’s really that notion that I can do it in an agentless fashion, and I think that’s an important thing to differentiate and point out about our solution. You don’t have to install an agent within the workload. We don’t require a presence inside the OS.

We're doing it just outside of the workload, at the hypervisor level. It’s key that we have specific, tailored integrations to the different hypervisor platforms, so we can abstract away the complexity of applying the security controls: you just have a single pane of glass. You define the security policy, and it doesn’t matter which platform you're on; it’s going to be applied in that agentless fashion.
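
Here is a minimal sketch of that single-pane-of-glass abstraction: one policy definition, with per-platform drivers doing the agentless enforcement at the hypervisor. The driver interface and names are assumptions made for illustration, not Catbird’s actual integration code.

    from abc import ABC, abstractmethod

    class HypervisorDriver(ABC):
        """A tailored integration for one platform; callers never see it."""
        @abstractmethod
        def enforce(self, workload_id: str, policy: dict) -> None: ...

    class VMwareDriver(HypervisorDriver):
        def enforce(self, workload_id, policy):
            print(f"[vmware] applying {policy} at the vNIC of {workload_id}")

    class OpenStackDriver(HypervisorDriver):
        def enforce(self, workload_id, policy):
            print(f"[openstack] applying {policy} at the port of {workload_id}")

    DRIVERS = {"vmware": VMwareDriver(), "openstack": OpenStackDriver()}

    def apply_policy(platform: str, workload_id: str, policy: dict) -> None:
        # One definition; agentless enforcement on whichever platform hosts it.
        DRIVERS[platform].enforce(workload_id, policy)

    apply_policy("vmware", "vm-web-01", {"ingress": "443/tcp"})
    apply_policy("openstack", "vm-web-02", {"ingress": "443/tcp"})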

Gardner: Of course, the march of technology continues, and we're not just dealing with virtualization. We're now talking about containers, micro-services, composable infrastructure. How will your solution, in conjunction with HPE, adapt to that, and is there more of a role as you get closer to the edge, even out into the Internet of Things (IoT), where we're talking about all sorts of more discrete devices really extending the network in all directions?

Barry: As the workload types proliferate and we get fancier about how we virtualize, whether it’s using a container or a virtualization platform, and as the vast number of IoT devices present themselves, we're working closely with the HPE team in lockstep as mass adoption of these technologies happens.

We have plans in place to solve this platform by platform. We believe in taking an approach where we look at each specific problem and ask how we're going to attack it, while keeping the bigger vision: you stay in that same console, and the method by which you apply the security stays the same.

Containers are a great example -- something we know we need to tackle, and something that’s being adopted faster than anything else I’ve seen. That’s a pretty exciting one. But at the end of the day, a container is a way of virtualizing a service or microservices. We're aware of it, and I think our method of applying security controls is going to be the one that wins.

Gardner: Pretty hard to secure a perimeter when there really isn’t a perimeter.

Barry: The perimeter is quickly fading, it seems.

Gardner: OK, we'll have to leave it there. We've been exploring how increased virtualization across data centers translates into the need for new approaches to security, compliance, and governance. And we have seen how policy-based control over all virtual assets provides greater protection and management for next-generation data centers. So a big thank you to our guest, Holland Barry, CTO at Catbird. Thank you, Holland.

Barry: Pleasure to be here. Thank you.

Gardner: And a big thank you to our audience as well for joining us for this Hewlett Packard Enterprise Voice of the Customer interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a BriefingsDirect discussion on how increased virtualization across data centers translates into the need for new approaches to security, compliance, and governance. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.

Thursday, June 02, 2016

Why Business Apps Design Must Better Cater to Consumer Habits to Improve User Experience

Transcript of a discussion on how self-service and consumer habits are having an impact on user experience design for business applications.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: SAP Ariba

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Gardner
Our next technology innovation thought leadership discussion focuses on the new user experience demands for applications, and the impact that self-service and consumer habits are having on the new user experience design.

As more emphasis is placed on user experiences and the application of consumer-like processes in business-to-business (B2B) commerce, a softer side of software seems to be emerging. We'll now explore a new approach to design that emphasizes simple and intuitive process flows.

With that, please join me in welcoming our guest, Michele Sarko, Chief Design Officer at SAP Ariba. Welcome, Michele.

Michele Sarko: Thank you, Dana. Thank you for having me.

Gardner: There seems to be a hand-off between the newer skills of application user-interface design and the older skills that had a harder edge, shaped by technology-centric requirements. Are we seeing a shift in the way that software is designed, from that user-experience perspective, and how different is it from the past?

Sarko: It’s more about understanding the end users first. It’s more about empathy and universal design. What used to happen was that technology was so new that we as designers were challenging it to do things it didn’t do before. Now, technology is the table stakes from which everything is measured, and designers -- and our users, for that matter -- expect it to just work.

Sarko
The differentiator now is to bring the human element into enterprise products, and that’s why there's a shift happening in software. The softer side of this is happening because we're building these products more for the people who actually use them, and not just for the people who buy them.

Gardner: We've heard from some discussions at the SAP Ariba LIVE Conference recently about the need for greater and more rapid adoption and getting people more deeply into business networks and applications. It seems to me that this user experience and that adoption relationship are quite closely aligned.

Sarko: Yes, they absolutely are, because at the end of the day, it’s about people. Whether we're selling consumer software, enterprise software, or any type of business software, if people don't use it or don’t want to use it, you're not going to have adoption. You don’t want it to become “shelfware,” so to speak. You want to make a good business investment, but you also want your end users to be able to use it effectively. That’s where adoption comes into play, and why it’s key to our customers as well as to our own business.

Intuitive approach

Gardner: Another thing we heard was that people don't read the how-to manuals and they don't watch the videos. They simply want to dive in and be able to work and proceed with apps. There needs to be an intuitive approach to it.

I'm old enough to remember that when new software arrived in the office, we would all get a week of training, sitting there for hours. But there's no more training these days. So how do people learn to use new software?

Sarko: First and foremost, we need to build it intuitively, so that you naturally apply the patterns that you already have to that software. But we should also come at it in a different way, where training is in context, in the product.

We're doing new things with overlays that take users through a tour, or step them through a new feature, to give them just the quick highlights of where things are. You see this sort of thing in mobile apps all the time after you install an update. In addition to that, we build in-context questions and answers right there at the point of need, where the user is likely to encounter something new or initially unknown in the product.

So it’s just-in-time and in little snippets. But underpinning all of it, the experience has to be very, very simple, so that you don't have to go through this overarching hurdle to understand it.
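
A small sketch of that just-in-time pattern: each tour step anchors a short snippet to a UI element and shows it only once, at the point of need. The anchors and step structure here are hypothetical, purely to illustrate the idea.

    # Hypothetical in-product tour: one short snippet per UI element.
    TOUR = [
        {"anchor": "#requisition-tab",
         "tip": "Your open requests now live here."},
        {"anchor": "#policy-badge",
         "tip": "This badge flags items that need approval."},
    ]

    seen = set()

    def next_tip(current_anchor):
        """Return the snippet for the element the user just reached, once."""
        for step in TOUR:
            if step["anchor"] == current_anchor and current_anchor not in seen:
                seen.add(current_anchor)
                return step["tip"]
        return None

    print(next_tip("#requisition-tab"))  # shown on first encounter
    print(next_tip("#requisition-tab"))  # already seen -> None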

Gardner: I suppose, too, that there's an enterprise architectural change afoot. Before, when we had packaged software, the cycles for changing that would be sometimes years, if not more. Nowadays, when we go to cloud and software-as-a-service (SaaS) applications, where there’s multitenancy, and where the developer, the supplier of the software, can change things very rapidly, a whole new opportunity opens up. How does this new cloud architecture model benefit the user experience, as compared to the architecture of packaged software?

Sarko: The software and the capabilities that we're using now are definitely a step forward. With SAP Ariba, we’ve been able to decouple the application from the presentation layer in such a way that we can change the user experience more rapidly, do A/B testing, do a lot of in-product metrics and tracking, and still keep all of the deep underpinnings and the safety and security right there.

So we don't have to spend all of our time building it deep into the underpinnings. We can keep those two things separate, which lets us iterate a lot faster. That's enabling us to move quicker and to understand users’ needs.
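
As a minimal sketch of why that decoupling enables A/B testing, the Python below renders the same untouched backend data with two competing UI variants, assigned deterministically per user. This illustrates the general technique, not SAP Ariba's implementation.

    import hashlib

    def variant_for(user_id: str, experiment: str, variants=("A", "B")) -> str:
        """Deterministic split: the same user always sees the same variant."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    def render(order: dict, variant: str) -> str:
        # Only the presentation changes; the backend response is identical.
        if variant == "A":
            return f"Order {order['id']}: {order['total']} USD"
        return f"[NEW] {order['id']} -- total {order['total']} USD"

    order = {"id": "PO-1001", "total": 249.00}
    print(render(order, variant_for("user-7", "checkout-redesign")))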

Gardner: The drive to include mobile devices with any software and services now plays a larger role. We saw some really interesting demos at the SAP Ariba LIVE conference around the ability to discover and onboard a vendor using a mobile device, in this case a smartphone. How is the drive for mobile-first impacting this?

Sarko: Well, the mobile-first mindset is something that we always employ now. This is the way that we should, and do, design a lot of things, because it imposes a different set of constraints, form factors, and simplicity. On mobile, you only have so much real estate with which to work. Approaching it from that mindset allows us to take the learnings from mobile and bring them back to all the other device options that we have.

Design philosophy

Gardner: Tell me a little bit about your philosophy about design. When you look at software that maybe has years of a legacy, the logic has been there for quite some time, but you want to get this early adoption, rapid adoption. You want a mobile-first mentality. How do you approach this from a design philosophy point of view?

Sarko: It has to be somewhat pragmatic, because you can't move the behemoth of a company to something different all at once. The way that I approach it, and that we’re looking at it within SAP Ariba, is to consider new innovations and ways to improve, and start there, with a mobile-first mindset, or really by just redesigning aspects of the product.

At the same time, pick the most important aspects or areas of your current product suite and reinvent those. It may take a little more time, or it may be on a different technology stack. It may be inconsistent for a while, but the improvements are going to be there and will outweigh that inconsistency. And then, as we go, over time, we'll make that process change overall. But you can’t do it all at once. You have to be very pragmatic and judicious about where you start.

Gardner: Of course, as we mentioned earlier, you can adjust as you go. You have more opportunity to fix things or adjust the apps and design.

You also said something interesting at SAP Ariba LIVE, that designers should, “Know your users better than they know themselves.” First, what did you mean by that, in more detail? And second, who are the users of SAP Ariba applications and services, and how are they different from users of the past?

Sarko: What I meant by “know the users better than they know themselves” is that we're observing them, we're listening to them, we're drawing patterns across them. The user may know who they are, but they often feel like they may be alone. What we end up seeing is that as a user, you’re never alone. We see countless other users facing the same challenges as you, with the same needs and expectations.

You may just be processing invoices all day, or you may be the IT professional who now has to order all of the equipment for your organization. We start to see you as a person and the issues that you face, but then we figure out how to help not only you in your specific need; we also learn from others about new features and requirements that you didn't even think you might need.

So we're looking in aggregate to find solutions that would fit many and give them to all, rather than solving things one by one. That's what I mean by, "know your users better than they know themselves."

And then who are the users? There are different personas. Historically, SAP Ariba focused mostly on the customer -- the folks who made the purchasing decisions, who owned the business decisions. I'm trying to help the company understand that there is a shift, that we also have to pay equal attention to the end users, the people who are in the product using it every day. As a company, SAP Ariba has to focus on the various roles and satisfy both sets of needs in order to be successful.

Gardner: It must be difficult to create software for multiple roles. You mentioned the importance of being role-based in this design process. Is it that difficult to create software that has a common underpinning in terms of logic, but then effectively caters to these different roles?

Design patterns

Sarko: The way that we approach it is through building blocks and systems. We have design patterns, which are building blocks, and these little elements are then combined to build the experience.

Where the roles come in is what gets shown or not. Different modules may be exposed with those building blocks to one group of people, but not to the other. Based on roles and permissions, we can hide and show what’s needed. That’s how we approach the role-based design and make it right for you.
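
A small sketch of how that role-based exposure might work: each role maps to the set of building-block modules it is allowed to see, and a user's view is the union across their roles. The role and module names below are invented for illustration.

    # Hypothetical role-to-module mapping; modules are the building blocks.
    ROLE_MODULES = {
        "requester": {"catalog", "cart", "order_status"},
        "buyer": {"catalog", "cart", "order_status", "approvals", "sourcing"},
        "accounts_payable": {"invoices", "payments", "order_status"},
    }

    def visible_modules(roles):
        """Union of the modules every one of the user's roles exposes."""
        shown = set()
        for role in roles:
            shown |= ROLE_MODULES.get(role, set())
        return sorted(shown)

    print(visible_modules(["requester"]))
    print(visible_modules(["requester", "accounts_payable"]))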

Gardner: And I suppose, too, that one of the goals for SAP Ariba is not just to have the purchasing people do the purchasing, but to enable more people through self-service. Tell me a bit more about self-service and this idea that people are shopping and not necessarily procuring.

Sarko: Yes, because this is really the shift that we're trying to communicate and design for. We come to work every day with our biases from our personal lives, and it really shouldn't be all that different when talking about procurement. I mentioned earlier that this is not really about procurement for end users; it’s about shopping, because that's what you're doing when you buy things, whether you’re buying them for work or for your personal life.

The terminology has to be consistent with what we know from our daily lives and not technical jargon. Bringing those things to bear and making that experience much more consumer-like will enable our customers to be more successful.

Gardner: We've already seen some fruits of these labors and ideas. We saw an example of Guided Buying, a really fresh, clean interface, very similar to a business-to-consumer (B2C) shopping experience. Tell me a little bit about some of the examples we have seen, and how far along we are toward where you want to go.

Sarko: We're very far down the path of building this out. We've been spending the past six months developing and iterating on ideas, and we'll be able to bring the first release to market relatively soon.

And through the process of exploration and working with customers, there have been all kinds of nuances about policy compliance and understanding what’s allowed and what’s not allowed -- not just for the end user, but for the procurement professional, for the buyer in their specific areas, and for the procurement folks behind the scenes. All of these roles are now thought of as individual players in an orchestra, because they all have to work together. We're actually quite far along, and I'm really excited to see the product come to market pretty soon.

Gardner: Any other ideas about where we go when we start bringing more reactions to what users are doing in the software? We saw instances where people were procuring things, but then the policy issue would pop up -- the declaration of, "That's not within our rules; you can’t do that."

It seems to me that if we take that a step further, we're going to start bringing in more analysis and say, "Well, you're going down this path, but we have information that could help you analyze and make a better decision." Is that something we should expect soon as well?

Better recommendations

Sarko: Yes, absolutely. We're trying to use the intelligence that we have to make better recommendations for the end users. Then, when policy compliance comes in, we're not preventing the end user from completing their task. We're just bringing in the policy person at the other end to handle that approval, so that the users still accomplish what they started out to do.
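
A minimal sketch of that "route, don't block" behavior: a hypothetical checkout that lets the requisition proceed and loops in an approver when a policy threshold is crossed. The threshold and role names are invented for illustration.

    def submit_requisition(item: str, amount: float,
                           policy_limit: float = 500.0) -> dict:
        """Let the task complete; attach an approver instead of blocking."""
        req = {"item": item, "amount": amount, "status": "submitted"}
        if amount > policy_limit:
            # The end user is not stopped; a policy owner handles approval.
            req["status"] = "pending_approval"
            req["approver"] = "category-manager"
        return req

    print(submit_requisition("monitor", 180.00))
    print(submit_requisition("standing desk", 1250.00))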

Gardner: We really are on the cusp of an interesting age, where analysis from deep data access and penetrating business intelligence can be inserted into processes. We're at the crossroads of process and intelligence coming together.

Before we sign off, is there anything else we should expect in terms of user experience, enhancements in business applications, particularly in the procure-to-pay process?

Sarko: This is an ongoing evolutionary process. We learn from the users each day with multiple inputs: talking to them, watching analytics, listening to customer support. The product is only going to get better with the feedback that they give us.

Also, our release cycles have now gone from 12 to 18 months down to three months, or even shorter. We're listening, learning, and reacting much more quickly than we have before. I expect that you'll see many more product changes, and from all of the feedback, we’ll make it better for everyone.

Gardner: Speaking of feedback, I was very impressed with the Feature Voting that you've instituted, allowing people to look at different requirements for the next iteration of the software and letting them vote for their favorites. Could you add a bit more about how that might impact user experience as well?

Sarko: By looking holistically at all the feedback we get, we start to see trends and patterns of the things we're getting a lot of traction on or a lot of interest in. That helps us prioritize what we call a backlog -- the feature list -- so that based on user input, we attack the areas that are most important to users and work that way.
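
To illustrate the mechanics, here is a tiny Python sketch that tallies feature votes and orders the backlog by traction. The feature names and votes are invented data, not real SAP Ariba input.

    from collections import Counter

    # Hypothetical votes collected from users.
    votes = ["guided-buying", "mobile-approvals", "guided-buying",
             "invoice-search", "mobile-approvals", "guided-buying"]

    # Highest-voted features float to the top of the backlog.
    backlog = [feature for feature, _ in Counter(votes).most_common()]
    print(backlog)  # ['guided-buying', 'mobile-approvals', 'invoice-search']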

We listen to the input -- every single piece of it. Also, as you heard, last year we launched Visual Renewal. In the product, when you switch versions of the interface, you see a feedback form that you can fill out. We read every piece of that feedback, looking for trends about how to fix the product and make enhancements based on users. This is an ongoing process that we'll continue: listen, learn, and react.

Gardner: All of which would of course enhance adoption and the speed of adoption, so that’s great.

I'm afraid we'll have to leave it there. We've been discussing how self-service and consumer habits are having an impact on user experience design. I'd like to thank our guest, Michele Sarko, Chief Design Officer at SAP Ariba. Thanks so much, Michele.

Sarko: Thank you. Have a nice day.

Gardner: And a big thank you as well to our audience for joining this SAP Ariba-sponsored business innovation through leadership discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: SAP Ariba.

Transcript of a discussion on how self-service and consumer habits are having an impact on user experience design for business applications. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.
