Monday, April 11, 2016

The UNIX Evolution: A History of Innovation Reaches a 20-Year Milestone

Transcript of a discussion on how UNIX has evolved over its 20-year history, and the role of The Open Group in maintaining and updating the impactful standard.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and your moderator for today’s panel discussion examining UNIX, a journey of innovation.

We're here with a distinguished panel to explore the 20-year history of UNIX, an Open Group standard. Please allow me to introduce our panel: Andrew Josey, Director of Standards at The Open Group; Darrin Johnson, Director of Solaris Engineering at Oracle; Tom Mathews, Distinguished Engineer of Power Systems at IBM; and Jeff Kyle, Director of Mission-Critical Solutions at Hewlett Packard Enterprise.

It's not often that you reach a 20-year anniversary in information technology where the relevance is still so high and the prominence of the technology is so wide. So let me first address a question to Andrew Josey at The Open Group. UNIX has been evolving during probably the most dynamic time in business and technology.

How is it that UNIX remains so prominent, a standard that has clung to its roots, with ongoing compatibility and interoperability? How has it been able to maintain its relevance in such a dynamic world?

Andrew Josey: Thank you, Dana. As you know, UNIX was started at Bell Labs by Ken Thompson and Dennis Ritchie back in 1969. It was a very innovative, very different approach -- an approach that has endured over time. During that time, we've seen a lot of work going on in different standards bodies.

We saw, in the early '80s, the UNIX wars: fractured, divergent versions of the operating system, many of them incompatible with each other, and then the standards bodies bringing them together.

We saw efforts such as IEEE POSIX, and then X/Open. Later, The Open Group was formed to bring it all together, when the different vendors realized the benefits of building a standard platform on which you can innovate.

So, over time, the standards have added more and more common interfaces, raising the bar upon which you can place that innovation. We've seen changes like the mid-'90s shift from 32-bit to 64-bit computing.

At that time, people asked, "How will we do that? Will we all do it the same way?" So the UNIX vendors came to what, at that time, was X/Open. We had an initiative called the Large File Summit, and we agreed on a common way to do it. That was a very smooth transition.

Today, everybody takes it for granted that the UNIX systems are scalable, powerful, and reliable, and this is all built on that 64-bit platform, and multi-processor, and all these capabilities.

That's where we see the standards come in, preserving that enduring, adaptable philosophy, and that’s why the UNIX platform is still relevant today. We're seeing it in today’s virtualization, cloud, and big data, which are also driven by UNIX systems in the back office.

The Open Group involvement

Gardner: So while we're looking at UNIX’s 40-year history, we're focusing on the 20-year anniversary of the single UNIX specification and the ability to certify against that, and that’s what The Open Group has been involved in, right?

Josey: We were given the UNIX trademark by Novell back in, I think it was, 1993, and at that point the major vendors came together to agree on a common specification. At the time, its code name was Spec 1170. There were actually 1,168 interfaces in the spec, but we wanted to round up -- and, apparently, that was also the amount of money spent at the dinner after they completed the spec.

So, we adopted that specification and we have been running certification programs against that.

Gardner: Darrin, with the dynamic nature of our industry now -- with cloud, hybrid cloud, mobile, and a tightening between development and operations -- how is it that UNIX remains relevant, given these things that no one really saw coming 20 years ago?

Darrin Johnson: I think I can speak for everybody here: all our companies provide cloud services, whether it’s public cloud, private cloud, or hybrid cloud, and whether it’s infrastructure as a service (IaaS), software as a service (SaaS), or any of the other as-a-service options. The interesting thing is that to really be able to provide that consistency and that capability to our customers, we rely on a foundation -- and that foundation is UNIX.

So our customers, even though they may start with, say, IBM, have choice. In turn, from a company perspective, instead of having to reinvent the wheel all the time for the customer or for our own internal development, it allows us to focus on the value-add -- the services and capabilities that build upon that foundation of UNIX.

So, something that may be 20 years old, or actually 40 years from the original version of UNIX, has evolved with such a solid foundation that we can innovate on.

Gardner: And what’s the common thread around that relevance? Is it the fact that it is consistently certified, that you have assurance that what's running in one place will run in another, on any hardware? How is it that the common spec has been so instrumental in making this a powerful underpinning for so much modern technology?

Josey: A solid foundation is built upon standards, because we can have, as you mentioned, assurance. If you look at the certification process, there are more than 45,000 test cases that give developers and customers assurance that there's going to be determinism. All of the IT people I have talked to say that deterministic behavior is critical, because when it’s non-deterministic, things go wrong. Having that assurance enables us to focus on what sits on top of it, rather than on whether the ‘ls’ command works right or how to find out how much space is left in a file system. Those are givens. We can focus on the innovation instead.
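To make that "given" concrete: on any certified UNIX system, an application asks how much space is left in a file system through the same standardized POSIX statvfs() interface, with no per-vendor code. This is a minimal sketch, not from the discussion itself; the helper name available_bytes is illustrative, not part of the standard.

```c
/* Minimal sketch: query free space through the POSIX statvfs()
 * interface, which behaves the same on any certified UNIX system. */
#include <sys/statvfs.h>

/* Return the bytes available to unprivileged processes on the file
 * system containing `path`, or 0 on error. */
unsigned long long available_bytes(const char *path)
{
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0)
        return 0;

    /* f_frsize is the fundamental block size; f_bavail counts the
     * blocks available to non-privileged users. */
    return (unsigned long long)vfs.f_frsize * vfs.f_bavail;
}
```

The same source compiles unchanged on Solaris, AIX, HP-UX, or Linux, which is exactly the kind of determinism the certification tests are meant to guarantee.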

Gardner: Over the past decades, UNIX has found itself at the highest echelon of high-performance computing and in high-performance cloud environments. It also reaches down to the desktop, into mobile devices and, pervasively, into micro-devices, embedded, and real-time computing. How has that also benefited from standards -- having a common code base up and down the spectrum, from micro to macro?

Several components

Johnson: If you look at the standard, it contains several components, and it's really modular in a way that, depending on your need, you can pick a piece of it and support that. Maybe you don't need the complete operating system for a highly scalable environment. Maybe you just need a micro-controller. You can pick the standard, so there is consistency at that level, and then that feeds into the development environment in which an engineer may be developing something.

That scales. Let’s say you need a lot of other services in a large data center; you still have that consistency throughout. Whether it’s Solaris, AIX, HP-UX, Linux, or even FreeBSD, there's a consistency because of those elements of the standard.

Gardner: Developers, of course, are essentially what keep any platform going over time -- the chicken-and-egg relationship: the more apps, the more relevant the platform; the stronger and more pervasive the platform, the more likely the apps. So, Jeff, for developers, what are some of the primary benefits of UNIX, and how have they contributed to its longevity?

Jeff Kyle: As was said, for developers it’s the consistency that really matters. UNIX standards develop and deliver consistency. As we look at this, we talk about consistent APIs, consistent command lines, and consistent integration between users and applications.

This allows developers to focus a lot more on interesting challenges and customer value at the application and user level. They don’t have to focus so much on interoperability issues between OSes, or even between versions of a single OS. Developers can easily support multiple architectures in heterogeneous environments, which, in today’s virtualized, cloud-ready world, is critical.

Gardner: And while we talk about the past story with UNIX, there's a lot of runway to the future. Developers are now looking at issues around mobile development, cloud-first development. How is UNIX playing a role there?

Kyle: The development that’s coming out of all of our organizations, and more, is focused first on cloud. It’s focused first on fully virtualized environments. It’s not just the interoperability with applications; it's the interoperability between, as I said before, heterogeneous environments and multiple architectures.

In the end, customers are still trying to do the same things that they always have. They're trying to use applications and technology to get data from one place to another and more effectively and efficiently use that data to make business decisions. That’s happening more and more "mobile-y," right?

I think every HP-UX, AIX, Solaris, and UNIX system out there is fully connected to a mobile world and the Internet of Things (IoT). We're securing it more than any customers realize.

Gardner: Tom, let’s talk a little bit about the hardware side and the recognition that cost and risk play a huge part in decision-making for customers, for enterprises. What is it about UNIX, now and into the future, that allows a hardware approach that keeps those costs and risks down and makes it such a powerful platform combination?

Scale up

Tom Mathews: The hardware approach for UNIX has traditionally been scale-up. There are a lot of virtues and customer values around scale-up. It’s a much simpler environment to administer than a scale-out environment, which is going to have a lot more components and complexity. So that’s a big value.

The other core value that is important to many of our customers is that there has been a very strong focus on reliability, availability, and scalability. At the end of the day, those three words are very important to our customers. I know that they're important to the people who run our systems, because having those values allows them to sleep at night and have weekends with their families and so forth. In addition to just running the business, things have to stay up -- and it has been that way for a long time, 7×24×365.

So these three elements -- reliability, availability, and scalability -- have been a big focus, and a lot of that has been delivered through the hardware environment, in addition to the standards.

The other thing that is critical, and this is really a very important area where the standards figure in, is around investment protection. Our customers make investments in middleware and applications and they can’t afford to re-gen those investments continuously as they move through generations of operating systems and so forth.

The standards play into that significantly. They provide the stable environment. In the standards test suite right now, there are something like 45,000 tests for standards conformance. So it's stability, reliability, availability, and serviceability in this investment-protection element.

Gardner: Now, we've looked at UNIX through the lens of developers, hardware, and also performance and risk. But another thing that people might not appreciate is the close relationship between UNIX and the advancement of the Internet and the World Wide Web. The very first web servers were primarily UNIX; it was the de-facto standard. And then service providers -- those folks hosting websites, hosting the Internet itself -- were using UNIX for performance and reliability reasons.

So, Darrin, tell us about the network side of this. Why has UNIX been so prevalent along the way when the high-performance networks, and then the very important performance characteristics of a web environment, came to bear?

Johnson: Again, it’s about the interconnectedness. Back in my younger years, having to interface Ethernet with AppleTalk, or whichever technologies you picked, just the interfacing took so much time and effort.

Any standard, whether it’s Ethernet or UNIX, helps bring things together in a way that you don’t have to think about how to get data from one point to another. Mobility really is about moving data from one place to another in a quick fashion where you can do transactions in microseconds, milliseconds, or seconds. You want some assurance in the data that you send from one place to another. But it's also about making sure of, and this is a topic that’s really important today, security.

Knowing that when you have data going from one point to another, it's secured, and that at each node, or each point, that security continues -- standards, and making sure that IBM interoperates with Oracle and with HPE, really assure our customers of that. And even the people who don’t see the transactions going on can have some level of confidence that they're going to have reliable, high-performance, and secure networks.

Standardization and certification

Gardner: Well, let’s dig a little bit into this notion of standardization certification, of putting things through their conformity paces. Some folks might be impatient going through that. They want to just get out there with the technology and use it, but a level of discipline and making sure that things work well can bear great fruit for those who are willing to go through that process.

Andrew, tell us about the standard process and how that’s changed over the past 20 years, perhaps to not only continue that legacy of interoperability, but perhaps also increase the speed and the usability of the standards process itself.

Josey: Over those 20 years, we've made quite a few changes in the way that we do the standards development ourselves. It used to be that a group of us would meet behind closed doors in different locations, and there were three such groups of standards developers.

There was an IEEE group, an X/Open group (later to become an Open Group group), and an international standards group. Often, they were the same people, who had to keep going to these same meetings and seeing the same people, but wearing different hats. As I said, it was very much behind closed doors.

As it got toward the end of the 1990s, people were starting to say that we were spending too much money doing the same thing, basically producing a pile of standards that were very similar but different. So in late 1997-1998, we formed something that we call the Austin Group.

It was basically The Open Group’s members. Sun, IBM, and HP came to The Open Group at that time and said, "Look, we have to go and talk to IEEE, we have to talk to ISO, about bringing all the experts together in a single place to do the standard." So, starting in 1998, we met in Austin, at the IBM facility -- hence the name The Austin Group -- and we started on that road.

Since then, we've developed a single set of books. On the front cover, we stamp the designation of it being an IEEE standard, an Open Group standard, or an international standard. So technical folks only have to go to a single place and do the work once, and then we put it through the adoption processes of the individual organizations.

As we got into the new millennium, we changed our way as well. We don’t physically go and meet anywhere, anymore. We do everything virtually and we've adopted some of the approaches of open source projects, for example an open bug tracker (MantisBT).

Anybody can access the bug tracker, file a bug against the standard, and see all the comments that go in against a bug, so we are completely transparent. With the Austin Group, we allow anybody to participate. You don't have to be a member of IEEE or an international delegate anymore to participate.

We've had a lot of input, and continue to have a lot of input, from the open-source community. We've had prominent members of the Linux and open-source communities, such as maintainers of key subsystems like glibc and of commands and utilities. They come to us because they want to get involved; they see the value in standards.

They want to come to a common agreement on how the shell should work, how this utility should work, how they can pull POSIX threads and things into their environments, how they can find those edge cases. We also had innovation from Linux coming into the standard.
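To make the POSIX-threads point concrete: the same few calls -- pthread_create() and pthread_join() -- compile and behave the same across Solaris, AIX, HP-UX, and Linux. This is a minimal sketch, not from the discussion; the names add_one and run_worker are illustrative, not part of the standard.

```c
/* Minimal sketch of POSIX threads portability: this file compiles
 * unchanged on any system implementing the POSIX threads interfaces. */
#include <pthread.h>

/* Trivial work to run on the new thread: increment an int in place. */
static void *add_one(void *arg)
{
    int *n = arg;
    *n += 1;
    return NULL;
}

/* Spawn a worker thread, wait for it, and return the incremented value
 * (or -1 if the thread could not be created). */
int run_worker(int start)
{
    pthread_t tid;
    int value = start;

    if (pthread_create(&tid, NULL, add_one, &value) != 0)
        return -1;
    pthread_join(tid, NULL);   /* join before reading: no data race */
    return value;
}
```

On most systems this builds with `cc file.c -lpthread`; the point is that no vendor-specific threading code is needed.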

In the mid-2000s, we started to look at this and say that new APIs in Linux should also be in UNIX. So in the mid-2000s we added, I think, four specifications that we developed based on Linux interfaces from the GNU Project, in the areas of internationalization and common APIs. That's one thing we've always wanted to do: keep raising that bar of common functionality.

Linux and open-source systems are very much working with the standard as much as anybody else.

Process and mechanics

Johnson: There's something I’d like to add about the process and the mechanics, because in my organization I own it. There are a couple of key points. One is, it’s great that we have an organization like The Open Group that not only helps create and manage the standard, but is also developing the test suites for certification. So it’s one organization, working with the community -- the Austin Group, and of course IEEE and The Open Group members -- to create the certification test suite.

If any one of our organizations had to create or manage that separately, that would be a huge expense. They do that for us; that’s part of the service, and they have evolved it as it has grown. I don’t know how many tests there were originally, but it has grown to 45,000, and they’ve made the process more efficient. And it’s a collaborative process. If we have an issue -- is it our issue, or is it a test-suite issue? -- there's great responsiveness.

So kudos to The Open Group, because they make it easy for us to certify. It’s really our obligation to get into that discipline, but if we factor it into the typical quality-assurance process as we release the operating system -- whether it’s an update or a patch, or whatever -- then it just becomes pretty straightforward. By the next major release that you want to certify, you've done most of the heavy lifting. Again, The Open Group makes it really easy to do that.

Mathews: Another element that’s important on this cost point goes back to the standards and the cost of doing development. Imagine being a software ISV. Imagine a world where there were no standards. That world existed at one point in time. What it caused is this: ISVs had to spend significant effort to port their software to each platform.

That's because the interfaces and the capabilities on all of those platforms would be different. You would see differences all the way across. Now, with the standards, ISVs basically develop for only one platform: the platform defined by the standards.

So that’s been crucial. It’s that the standards have actually encouraged innovation in the software industry because that just made it easier for developers to develop, and it's less costly for them to provide their stuff across the broad range of platforms.

We have three people from the major UNIX vendors on the panel, but there are other players out there, too, and the standards have been critical over time for everybody, particularly when the UNIX market was made up of a lot of vendors.

Gardner: So we understand the value of standards and we see the role that a neutral third-party can play to keep those standards on track and moving rapidly. Are there some lessons from UNIX of the past 20 years that we can apply to some of the new areas where standards are newly needed? I'm thinking about cloud interoperability, hybrid cloud, so that you could run on-premises and then have those applications seamlessly move to a public cloud environment and back.

Andrew, starting with you, what it is about the UNIX model and The Open Group certification and standardization model that we might apply to such efforts as OpenStack, or Cloud Foundry, or some other efforts to make a seamless environment for the hybrid cloud?

Exciting problem

Josey: In our standards process, we're able to take on almost any problem, and this would certainly be a very exciting one for us to tackle. We're able to bring different parties together, looking for commonality, to try to build consensus.

We get people in the room to talk through the different points of view. What The Open Group is able to do is provide a safe harbor where the different vendors can come in and not be seen as talking in an anti-competitive position, but actually discuss the differences in their implementations and decide the best common way to go forward in setting a standard.

Gardner: Anyone else on the relationship between UNIX and hybrid cloud in the next several years?

Johnson: I can talk to it a little bit. The real opportunity -- and I hope people reading this, especially in the OpenStack community, are listening -- is that true innovation is best done on a foundation. OpenStack is a number of loosely affiliated communities delivering great progress, but there are interoperability gaps -- not by intent; it's just that people are moving fast. If some foundation elements can be built, that's great for them, because then we, as vendors, can more easily support the solutions that these communities are bringing to us, and then we can deliver them to our customers.

Cloud computing is the Wild West. We have Azure, OpenStack, and AWS, and they could benefit from some consistency. Now, I know that each of our companies will go to great lengths to make sure that our customers don't see that inconsistency. So we bear the burden for that, but what if we could spend more time helping the communities be more successful rather than, as I mentioned before, reinventing the wheel? There is a real opportunity for that synergy.

Kyle: In hybrid cloud environments, what UNIX brings to customers is security, reliability, and flexibility. So the Wild West comment is very true, but UNIX can present that secure, reliable foundation to a hybrid cloud environment for customers.

Gardner: Let’s look at this not just through the lens of technology, but through some of the more intangible human and cultural issues, like trust. It seems to me that, at the end of the day, what makes something successful for as long as UNIX has been successful is that enough people, from different ecosystems and different vantage points, have enough trust in the process and in the technology, and through the mutual interdependency of the people in that ecosystem, they keep it moving forward. So let’s look at this from the issue of trust, and why we think that will enable a long future for UNIX.

Josey: We like to think The Open Group is a trusted party for building standards and that we hold the specification in trust for the industry and do the best thing for it. We're fully committed always to continue working in that area. We're basically the secretariat, and so we're enabling our customers to save a lot of cost. We're able to divide up the cost. If The Open Group does something once, that’s much cheaper than everybody doing the same thing themselves.

Gardner: Darrin, do you agree with my premise that trust has been an important ingredient that has allowed UNIX to be so successful? How do we keep that going?

One word: Open

Johnson: The foundation of UNIX, going back even to the original development, but certainly since the standards came about, is one word: “open.” You can have an open dialogue to which anybody is invited. In the case of the Austin Group, it’s everybody. In the case of any of the efforts around UNIX, it’s an open process and open involvement, and in the case of The Open Group -- which is kind of another “open” -- it’s vendor-neutral. Their goal is to find a vendor-neutral solution.

Also, look at it this way. We have IBM, HPE, and Oracle sitting here, and, I’ll say, virtually, Linux. Other communities that are participating are coming to mutual agreements, and this is what we believe is best.

And you know what, it’s open to disagreement. We disagree all the time, but in the end what we deliver and execute is of mutual agreement, so it’s open, it’s deterministic, and we all agree on it.

If I were a customer, IT professional, or even a developer, I'd be going, "This foundation is something on which I want to innovate, because I can trust that it will be consistent." The Open Group is not going to go away any time soon, celebrating 20 years of supporting the standard. There's going to be another 20 years.

And the great thing is that there is a lot of opportunity to innovate in computer science in general, and the standard is building that foundation, taking advantage of topics like security, virtualization, and mobility -- and the list goes on. We even have the opportunity to build, in an open way, something that people can trust.

Gardner: Tom, openness and trust, a good model for the next 20 years?

Mathews: It is a good model. Darrin touched on it. If we need proof of it, we have 20 years of proof. The Open Group has brought together major competitors and, as Darrin said, it’s always been very open, and people have always -- even with disagreement -- come to a common consensus. So The Open Group has been very effective at establishing that kind of environment, that kind of trust.

Gardner: I’m afraid we'll have to leave it there. Please join me in thanking our panelists today for joining this discussion about enabling innovation through UNIX on its 20th anniversary.

Congratulations to The Open Group and to the UNIX community for that.

And also look for more information on UNIX on The Open Group website, www.opengroup.org, and thank you all for your attention and input.


Copyright The Open Group and Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Thursday, April 07, 2016

A Hit with Consumers, Digital Payments Now Catching On Across the Business World Too

Transcript of a discussion on how the popularity of digital payments in the consumer world is now spreading to the B2B payments world as well, and for good reason.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript.
Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Our next technology innovation thought leadership discussion focuses on how digital payments are catching on for many more companies in the business world following the popularity of services like Apple Pay in the consumer world.

We'll now explore how digital payment solutions are changing the game for small companies like 487 Consulting Services, which is seeing faster and simpler payments using AribaPay. And we will hear more about how AribaPay is expanding around the globe.

With that, please join me in welcoming our guests, Drew Hofler, Senior Director of Marketing at SAP Ariba. Welcome, Drew.

Drew Hofler: Thank you, Dana, great to be here.

Gardner: We're also here with Ken Crouse, Principal Consultant and Owner at 487 Consulting Services in Folsom, California. Welcome, Ken.

Ken Crouse: Thank you very much. Appreciate it.

Gardner: And we are also here with Bill Dulin, Vice President of Commercial Payments at Discover in Chicago. Welcome, Bill.

Bill Dulin: Hey, thank you.

Gardner: Drew, for almost anything that consumers want to buy these days there's a swipe or a card chip, and we are now into wireless connectivity for payments. And yet, with business-to-business (B2B), we're still many times faxing and writing paper checks -- and it's largely still a manual process.

So why such a dichotomy between what people can do as a consumer buying gasoline, for example, and a company buying critical goods and services?

Hofler: It's fundamentally the difference between payments in B2B and the consumer world. For consumers, it's relatively simple.

Everything that you're going to buy is in a single cart at the time of payment, and it all takes place in one spot. The information and the payment itself happen together.

In the B2B world, that is just simply not the case. In the B2B world, you have an invoice that comes in for a good delivered or service rendered, and then payment may happen 30, 45, 60, 90 days later, and that payment may include more than one invoice.

Oftentimes in the B2B context, it includes hundreds of invoices on a single credit of funds into an account. So there's a huge gap between the payment and the information, and that’s what we're trying to solve. That's where the innovation needs to come, bringing that information that’s necessary for all parties to know what's being paid for, when, and why, bringing that together with the settlement of funds in a very secure environment.

Closing the gap

Gardner: But we are closing the gap. Tell us a little bit about AribaPay. How long has it been around and why is it now in a position to begin closing that gap even more rapidly than ever?

Hofler: We launched general availability of AribaPay a little over a year ago, and we started here in North America. We've seen rapid growth, and we just announced that we're expanding into Canada with our partner, Discover. We're also expanding, later in the year, into Europe and Latin America.

Even though the payment systems are different, the fundamental issue with B2B payments -- the disconnect between information and the settlement of funds -- is the same no matter where you go geographically.

So that's why we're taking it global, and why we're in a position to really change the game and innovate in B2B payments. We sit at the nexus of the digital network age, which is a very different age from where payments began.

You have electronic payments like ACH in the US (or SEPA in Europe), and these types of electronic payment schemes were created based on the paradigm of an earlier time -- for ACH, back in the '70s, that meant COBOL-based mainframes behind brick walls, with no way to connect a buyer's systems with a supplier's systems.

But now, we live in the digital age, where the Ariba network connects millions of buyers and suppliers together to transact and move terabytes of data in real time between back-end systems.

Instead of doing what electronic B2B payments have done in the past -- taking a small subset of that information and attaching it to the payment (the ACH format allows 80 usable characters in the US; the SEPA format allows 140 characters in Europe, the same as a tweet) -- we're taking the payment and attaching it to all of the information that’s already on the Ariba Network: the purchase order (PO), the invoice, the reason why an invoice may be paid a little less than was expected. All of that information is fully available on the network.

We make it visible with the payment, so that both buyers and suppliers know exactly what's being paid, why it's being paid, what this million-dollar deposit is, even if it's a thousand invoices, and why it may be a little different than the supplier was expecting. All of that is fully visible and available on the Ariba Network.

Gardner: Bill, tell us a bit about the role that Discover plays in all this. And how do you feel about the gap closing between what happens in the consumer space and what can now happen in the business space?

Facilitating payments

Dulin: Let me start with what AribaPay is not: it's not a card offering. Usually, when people see the Discover logo, they think of a credit card, but this is not that. We're using our infrastructure to facilitate commercial payments.

Dulin
In that case, we're making sure that we're gathering the bank account information, we're acting as the financial institution of record, and we're onboarding the suppliers, so all of that information is now in our trusted network. That's how we show up as the financial institution, as the bank. We then move the money and, as Drew talked about a little earlier, the data along with it. That's really where the gap is closing. We're bringing the data and the financial transaction together.

Gardner: Drew, this is not just for large companies; it should be for any company. The long tail, if you will, the larger number of participants, will be the small-to-medium-sized businesses (SMBs). Is there something different or special in it for them, beyond your Global 2000 corporations?

Hofler: It’s particularly different for the receivers of payments on that long tail. The large companies have the IT resources they need to manage the complex electronic payments that are available today. That's based on EDI and things like that, and that's great.

But then the midsize to the smaller suppliers simply don't have the technical resources to consume the information in those formats. They just can't do it. What AribaPay really does is it makes it as simple as possible.

It is as simple as an email with the information about the payment and a link into their account on the Ariba Network, where they can see all the information around their payment in a very nice UI. For example, if they were expecting a $1,000 payment and they got $900, the big question is why. There may be 10 invoices on that payment.

They come in, click that link, and go right into their account on the network. They see the payment ID for the $900 they received, and we show them exactly what was invoiced: the $1,000. You expected $1,000, but you received $900, and here is exactly where the difference comes from.

They have hyperlinks to go into the invoice. They can see the comments that may have been made on how maybe something was broken on the pallet, and so they only paid for 9 items instead of 10.

All of that is a very simple online experience.

Gardner: Ken, tell me a bit about 487 Consulting Services, what you do, and then we'll ask about how you like to get paid?

One-man shop

Crouse: 487 Consulting Services is my personal business. It's a one-man shop. I literally get up in the morning, walk over and turn on the coffee pot and walk over to my desk. That's probably the best part of being an independent.

Crouse
The other side of being an independent, though, is that I'm responsible for every single aspect of the business from submitting the financial filings that we did with Discover and getting on board with everybody and actually doing the work for which I'm getting paid. It's all done by me and is controlled by me.

There is no IT department. There is no human resources department. There is no large infrastructure behind me -- it's just me. I came to SAP Ariba via a customer that said they wanted to pay me that way.

Initially, I was a little apprehensive because I was expecting that I'd have to learn a new program. I could just flash back to COBOL in college back in the '80s, and that was petrifying, but the simplicity and the transparency of SAP Ariba was just refreshing.

The first webinar I attended, although scheduled for one hour, lasted only about 30 minutes because of the simplicity. Within a couple of days, I was able to get all my paperwork together for Discover, and I was live on Ariba in less than a week.

Two weeks later, I received my first series of payments through Ariba, and I have now been receiving payments since the first of January 2015. Ariba has processed something north of 300 invoices for me, amounting to probably 500 to 600 individual tasks.

Gardner: I think there are going to be more and more folks like you: smaller businesses and independents working to provide discrete services throughout our economy, around the world, many of them working off just a smartphone.

So this is an important part of our growing economy, but also it’s important for an organization like yours to have great visibility to know when the money is coming and when to expect it. Cash flow is pretty important.

So tell me a little bit about that visibility and expectation. How has this system worked better than paper, faxes, and checks?

Previous system

Crouse: It's probably best that I take a step back and review where I was before Ariba. As you mentioned, it was a paper invoicing system. My customer required that each purchase order be on a separate piece of paper for invoicing purposes.

So I might create 15 or 20 invoices, put them all in the same envelope with a nice little transmittal sheet, and mail them off. Then, 75 days later, when I wasn't getting paid for some invoice, I would get hold of them, and they would say, "Oops, your invoice isn't in our system." And I'd start all over again. That was time away from work. I had to stop what I was doing, resubmit the invoice, and then start the clock all over again.

Now, with the Ariba Network, when it comes time to do my invoice and do it about twice a month, I open my Ariba account, identify the purchase order to be billed, click the service that's to be billed and click the submit button. Quite literally, the invoicing is just that simple.

Within a matter of minutes, I receive recognition that the invoice is in the system, as opposed to waiting 75 days for confirmation that it's not there. I receive a positive affirmation within just a matter of minutes.

And then, within 48 to 72 hours, I have a customer who has acknowledged and approved that invoice for payment. At that point, I know with certainty that the payment is going to come in, and on a date certain. I can forecast my cash accordingly and then go on vacation. I don't have to worry about it.

Gardner: Also, Drew mentioned this opportunity for more rich information to be associated with the transaction, remittance information for example. Have you been able to avail yourself of that and is that an important part of what you're doing, being able to see all the information associated with an invoice or a payment process?

Crouse: When I get the notifications of the payment being in there, it's broken down line item by line item that corresponds to the exact tasks that I have done for that particular payment. I enjoy the fact that it is all in one payment and broken out that way.

In the past, a year and a half ago, I might receive individual payments for all of those invoices. I'd get an envelope in the mail that might have a dozen checks in it, and then I'd have to go back and reconcile each check against an invoice. It was just a very time-consuming and clumsy effort.

The other part is that I wouldn't necessarily get paid for all of my invoices submitted on a given date at the same time. I'd get paid for 10 of the 12 invoices and then would have to start this tail-wagging-the-dog episode of chasing down payments on the other invoices. Although the majority of them might be paid in 60 days, it wasn't uncommon for them to stretch out to 120 or 150 days.

Digitizing processes

Gardner: Bill, any thoughts from the Discover perspective on the ability to not just repave cow paths, but actually do things in business that could not have been done before, given that we are digitizing these processes?

Dulin: A key for us in this, and something we haven't talked about too much, is the compliance around it. As we move these payments, we know who the customer is, we handle anti-money laundering, and we handle all the regulatory compliance that goes with it. That makes it a more robust payment.

We become more sophisticated as the technology wraps around that payment, knowing where it's going and where it should be going. If something happens that triggers it, it makes us stop and take a look, to make sure. Sometimes, we talk about purposeful friction: something triggered an event that made us stop the payment, take a look around, and make sure we have it right.

From our perspective in this case, it's not so much the technology; it's pulling that sensitive information out of enterprise resource planning (ERP) programs or other places it shouldn't be, putting it in a financial institution, and then using the technology around it to help secure it.

Gardner: Now, we heard a lot at the recent Ariba Live 2016 Conference about risk reduction and visibility in the supply chain, that it's really about managing your supply chain. Is there something about using AribaPay, with all that associated data, that gives people more insight into their supply chain than they may have had: auditability, the ability to further define what they want in terms of best practices, Drew?

Hofler: More data is better than less data, as long as you can consume it and put it in a usable format, and that's really what we are doing.

Knowing exactly who is being paid and removing the opportunities for fraud in the payment process is huge, and AribaPay really removes those opportunities for fraud or a vast majority of them.

We have this whole platform of information and data about the interactions between a buyer and their supplier, from the moment that they source, to when they procure, to the PO, to the invoice, to the payment going through. They can see the on-time performance and they can see how often that supplier requests early payment, if they're using Dynamic Discounting on the Ariba Network, and they can feed that back into the procurement side and start to define payment terms as a result of that at the very beginning.

Gardner: I am afraid we will have to leave it there. You've been listening to a BriefingsDirect thought leadership podcast discussion on how digital payments are catching on for many more companies in the business world. And we've seen how the popularity of digital payments in the consumer world is now spreading to the B2B payments world as well, and for good reason.

So please join me now in thanking our guests, Drew Hofler, the Senior Director of Marketing at SAP Ariba; Ken Crouse, Principal Consultant and Owner at 487 Consulting Services, and Bill Dulin, Vice President of Commercial Payments at Discover.

And a big thank you, too, to our audience for joining this SAP Ariba-sponsored business innovation thought leadership discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: SAP Ariba.

Transcript of a discussion on how the popularity of digital payments in the consumer world is now spreading to the B2B payments world as well, and for good reason. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Friday, April 01, 2016

How New Technology Trends Disrupt the Very Nature of Business

Transcript of a discussion on how major new trends and technology are translating into disruption, and for the innovative business -- opportunity.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: SAP Ariba.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Gardner
Our next technology innovation thought leadership discussion focuses on how major new trends in technology are translating into disruption, and for the innovative business -- opportunity.

From invisible robots, to drones as data servers -- from virtual reality to driverless cars -- technology innovation is faster than ever, impacting us everywhere, broadening our knowledge, and newly augmenting processes and commerce. We'll now explore the ways that these technology innovations translate into business impacts, and how consumers and suppliers of services and goods can best prepare.

To learn more about the future of business innovation, we’re joined by two guests, Greg Williams, Deputy Editor of WIRED UK, and Alex Atzberger, President of SAP Ariba.

Greg, as we see a lot of trends happening, a lot of change in the industry, people talk about the pace picking up quicker than ever. What are some of the major disrupting trends that you see in technology, and then which of those do you think are going to be the most impactful for business people?

Greg Williams: You listed a whole bunch of things, which are all incredibly important in moving forward. They're near-term in ways that sometimes people don’t consider them to be near-term. Technology shifts tend to be almost things we don’t notice. They're not happening slowly any longer; they're happening quickly, and we're almost not seeing them.

Atzberger
We talk about something like robotics. Now, you can see all kinds of incredible things. You can see in Japan a robot caregiver that can lift elderly people out of their beds and can care for them in that way. You can see slightly sinister videos from Boston Dynamics of robot dogs running along that look pretty scary. But what most innovation looks like are things that we almost don’t notice are there.

For instance, and this is a boring example, an ATM is kind of a robot; a vacuum cleaner is; an elevator is. Those are things we don’t necessarily notice. They're not as dramatic as we think.

I should just caveat that and say that everything is moving very, very quickly right now. That's why it’s hard to make very clear predictions.

The other thing that’s important is this joining up of lots of different technologies. That’s the biggest trend that I see right now. We can talk about satellites and drones, which are effectively servers in the sky or we can talk about autonomous mobility and augmented reality, but it’s all about connecting the dots.

Technology players

One thing that's interesting now is the way that car manufacturers are all technology players. Every automotive manufacturer is figuring out that what they have is a computer on wheels. They have to figure out how, when people drive into a parking lot, they make an automatic payment via the vehicle. How can the vehicle know that people's groceries are ready to be picked up at a certain point?

Williams
Although it’s nice to list robots and autonomous vehicles and other clear technological shifts, the thing that we're really seeing is the speeding up and this coming together, this joining and connecting of the dots. Basically, all are based on three things: ubiquitous computing, mobile technology, and the cloud. Those three things underpin pretty much everything that we're going to be talking about in the next 20 minutes.

Gardner: Alex, when I hear Greg, I'm thinking business networks, although people in the consumer space might not think of them as business networks. It’s the network effect, it’s intelligence shared, it’s linking things up and allowing the pace to increase and people to share knowledge and activities. What do you see as the crossover from the consumer space in the behaviors and culture of technology and then how does that translate to the business idea of a network?

Alex Atzberger: I was recently in Dubai, and they have a Museum of the Future that they're launching this year. In the Museum of the Future, you can see what it would be like to go to a doctor to get a new body part so you can jump higher or move faster. Business embraces the same sort of idea: How can I augment my business to run smarter and better? Where can I augment myself to use data better?

You can no longer be an island as a company. You need to share ideas and innovation with others. You need to be connected, and when you're connected, you can transform your business, you can do new things, you can take on new capabilities, and you can augment your business.

Companies ask us, "Now that I'm connected to a network, how can I get data out of that network to improve my business processes and do things better?" That's what they basically call the augmented enterprise, to get augmented intelligence to that business.

Gardner: We're seeing different patterns, not only in adoption, but expectations. People are seeing a mobile device tied to a cloud that has deep learning capabilities, and feedback loops that are applying the data back and forth. People are becoming ready for the next move. They want the technology to guide them. And they also don't want to take the time to learn a process; it has to be intuitive to them.

So how do these human behavioral aspects of anticipating a proactive technological helping hand impact both us in our consumer space, as well as what we would expect in our business environment?

Simplicity is key

Williams: Simplicity is absolutely key to all technology. We have to think about the end user. The end user or the customer is always the most important thing in any kind of technology process.

Going back to what Alex was talking about in terms of artificial intelligence (AI), what it’s going to allow us to do is be a lot more predictive in terms of consumer behavior and customer behavior.

If you look at something like natural language processing now, some of the startups in that space are working with automotive manufacturers, to go back to my previous example. They will look at trends on social media and elsewhere, and maybe at import and export data, and use those predictive trends to make predictions about, say, General Motors' sales in the next quarter.

From the sky, we can look at parking lots at Target, Costco, and Walmart stores and make predictions about whether the quarterly earnings report for Walmart, or whichever retailer, is going to be pretty strong this quarter.

What we are looking at is this constant connecting of the dots, and to Alex’s point, this incredible accumulation of data. That’s the real tough thing for businesses right now. I don’t think there’s any business out there that doesn’t understand the value of data. This phrase "big data" is one that you'll hear at every single conference, but how can we possibly parse value out of that? How can we use that data in a predictive way, rather than as a lagging indicator?

Most businesses have used data as a historical indicator. So, it's looking at sales reports or whatever other data is important within your organization. How we can use all those external factors is going to become increasingly important for businesses. Can we see how our competitors are doing by looking at the job postings that they have maybe? How can we see what their next move is in terms of manufacturing by looking at their import/export data? Can we look at the amount of money they're spending on Google AdWords and see what keywords they're spending money on?

As I said previously, it’s about connecting of the dots and bringing this information together, and also figuring it out, having someone within your organization who's not going to get overwhelmed by this data, but is curating it, and knows what’s important and what’s not important to the enterprise, because a lot of it isn’t.

Gardner: User experience plays a huge role in how we can consume and make good on this technology, on this data, on this analysis. What Greg said about simplicity can be deceiving. It might seem simple to the end user, but an awful lot has to happen in order for that effect to take place.

So Alex, one of the interesting things I've seen from SAP Ariba recently is this notion of Guided Buying. I love that word "guided," because you're anticipating the user and heading off complexity for them. But what does it take behind the scenes to actually make that happen?

Guided Buying

Atzberger: There’s a whole lot that it takes to get this going. The idea of Guided Buying was always that simplicity that all customers are asking us for. It’s really about how I make the user feel empowered and give the power to the user, but at the same time, embed intelligence in the software.

In our cloud applications, we thought through every step of the process, starting with monitoring how users were behaving with the system. So it’s a design thinking approach, and it starts off with deep empathy with the user. That’s the first point.

The second point is understanding what the business actually wants to accomplish, because the business actually runs a business. They have rules, methodologies, things that they want to achieve.

I was with one CPO who told me, "Alex, I look at this beautiful software, but you're making it too easy to buy. I don’t want people to just go out and buy stuff." That’s absolutely a good point, but what we're doing is embedding the logic of the buying in the enterprise into Guided Buying. That’s the difference between B2C and B2B.

In B2C you can have that beautiful experience. You just want to make the experience so seamless that you drive commerce. In B2B, you want to guide the commerce, to be more relevant and fit your company goals. That requires a slightly different approach to how you solve that problem. We're obviously deeply committed to solving that problem in the context of giving users as much freedom and choice as possible while enabling the business to achieve their goals.

Williams: Alex used a really great phrase and it’s one that we actually had a discussion about in the office, which is the importance of design thinking within organizations. When you think about software or any technology, the user experience is your brand. So, it’s the people experiencing it.

In pretty much every organization now, the "design brief" is a really important part of the process. Designers need to be brought in, whether they're software designers or, in the B2C space, UX designers. They need to have a seat at the top table these days, because they're such an integral part of defining any kind of brand.

Atzberger: We hire a lot of designers into SAP Ariba, but interestingly, a lot of the engineers come and say they need to think about design as well. So, it’s not like design is still a separate department. At one point, design becomes part of what we call a scrum team that basically builds the software, and an engineer should have a point of view as well in terms of what is good design.

You could argue that there are some sites that don’t necessarily look pretty, but they're really easy to use. So, it’s not just about the visualization and the fonts, etc.; it’s about also how many clicks and the logic behind it. That’s where product people want to be product people. They don’t want to just be engineers or just designers.

Important element

Gardner: I suppose another important element to this is a user experience that isn't one-size-fits-all, but one where customization is brought to bear, and because of the technology, the intelligence, and access to a cloud infrastructure, we can do that. There are examples of customization at the individual worker level, where role-based and policy-based approaches can do that.

We're also seeing, with the SAP Ariba cloud, that you're bringing master data (vendor data, for example) into the cloud, cleansing it, and making it usable, while still keeping it germane to that particular company, so that this isn't just a generic business app for everyone. Let's delve a little bit into this idea of customization, specific to a company and even down to the individual user. Why is that so important now in business applications, Alex?

Atzberger: The premise of the cloud was always speed. What you gave up for the speed was the ability to customize, especially in enterprise systems. What we're now saying is that you can have a level of individuality and things that are important to you, either through configuration or through extending the platform that you're on.

That’s the power of the technology that comes to bear when you look at platforms today. If you look at Amazon Web Services or what SAP is doing with the HANA Cloud Platform, it’s essential, because it gives the capabilities to companies to actually customize further.

At the same time, we have a concept of the private and the public persona, because at the end of the day, there is some data that’s private to a company and then there's data that's publicly shared. We need to be very sensitive of what data is relevant and in what context.

Gardner: Greg, one of the areas where business can get out in front of the technology curve is this idea of customization and anticipatory or predictive analytics’ benefits. It seems that we're only scratching the surface here. When I go on Netflix, they still can’t pick shows that I really want to watch. When I go to Amazon and they have My Box or My Stuff, it's really just things I already bought with a little bit of augmentation.

If we can take this to the full potential of customization, and I think businesses can, because they have access to the data and can apply policies, probably better than a mass consumer environment could, what's the potential when the machines can really start delivering customization and predictive analytics, applied to how we get productive in a business sense? It strikes me as something quite significant.

Williams: Yeah, it is. I was talking to someone at a California startup that is developing a sales tool. This person worked for many years at a very large enterprise that builds CRM software. His new business is very interesting because he's trying to do what you described, almost by being a search engine for the entire business Internet. This has to be verified, but their claim is that they're much more efficient than regular salespeople.

Say you're trying to sell your software product into a telco. You'll spend a lot of time learning about the person who purchases those services. You'll go to conferences, read blogs, develop networks, and put a lot of effort into this process.

His startup suggests that they'll be able to identify not only the companies that you can sell into, but the actual individuals. It will become a lot more detailed: this is what they're interested in, this is what they're not interested in, this is the conference they've been to. Increasingly, we'll have more and more intelligence on people: their habits, their preferences, their interests, and their connections.

Creative business

Take your Netflix example. Netflix moved from simply being a content-delivery service to being a creative business by looking at a kind of Venn diagram of its users' interests. They saw a sweet spot of overlap among Kevin Spacey, David Fincher, and the original House of Cards from the UK. There was a huge number of people who loved those three things. They said, "Great. Let's commission this series."

Every time that users interact with the service, it's helping to improve it. Netflix knows what you watch, when you watch it, where you stop, where you don’t finish, where you fast-forward, and where you rewind. So, they're collecting huge amounts of data that can be used not just to understand consumer behavior, but also to get insights that can be used for decisions around content.

Gardner: So, Alex, translate this to the business environment. The business network your company is aligned with can determine how effective this new trend becomes: customization, anticipation, and sales becoming more of a science than an art, as Greg mentioned. This tells me that the right network with the right information is a crucial decision for you. How does that work in terms of companies differentiating themselves based on who they work with in their ecosystem?

Atzberger: First of all, any company that engages in a network and then captures the data to make better business decisions is already on that journey. If you look at the social networks today, if you like three things on Facebook, Facebook knows more about you than your best friend. If you like more than 10 things, Facebook knows more about you than your spouse. That’s the logic, and the same happens in business networks as well.

What we see a lot is that businesses are connecting to networks to conduct global business, to find new market opportunities, and become much better at actually mining and understanding that data to become more pointed in terms of what solutions they actually want to provide to the market.

But we're still at the very beginning of this trend. We're working with companies on enabling Data as a Service, where they leverage the data itself to create more insight into their business, pursue better business opportunities, actually change their product offering, and innovate with their supplier base. If we do that, we're effecting real change, and that's absolutely feasible today, but we're still early on.

Gardner: Any examples, Alex, of companies that really get this and are showing demonstrable benefits, that are tagging innovation onto what their businesses traditionally were but taking it in a new direction based on some of the technological benefits we've talked about? Poster children for innovation, perhaps?

Atzberger: When I think about poster children for innovation, I think about companies that are really looking to the network as infrastructure. What are the other things I can do through this network in order to change my business or add new capabilities?

What I love is when we have customers who talk about how they can actually change their industry, or their entire supply chain. We have one high-tech manufacturer that thinks about how to get demand signals to its supplier base much faster, so it can actually impact the end customer. I like that thinking a lot.

Gardner: Greg, last thoughts on what's to come, how technology and business combine to transform how we get things done and perhaps even improve our quality of life?

Solving big problems

Williams: That’s obviously the fundamental end result, one hopes, of all technological change -- that people have better lives and we solve big problems. Looking forward, we're going to see, as Alex has been describing, a real joining of the dots. There aren’t necessarily going to be things that are dramatic, but we're going to see increasing amounts of AI, for instance, offering us insights in industries such as healthcare that only machines are capable of determining because of the sheer volume of data that they can analyze.

I was recently talking to someone in the security industry whose firm does a lot of work for the Pentagon. He told me they analyzed tweets about ISIL during one week in August last year. Most were about the security situation in various parts of Northern Iraq and Syria, account promotion, religion, and strategic updates -- but then they came across an outlier they had never noticed before.

The official ISIL accounts were re-tweeting any mention of female fighters or women in ISIL -- there was clearly a big push by ISIL to recruit women. What happened? Six weeks later, we had the first female suicide bomber in Europe, in Paris. Now, those two things may well not be linked, but we're able to see things in the data now that we have never been able to see before, and organizations will increasingly be putting those things to use.

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a BriefingsDirect thought leadership podcast discussion on how major new trends and technology are translating into disruption, and for the innovative business -- opportunity. And we’ve heard how technology innovations translate into business impacts, and how consumers and suppliers of services and goods can best prepare.

So please join me in thanking our guests, Greg Williams, Deputy Editor, WIRED UK in London, and Alex Atzberger, President of SAP Ariba.

And a big thank you too to our audience for joining this SAP Ariba-sponsored business innovation thought leadership discussion.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: SAP Ariba.

Transcript of a discussion on how major new trends and technology are translating into disruption, and for the innovative business -- opportunity. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Tuesday, March 15, 2016

How HPE’s Internal DevOps Paved the Way for Speed in Global Software Delivery

Transcript of a BriefingsDirect discussion on how HPE finds the sweet spot for continuous development and delivery of software products.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HPE Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Gardner
Our next DevOps case study explores how HPE’s internal engineering and IT organizations, through research and development, are exploiting the values and benefits of DevOps methods and practices.

To help us better understand the way that DevOps can aid in the task of product and technology development, please join me in welcoming our guest, James Chen, Vice President of Product Development and Engineering at HPE. Welcome, James.

James Chen: Thank you, thank you for having me.

Gardner: First tell us a little bit about the scale of the organization. Clearly HPE is a technology company with a very large internal IT organization, perhaps one of the largest among the Global 2000.
DevOps: Solutions That Accelerate Business Innovation
And Meet Market Demands
Learn More Here
Chen: We have a pretty sizeable IT organization, as you can imagine. We support all the HPE products and solutions serving our customers. We have about 8,000 to 9,000 employees, and we have a pretty large landscape of applications, something like 2,500 large, enterprise-scale applications.

Chen
We also have six data centers that host all the applications. So it's a pretty complicated infrastructure. DevOps means a lot to us because of the speed and agility that our customers are looking for, and that’s why we embarked on our journey to DevOps.

Gardner: Tell us about that journey. How long has it been? How did you get started? Maybe you can also offer how you define DevOps, because it's a bit of a loose term in how people understand and define it.

Chen: We've been on the DevOps journey for the last couple of years. Certain parts of the organization -- the developer teams -- were already practicing different aspects of DevOps here and there. Someone was driving complete test automation. Someone was doing a form of Continuous Integration and Continuous Delivery (CICD). But it never reached the scale that we believed would start impacting the overall enterprise application landscape.

Some months ago, IT embarked on what we called a pilot program for DevOps. We wanted to be the ones doing DevOps in HPE, and the only way you can benefit from DevOps and understand its implications for the IT organization is just to go out and do it. So we picked some very complicated applications, believing that if we could do the pilot well, we would learn a lot as an organization, it would be helpful to the future of the IT organization, and it would deliver value to the business.

We also believed that our learning and experiences could help HPE’s customers to be successful. We believe that every single IT shop is thinking about how they can go to DevOps and how they can increase speed and agility. But they all have to start somewhere. So our journey would be a good story to share with our peers.

Inception point

Gardner: Given that HPE has so many different products, hardware and software, how did you find the right inception point? You have a very large inventory of potential places and ways to start your DevOps journey. What turned out to be a good place to start that then allowed you to expand successfully?

Chen: We believed the easiest way was to start with some of our home-grown applications. We chose home-grown applications because it’s a little bit easier, simply because you don’t have the same scale of vendor/ISV dependencies to work with.

We decided to pick a handful of applications. Most of them are very complicated, and some of them are very important. A good example is the OneNote application. This is the support automation application, which touches every device, every part that we ship to our customers. That application is essentially the collection point for performance data for all the devices in a customer's data center -- how we monitor them and how we deal with them.

It’s what I consider a very important enterprise-scale application, and it’s mission-critical. That was one criterion: pick an application that is really complicated and most likely home-grown. The other criterion was to pick an application whose team had already been practicing some of this, was ready to do something, and really wanted to embrace the new methodology and new ideas.

The reason behind that is that we didn't want to set up a separate DevOps team to pair with the existing developer team. Ideally, we wanted the existing developer team to go through that transformation themselves -- to become the transformation drivers who take the old way of working into the new DevOps way. So that was the second criterion: the team, the people themselves, had to be motivated and ready for a change.

The third one was the application's scale and impact. We understood the risk and we understood the implications. The better your understanding, the easier it is to get buy-in from your business partners and your executive team. Those are the criteria we chose for going into DevOps.

Gardner: I'm really curious. Given this super important application for HPE, how is performance measured and managed across all of these deployments, applying DevOps methodology, and getting that team buy-in? What did it earn you? What’s the payoff? What did you see that made DevOps worthwhile?
Chen: With DevOps we captured three dimensions. One is collaboration. What I mean by collaboration is taking operations into development and taking development into operations, so the operations and development teams are working side-by-side. That’s the new relationship of the collaboration.

The traditional way was for the developer to finish the product and then throw it over the wall to the operations guys. Then, when something goes wrong, everyone starts freaking out, asking who owns the issue.

The new way is very close collaboration between the development team and operations. From the get-go, when we start to design a product or software application, we already have the people who will run the operation and the support on the team, so they understand the risks and the operational implications. So that’s one dimension, collaboration.

The second piece is about automation. You want to figure out a way to automate end-to-end. That’s very important. You asked a very good question about how to get buy-in from business partners, who ask, "You're going to do CICD -- what is the implication if something goes wrong?"

Powerful weapon

Automation has become a very powerful weapon, because when you automate the development and deployment process, it becomes much easier to roll back when something goes wrong. Because each deployment is a small incremental change, the impact is much easier to understand. We believe the downtime is much less than with the normal way of doing the process. That’s the second dimension, automation.
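The transcript doesn't include code, but the rollback idea Chen describes can be sketched in a few lines. This is a hypothetical illustration, not HPE's actual pipeline: each deployment is a small, versioned increment, so a failed health check can automatically restore the previous known-good version. All class and version names here are invented for the example.

```python
class DeployError(Exception):
    pass

class Pipeline:
    """Toy deployment pipeline: small increments, automatic rollback."""

    def __init__(self):
        self.history = []          # stack of successfully deployed versions
        self.current = None        # version currently running

    def deploy(self, version, healthy=True):
        """Deploy one small increment; roll back if the health check fails."""
        previous = self.history[-1] if self.history else None
        self._release(version)
        if not healthy:            # simulated post-deploy health check
            if previous is not None:
                self._release(previous)   # small change -> cheap rollback
            raise DeployError(f"{version} failed, rolled back to {previous}")
        self.history.append(version)
        return version

    def _release(self, version):
        self.current = version     # stand-in for the real release step

pipe = Pipeline()
pipe.deploy("v1.0")
pipe.deploy("v1.1")
try:
    pipe.deploy("v1.2", healthy=False)   # bad release gets rolled back
except DeployError:
    pass
```

After the failed deploy, `pipe.current` is back to "v1.1" -- the point Chen makes is that when each change is this small, understanding and undoing it is cheap.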

The third one is codification. Codification means that everything is code. The old way was to define your infrastructure and have someone manually put all the infrastructure together to run an application. Those times are over.

Full DevOps means you are able to drive everything from code: the configuration is easy to express, your infrastructure is provisioned based on that code, and it's ready to run an application.
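A minimal sketch of this "everything is code" idea, with resource names and the provisioner invented for illustration: the desired infrastructure is declared as plain data, and a function computes the actions needed to converge actual state toward it -- the same pattern real infrastructure-as-code tools follow.

```python
# Desired infrastructure, declared as data (names are hypothetical).
desired = {
    "web-01": {"cpus": 4, "ram_gb": 16},
    "db-01":  {"cpus": 8, "ram_gb": 64},
}

def provision(desired, actual):
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if actual.get(name) != spec:          # missing or mis-sized resource
            actions.append(("create_or_update", name, spec))
    for name in actual:
        if name not in desired:               # resource no longer declared
            actions.append(("destroy", name))
    return actions

# One undersized server exists; the plan resizes it and creates the database.
actual = {"web-01": {"cpus": 2, "ram_gb": 8}}
plan = provision(desired, actual)
```

Because the declaration is code, it can be versioned, reviewed, and re-applied -- which is what makes the manual assembly Chen mentions obsolete.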

So DevOps consists of those three things. That's truly how we talk about and understand DevOps: collaboration, codification, and automation.

Having said that, there are other organizational implications, and those have a very profound impact on our IT organization. That's how we understand DevOps, and as we use this methodology, our thinking is to take it to the stakeholders and customers and show them the benefit we're able to deliver for them. That's why we got buy-in and support from the get-go.

Gardner: Is speed the number one reason to do this, or is it quality or security? What is the biggest reward when you do this well?

Chen: Speed is probably the number one reason to go to DevOps. Of course the quality, high availability, and agility have significantly improved. But I would really focus on speed, because if you ask any business owner, business partner, or your customer today, the number one challenge for them is speed.

Early in our conversation, I mentioned automation. Traditionally, we did a release every six months, because it's so complicated, as you can imagine. We have products spanning storage, network, and server -- hardware and software. If we make platform changes, covering all those customers, devices, and products required pretty much six months.

With a six-month cycle, products shipped to customers before the next release would not have the best support from the latest automation capability.

Our service quality has a significant impact on customer satisfaction. Now we're talking about a release every two weeks. That's a significant improvement, and you can see customers are happy, because with every product release they have the automation capability within two weeks. They immediately get the best monitoring and proactive-care capability that we provide.

Bottom line

Gardner: I should think that also has an impact on the bottom line, because you're able to bring new features and functions to market, add more value to the products, and then charge more money for them. So it allows you to get the value of your organization into your bottom line faster as well.

Chen: Yes. For example, we want any product or service that has a call-home capability to get support automation and proactive care within two weeks. It's a huge advantage for us, because the competition typically takes a few days to a couple of weeks just to install everything.

Those two weeks are probably the optimal timing for this kind of service scheme. Could we push it to one week, or a few days? It's possible, but the return on investment may not be there on day one.

For every application, when you make the call about DevOps, it's not about doing it as fast as possible. You want to examine your business case and determine, "What's the sweet spot for us with DevOps?" In this particular case, based on customer and business-partner feedback, we believe two weeks is the right spot for us. That's significantly better than what we used to have, every six months.
Gardner: I'm afraid we will have to leave it there. We've been learning how HPE’s internal engineering organization explores the values and benefits of DevOps methods and practices. I'd like to thank our guest, James Chen, Vice President of Product Development and Engineering at HPE. Thank you, James.

Chen: Thank you so much for having me.

Gardner: And I'd also like to extend a big thank you to our audience for joining this special DevOps innovation case study discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a BriefingsDirect discussion on how HPE finds the sweet spot for continuous development and delivery of software products. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.

You may also be interested in:


  • IoT plus big data analytics translate into better services management at Auckland Transport
  • Extreme Apps approach to analysis makes on-site retail experience king again
  • How New York Genome Center Manages the Massive Data Generated from DNA Sequencing
  • Microsoft sets stage for an automated hybrid cloud future with Azure Stack Technical Preview
  • The Open Group president, Steve Nunn, on the inaugural TOGAF User Group and new role of EA in business transformation
  • Redmonk analysts on best navigating the tricky path to DevOps adoption
  • DevOps by design--A practical guide to effectively ushering DevOps into any organization
  • Need for Fast Analytics in Healthcare Spurs Sogeti Converged Solutions Partnership Model
  • HPE's composable infrastructure sets stage for hybrid market brokering role
  • Nottingham Trent University Elevates Big Data's role to Improving Student Retention in Higher Education
  • Forrester analyst Kurt Bittner on the inevitability of DevOps
  • Agile on fire: IT enters the new era of 'continuous' everything