
Wednesday, July 03, 2019

Using AI to Solve Data and IT Complexity -- And Better Enable AI

A discussion on how the rising tidal wave of data must be better managed, and how new tools are emerging to bring artificial intelligence to the rescue.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Innovator podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest in IT innovation.

Our next discussion focuses on why the rising tidal wave of data must be better managed, and how new tools are emerging to bring artificial intelligence (AI) to the rescue. Stay with us now as we learn how the latest AI innovations improve both data and services management across a cloud deployment continuum -- and in doing so set up an even more powerful way for businesses to exploit AI.

To learn how AI will help conquer complexity to allow for higher abstractions of benefits from across all sorts of data for better analysis, please join me in welcoming Rebecca Lewington, Senior Manager of Innovation Marketing at Hewlett Packard Enterprise (HPE). Welcome to BriefingsDirect, Rebecca.

Rebecca Lewington: Hi, Dana. It’s very nice to talk to you.

Gardner: We have been talking about massive amounts of data for quite some time. What’s new about data buildup that requires us to look to AI for help?

Lewington: Partly it is the sheer amount of data. IDC’s Data Age Study predicts the global datasphere will be 175 zettabytes by 2025, which is a rather large number -- a zettabyte is a 1 followed by 21 zeros. But we have always been in an era of exploding data.

Yet, things are different. One, it’s not just the amount of data; it’s the number of sources the data comes from. We are adding in things like mobile devices, and we are connecting factories’ operational technologies to information technology (IT). There are more and more sources.

Also, the time we have to do something with that data is shrinking to the point where we expect everything to be real-time or you are going to make a bad decision. An autonomous car, for example, might do something bad. Or we are going to miss a market or competitive intelligence opportunity.

So it’s not just the amount of data -- but what you need to do with it that is challenging.

Gardner: We are also at a time when AI and machine learning (ML) technologies have matured. We can begin to turn them toward the data issue to better exploit the data. What is new and interesting about AI and ML that makes them more applicable to this data complexity issue?

Data gets smarter with AI

Lewington: A lot of the key algorithms for AI were actually invented long ago, in the 1950s, but at that time computers were hopelessly underpowered relative to what we have today, so it wasn’t possible to harness those algorithms.

For example, you can train a deep-learning neural net to recognize pictures of kittens. To do that, you need to run millions of images to train a working model you can deploy. That’s a huge, computationally intensive task that only became practical a few years ago. But now that we have hit that inflection point, things are just taking off.
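The training loop Lewington alludes to, show the model labeled examples and nudge its parameters until it classifies them well, can be sketched in miniature. This is purely illustrative: synthetic "images" stand in for kitten photos, and a single-neuron logistic model stands in for a deep net, since the mechanics are the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled photos: 8x8 grayscale "images" flattened
# to 64-dimensional vectors. Class 1 ("kitten") images are brighter on
# average than class 0, so this toy model can learn to separate them.
n = 400
X0 = rng.normal(0.3, 0.1, size=(n // 2, 64))
X1 = rng.normal(0.7, 0.1, size=(n // 2, 64))
X = np.vstack([X0, X1])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Logistic regression trained by gradient descent: a one-neuron "net".
w = np.zeros(64)
b = 0.0
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability
    grad_w = X.T @ (p - y) / n               # gradient of the log-loss
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w
    b -= lr * grad_b

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = float(np.mean(preds == y))
print(f"training accuracy: {accuracy:.2f}")
```

The computational point in the interview is that a real deep net repeats this kind of update across millions of images and millions of parameters, which is exactly the workload that only became practical recently.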

Gardner: We can begin to use machines to better manage data that we can then apply to machines. Does that change the definition of AI?

Lewington: The definition of AI is tricky. It’s malleable, depending on who you talk to. For some people, it’s anything that a human can do. To others, it means sophisticated techniques, like reinforcement learning and deep learning.
One useful definition is that AI is what you use when you know what the answer looks like, but not how to get there.

Traditional analytics effectively does at scale what you could do with pencil and paper. You could write the equations to decide where your data should live, depending on how quickly you need to access it.
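That pencil-and-paper, rules-based placement might look like the following sketch. The tiers and thresholds here are invented for illustration, not drawn from any HPE product; the point is that every decision is an explicit, hand-written rule.

```python
def choose_tier(accesses_per_day: int, size_gb: float) -> str:
    """Rules-based data placement: every case is written out by hand.

    Tier names and cutoffs are hypothetical, chosen only to illustrate
    the kind of equations a person could write down on paper.
    """
    if accesses_per_day >= 100:
        return "nvme"          # hot data: fastest, most expensive tier
    if accesses_per_day >= 1:
        return "ssd"           # warm data: occasional access
    if size_gb > 1000:
        return "tape"          # large, cold archives
    return "object-store"      # small, cold data

print(choose_tier(500, 10))    # frequently accessed -> hot tier
print(choose_tier(0, 5000))    # huge and untouched -> archive
```

This works at small scale, but as the next exchange notes, some problems (like recognizing a cat) have no such writable rules, and that is where the trained black box comes in.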

But with AI, it’s like the kittens example. You know what the answer looks like, it’s trivial for you to look at the photograph and say, “That is a cat in the picture.” But it’s really, really difficult to write the equations to do it. But now, it’s become relatively easy to train a black box model to do that job for you.

Gardner: Now that we are able to train the black box, how can we apply that in a practical way to the business problem that we discussed at the outset? What is it about AI now that helps better manage data? What's changed that gives us better data because we are using AI?

Lewington: It’s a circular thing. The heart of what makes AI work is good data; the right data, in the right place, with the right properties you can use to train a model, which you can then feed new data into to get results that you couldn’t get otherwise.

Now, there are many ways you can apply that. You can apply it to the trivial case of the cat we just talked about. You can apply it to helping a surgeon review many more MRIs, for example, by allowing him to focus on the few that are borderline, and to do the mundane stuff for him.

But, one of the other things you can do with it is use it to manipulate the data itself. So we are using AI to make the data better -- to make AI better.

Gardner: Not only is it circular, and potentially highly reinforcing, but when we apply this to operations in IT -- particularly complexity in hybrid cloud, multicloud, and hybrid IT -- we get an additional benefit. You can make the IT systems more powerful when it comes to the application of that circular capability -- of making better AI and better data management.

AI scales data upward and outward

Lewington: Oh, absolutely. I think the key word here is scale. When you think about data -- and all of the places it can be, all the formats it can be in -- you could do it yourself. If you want to do a particular task, you could do what has traditionally been done. You can say, “Well, I need to import the data from here to here and to spin up these clusters and install these applications.” Those are all things you could do manually, and you can do them for one-off things.

But once you get to a certain scale, you need to do them hundreds of times, thousands of times, even millions of times. And you don’t have the humans to do it. It’s ridiculous. So AI gives you a way to augment the humans you do have, to take the mundane stuff away, so they can get straight to what they want to do, which is coming up with an answer instead of spending weeks and months preparing to start to work out the answer.

Gardner: So AI directed at IT, what some people call AIOps, could be an accelerant to this circular, advantageous relationship between AI and data? And is that part of what you are doing within the innovation and research work at HPE?

Lewington: That’s true, absolutely. The mission of Hewlett Packard Labs in this space is to assist the rest of the company to create more powerful, more flexible, more secure, and more efficient computing and data architectures. And for us in Labs, this tends to be a fairly specific series of research projects that feed into the bigger picture.

For example, we are now doing the Deep Learning Cookbook, which allows customers to find out ahead of time exactly what kind of hardware and software they are going to need to get to a desired outcome. We are automating the experimenting process, if you will.

And, as we talked about earlier, there is the shift to the edge. As we make more and more decisions -- and gain more insights there, to where the data is created -- there is a growing need to deploy AI at the edge. That means you need a data strategy to get the data in the right place together with the AI algorithm, at the edge. That’s because there often isn’t time to move that data into the cloud before making a decision and waiting for the required action to return.

Once you begin doing that, once you start moving from a few clouds to thousands and millions of endpoints, how do you handle multiple deployments? How do you maintain security and data integrity across all of those devices? As researchers, we aim to answer exactly those questions.

And, further out, we are looking to move the learning phase itself to the edge, to do what we call swarm learning, where devices learn from their environment and from each other, using a distributed model that doesn’t use a central cloud at all.
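The core idea behind swarm learning, devices improving a shared model by exchanging parameters with peers rather than shipping raw data to a central cloud, can be shown with a toy sketch. Real swarm learning coordinates the exchange far more carefully; here, five simulated edge nodes each fit a local model on private data and then simply average their weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each edge device fits a local linear model y = w*x on its own data and
# never shares that data. The true relationship is y = 3x plus noise.
def local_fit(n_points: int) -> float:
    x = rng.uniform(-1, 1, n_points)
    y = 3.0 * x + rng.normal(0, 0.1, n_points)
    return float(np.sum(x * y) / np.sum(x * x))   # least-squares slope

weights = [local_fit(20) for _ in range(5)]        # five edge devices

# One "swarm" round: every device adopts the mean of its own and its
# peers' weights -- parameters travel, raw data stays at the edge.
swarm_weight = float(np.mean(weights))
print(f"per-device estimates: {[round(w, 2) for w in weights]}")
print(f"after peer averaging: {swarm_weight:.2f}")
```

The averaged model is typically better than any single device's estimate, which is the payoff of learning collectively without central aggregation of data.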

Gardner: Rebecca, given your title is Innovation Marketing Lead, is there something about the very nature of innovation that you have come to learn personally that’s different than what you expected? How has innovation itself changed in the past several years?

Innovation takes time and space 

Lewington: I began my career as a mechanical engineer. For many years, I was offended by the term innovation process, because that’s not how innovation works. You give people the space and the time, and ideas appear organically. You can’t have a process for having ideas. You can have a process to put those ideas into reality, to weed out the ones that aren’t going to succeed, and to promote the ones that work.
But the term innovation process to me is an oxymoron. And that’s the beautiful thing about Hewlett Packard Labs. It was set up to give people the space where they can work on things that just seem like a good idea when they pop up in their heads. They can work on these and figure out which ones will be of use to the broader organization -- and then it’s full steam ahead.

Gardner: It seems to me that the relationship between infrastructure and AI has changed. It wasn’t that long ago when we thought of business intelligence (BI) as an application -- above the infrastructure. But the way you are describing the requirements of management in an edge environment -- of being able to harness complexity across multiple clouds and the edge -- this is much more of a function of the capability of the infrastructure, too. Is that how you are seeing it, that only a supplier that’s deep in its infrastructure roots can solve these problems? This is not a bolt-on benefit.

Lewington: I wouldn’t say it’s impossible as a bolt-on; it’s impossible to do efficiently and securely as a bolt-on. One of the problems with AI is that you are using a black box; you don’t know how it works. There were a number of news stories recently about AIs becoming corrupted, biased, and even racist, for example. Those kinds of problems are going to become more common.

And so you need to know that your systems maintain their integrity and are not able to be breached by bad actors. If you are just working on the very top layers of the software, it’s going to be very difficult to attest that what’s underneath has its integrity unviolated.

If you are someone like HPE, which has its fingers in lots of pies, either directly or through our partners, it’s easier to make a more efficient solution.

Gardner: Is it fair to say that AI should be a new core competency, for not only data scientists and IT operators, but pretty much anybody in business? It seems to me this is an essential core competency across the board.

Lewington: I think that's true. Think of AI as another layer of tools that, as we go forward, becomes increasingly sophisticated. We will add more and more tools to our AI toolbox. And this is one set of tools that you just cannot afford not to have.

Gardner: Rebecca, it seems to me that there is virtually nothing within an enterprise that won't be impacted in one way or another by AI.

Lewington: I think that’s true. Anywhere in our lives where there is an equation, there could be AI. There is so much data coming from so many sources. Many things are now overwhelmed by the amount of data, even if it’s just as mundane as deciding what to read in the morning or what route to take to work, let alone how to manage my enterprise IT infrastructure. All things that are rule-based can be made more powerful, more flexible, and more responsive using AI.

Gardner: Returning to the circular nature of using AI to make more data available for AI -- and recognizing that the IT infrastructure is a big part of that -- what are you doing in your research and development to make data services available and secure? Is there a relationship between things like HPE OneView and HPE OneSphere and AI when it comes to efficiency and security at scale?

Let the system deal with IT 

Lewington: Those tools historically have been rules-based. We know that if a storage disk gets to a certain percentage full, we need to spin up another disk -- those kinds of things. But to scale flexibly, at some point that rules-based approach becomes unworkable. You want to have the system look after itself, to identify its own problems and deal with them.

Including AI techniques in things like HPE InfoSight, HPE ClearPass, and user behavior analytics software on the HPE Aruba side allows the AI algorithms to make those tools more powerful and more efficient.

You can think of AI here as another class of analytics tools. It’s not magic, it’s just a different and better way of doing IT analytics. The AI lets you harness more difficult datasets, more complicated datasets, and more distributed datasets.

Gardner: If I’m an IT operator in a global 2000 enterprise, and I’m using analytics to help run my IT systems, what should I be thinking about differently to begin using AI -- rather than just analytics alone -- to do my job better?

Lewington: If you are that person, you don’t really want to think about the AI. You don’t want the AI to intrude upon your consciousness. You just want the tools to do your job.

For example, I may have 1,000 people starting a factory in Azerbaijan, or somewhere, and I need to provision for all of that. I want to be able to put on my headset and say, “Hey, computer, set up all the stuff I need in Azerbaijan.” You don’t want to think about what’s under the hood. Our job is to make those tools invisible and powerful.

Composable, invisible, and insightful 

Gardner: That sounds a lot like composability. Is that another tangent that HPE is working on that aligns well with AI?

Lewington: It would be difficult to have AI be part of the fabric of an enterprise without composability, and without extending composability into more dimensions. It’s not just about being able to define the amount of storage, compute, and networking with a line of code; it’s about being able to define the amount of memory, where the data is, where the data should be, and what format the data should be in. All of those things -- from the edge to the cloud -- need to be dimensions of composability.
You want everything to work behind the scenes for you in the best way with the quickest results, with the least energy, and in the most cost-effective way possible. That’s what we want to achieve -- invisible infrastructure.

Gardner: We have been speaking at a fairly abstract level, but let’s look to some examples to illustrate what we’re getting at when we think about such composability sophistication.

Do you have any concrete examples or use cases within HPE that illustrate the business practicality of what we’ve been talking about?

Lewington: Yes, we have helped a tremendous number of customers either get started with AI in their operations or move from pilot to volume use. A couple of them stand out. One particular manufacturing company makes electronic components. They needed to improve the yields in their production lines, and they didn’t know how to attack the problem. We were able to partner with them to use such things as vision systems and photographs from their production tools to identify defects that only could be picked up by a human if they had a whole lot of humans watching everything all of the time.

This gets back to the notion of augmenting human capabilities. Their machines produce terabytes of data every day, and it just gets thrown away. They don’t know what to do with it.

We began running research projects with them using some very sophisticated techniques, visual autoencoders, that allow you, without a labeled training set, to characterize a production line that is performing well versus one that is on the verge of moving away from the sweet spot. Those techniques can fingerprint a good line and also identify when a line goes just slightly bad, in cases where a human looking at the line would think it was working perfectly.

This takes the idea of predictive maintenance further, into what we call prescriptive maintenance, where we have a much more sophisticated view of what represents a good line and what represents a bad line. Those are a couple of examples from manufacturing that I think are relevant.
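The autoencoder idea Lewington describes can be sketched with a toy model: train it to compress and reconstruct known-good sensor data, then flag anything that no longer reconstructs cleanly. The customer case used deep visual autoencoders on images; this linear NumPy sketch, with made-up sensor data, only shows the principle.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sensor readings from a known-good production line: 10 channels that
# mostly move together (one underlying process factor plus small noise).
factor = rng.normal(0, 1, (500, 1))
loadings = rng.normal(0, 1, (1, 10))
good = factor @ loadings + rng.normal(0, 0.05, (500, 10))

# "Train" a linear autoencoder: a single principal component stands in
# for a deep visual autoencoder. The latent code is a compressed
# fingerprint of what a healthy line looks like.
mu = good.mean(axis=0)
_, _, vt = np.linalg.svd(good - mu, full_matrices=False)
basis = vt[:1]                         # 1-D latent space

def reconstruction_error(x: np.ndarray) -> float:
    code = (x - mu) @ basis.T          # encode
    recon = code @ basis + mu          # decode
    return float(np.mean((x - recon) ** 2))

# Healthy readings reconstruct well; use the worst error seen on good
# data as the alarm threshold -- no labeled "bad" examples needed.
threshold = max(reconstruction_error(x) for x in good)

drifted = good[0] + rng.normal(0, 0.5, 10)   # subtle process drift
print(reconstruction_error(good[0]) <= threshold)   # healthy sample
print(reconstruction_error(drifted) > threshold)    # drift gets flagged
```

The drifted sample still looks unremarkable channel by channel, which mirrors the point about a human seeing a line that appears to work perfectly while the model's fingerprint says otherwise.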

Gardner: If I am an IT strategist, a Chief Information Officer (CIO) or a Chief Technology Officer (CTO), for example, and I’m looking at what HPE is doing -- perhaps at the HPE Discover conference -- where should I focus my attention if I want to become better at using AI, even if it’s invisible? How can I become more capable as an organization to enable AI to become a bigger part of what we do as a company?

The new company man is AI

Lewington: For CIOs, their most important customers these days may be developers and increasingly data scientists, who are basically developers working with training models as opposed to programs and code. They don’t want to have to think about where that data is coming from and what it’s running on. They just want to be able to experiment, to put together frameworks that turn data into insights.

It’s very much like the programming world, where we’ve gradually abstracted things from bare-metal, to virtual machines, to containers, and now to the emerging paradigm of serverless in some of the walled-garden public clouds. Now, you want to do the same thing for that data scientist, in an analogous way.

Today, it’s a lot of heavy lifting, getting these things ready. It’s very difficult for a data scientist to experiment. They know what they want. They ask for it, but it takes weeks and months to set up a system so they can do that one experiment. Then they find it doesn’t work and move on to do something different. And that requires a complete re-spin of what’s under the hood.

Now, using things like software from the recent HPE BlueData acquisition, we can make all of that go away. And so the CIO’s job becomes much simpler because they can provide their customers the tools they need to get their work done without them calling up every 10 seconds and saying, “I need a cluster, I need a cluster, I need a cluster.”

That’s what a CIO should be looking for, a partner that can help them abstract complexity away, get it done at scale, and in a way that they can both afford and that takes the risk out. This is complicated, it’s daunting, and the field is changing so fast.

Gardner: So, in a nutshell, they need to look to the innovation that organizations like HPE are doing in order to then promulgate more innovation themselves within their own organization. It’s an interesting time.

Containers contend for the future 

Lewington: Yes, that’s very well put. Because it’s changing so fast they don’t just want a partner who has the stuff they need today, even if they don’t necessarily know what they need today. They want to know that the partner they are working with is working on what they are going to need five to 10 years down the line -- and thinking even further out. So I think that’s one of the things that we bring to the table that others can’t.

Gardner: Can you give us a hint as to what some of those innovations four or five years out might be? How should we not limit ourselves in our thinking when it comes to that circular relationship between AI, data, and innovation?

Lewington: It was worth coming to HPE Discover in June, because we talked about some exciting new things on many different fronts. The push toward increasing automation and abstraction is just going to accelerate.

For example, containers still have fairly low penetration across enterprises -- about 10 percent adoption today -- because they are not the simplest thing in the world. But we are going to get to the point where using containers feels no more complicated than bare metal does today, and that’s really going to help simplify whole data pipelines.

Beyond that, the elephant in the room for AI is that model complexity is growing incredibly fast. The compute requirements are going up, something like 10 times faster than Moore’s Law, even as Moore’s Law is slowing down.

We are already seeing an AI compute gap between what we can achieve and what we need to achieve -- and it’s not just compute, it’s also energy. The world’s energy supply can only grow slowly, but exponentially more data means exponentially more compute and exponentially more energy, and that’s just not sustainable.

So we are also working on something called Emergent Computing, a super-energy-efficient architecture that moves data wherever it needs to be -- or better, doesn’t move the data at all and instead brings the compute to the data. That will help us close that gap.
And that includes some very exciting new accelerator technologies: special-purpose compute engines designed specifically for certain AI algorithms. Beyond regular transistor logic, we are using analog computing, and even optical computing, to do some of these tasks hundreds of times more efficiently, with hundreds of times less energy. This is all very exciting stuff, for a little further out in the future.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring how the rising tidal wave of data must be better managed and how new tools are emerging to bring AI to the rescue. And we’ve heard how new AI approaches and tools create a virtuous adoption pattern between better data and better analytics, and therefore better business outcomes.

So please join me in thanking our guest, Rebecca Lewington, Senior Manager for Innovation Marketing at HPE. Thank you so much, Rebecca.

Lewington: Thanks Dana, this was fun.

Gardner: And thank you as well to our audience for joining this BriefingsDirect Voice of the Innovator interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions. Thanks again for listening, please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.


Saturday, June 15, 2019

How Automation and Intelligence Blend with Design Innovation to Enhance the Experience of Modern IT

Transcript of a discussion on how advances in design enhance the total experience for IT operators, making usability a key ingredient of modern hybrid IT systems.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Innovator podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest in IT innovations.

Our next discussion focuses on how advances in design enhance the total experience for IT operators. Stay with us now as we hear about the general philosophy, modernization of design, and how new discrete best practices are making usability a key ingredient of modern hybrid IT systems.

To learn how, please join me now in welcoming Bryan Jacquot, Vice President and Chief Design Officer at Hewlett Packard Enterprise (HPE). Welcome, Bryan.

Bryan Jacquot: Thank you, Dana. It’s my pleasure to be here.

Gardner: Bryan, what are the drivers requiring change and innovation when it comes to the design of IT systems?

Design for speed

Jacquot: If I go back 15 to 20 years, people were deeply steeped in their given technology, whether it happened to be servers, networking, or storage. They would spend a lot of time in training, get certified, and have a specialized role.

What we are seeing much more frequently now is, number one, that the skill set of people in IT is rising to higher levels of the infrastructure. We are not so much concerned with the lower-level details. Instead, it’s about solving business needs and helping customers, usually in the lines of business (LOBs). IT must help their customers do things faster, because the pace of change in every business today continues to accelerate.

With design, we are attempting to understand and embrace our customers where they are, but also, we want to help enable them to achieve their business needs and deliver the IT services that their customers are requiring in a more efficient, agile, and responsive manner.

Gardner: Bryan, because the addressable audience is expanding beyond pure IT administrators, what needs to happen to design now that we have more people involved?

Know your user 

Jacquot: The first thing you have to do is know who your user is. If you don’t know that, then any design work is going to fall short. And the systems that IT companies deliver are now designed not only for IT but also for different constituencies within their businesses. It might be developers in a LOB trying to create the next service or business application that enables their business to be successful.

Again, if we look back, the CIO or leaders in IT in the past would have chosen a given platform, whether a database to standardize on or an application server. Nowadays, that’s not what happens. Instead, the LOBs have choices. If they want to consume an open source project or use a service that someone else created, they have that choice.
Now IT is in the position of having to provide a service that is on par, able to move quickly and efficiently, and meets the needs of developers and LOBs. And that’s why it’s so important for design to expand the users we are targeting.

IT can no longer just be the people who used to do the maintaining of IT infrastructure; it now includes a secondary set of users who are consuming the resources and ultimately becoming the decision-makers.

In fact, recent IDC research talks about IT budgets and who controls more of the budget. In the last year or two, the pendulum has swung to the point where the LOBs are controlling the majority of the spend, even if IT is ultimately the one procuring resources or assets. The decision-making has shifted over to LOBs in many companies. And so, it becomes more and more imperative for IT to have solutions in place to meet those needs.
If we are going to serve that market as designers, we have to be aware of that, know who the ultimate users are, and make sure they are satisfied and able to do what they have to do to deliver what their businesses need.

Gardner: It wasn’t that long ago that IT was only competing with the previous version of whatever it is that they provided to their end users. But now, IT competes with the cloud offerings, Software as a service (SaaS) offerings, and open source solutions. You could also say that IT competes with the experience that consumers get in their homes, and so there are heightened expectations on usability.

Jacquot: Yes, it really has raised expectations, and that’s a good thing. IT is now looking around and saying, “Okay, for the LOBs we used to serve, it used to be, ‘Here is what you get, and don’t throw a fit.’” But that doesn’t really work anymore. Now IT has to provide business value to those LOBs, or they will vote with their dollars and choose something else.

Just as we’ve seen in the consumer space -- where things are getting more-and-more centered around the experience of the service -- that same thinking is moving into the enterprise. It raises what the enterprise traditionally does to a new level of the experience of what developers and LOBs really need. But the same could apply to researchers or other sets of users. These are the people trying to find the next cure for Alzheimer’s or enabling genetic testing of new medicines. These are not IT people -- they just need a simple infrastructure experience to run their experiments.

To do that, they are going to choose a service that enables them to be as quick and efficient with their research as they possibly can be. It doesn’t matter to them whether it’s in a big public cloud or in local IT, as long as they can do it with the least effort on their part. That’s a trend we are certainly seeing: IT has to deliver services that meet the needs of those users wherever they are.

Gardner: Bryan, tell us about yourself. What does it take in terms of background, skills, and general understanding to be a Chief Design Officer in this new day and age, given these new requirements?

Drawn by design, to design 

Jacquot: There is a wide variety of backgrounds for people who have a similar title and role. In my particular case, I began as a software engineer; my undergraduate degree is in computer science. I began at HP working on the UNIX operating system (OS), down in the kernel of all things, about as far as you can get from where I am now.

One of the first projects I worked on at HP was deployment and OS installation mechanisms. We had gotten a bunch of errors and warnings during that process. I was just a kid out of college; I didn’t know what was going on. I kept asking questions: “Why do we have so many errors and warnings?” They were like, “Oh, that’s just the way it works.” I was like, “Well, why is that okay? Why are we doing it that way?”

The next OS release was the first one in ages that had no errors and warnings. I didn’t realize it at the time, but that’s where I started this passion for doing the right thing for the user and making sure that a user is able to understand what’s going on and how to be successful with their systems.

That progressed through the years, and I ended up continuing my passion for delivering on what our users’ needs are and how we can best enable them. Basically, that means not trying to jump too quickly to a solution, but first making sure that we understand the problems our users have. Then we can focus on innovating to deliver higher value to them, with a better understanding of what they need.

At that point, then I went back and earned my graduate degree in human-computer interaction with a focus on psychology, understanding human factors and how people think. That includes understanding how they use their working memory and how they process information, so we can build solutions that best align to how people naturally operate.

That’s one of the key things I found from my original background and then the most recent training. The best solutions we can build are the ones that fit as seamlessly as possible into the user’s hands, whether they are working with something digitally or physically.

For me, that was the combination that led to where I am now and being able to have successful delivery of various products and solutions -- offerings that are really focused on meeting the customers’ needs.

Agility arrives with speed 

Gardner: As an advocate for the user, and broadening the definition of who that user is when it comes to core IT services, what are the top challenges that those users now have? Are we dealing with complexity, with interfaces, and with logic? All the above? What are the latest problems that we are trying to solve?

Jacquot: It certainly can be both logic and complexity. Systems are getting more complex.

But, number one, from the customers I have talked to, the consistent overriding theme is they are under threat of being disrupted by somebody. And if they are not being disrupted by someone else, they are trying to disrupt themselves to prevent someone else from disrupting them. This is the case across all customers and across every industry.

And so, if they are in the mode where they have to be constantly pushing themselves -- pushing the boundaries and having to move fast -- then the overarching themes I am hearing about are speed and agility. That means removing as much work from what IT has to do as possible. Then they can focus their time and energy on the business problems, not on the IT scaffolding, foundation, and structure to support what they are trying to do.

Whether it’s in hospitals, where they are trying to deliver better patient care using medical records, or it’s in the finance industry, where they are trying to get the next trade done faster -- whatever the work happens to be, the focus is always about speed and agility.
And so, anything we can build for those users -- an application or user experience (UX) -- that helps them be more efficient is what drives the greatest degree of success.

Gardner: Given that design emphasis, it sounds a lot like the design of applications. But these aren’t necessarily applications. These are systems, platforms, and support products that may have even come together from mergers and acquisitions.

What’s the difference between designing an application, as a software developer, and designing an IT system or platform that often can come from the integration of multiple products?

Design to meet users’ needs 

Jacquot: I would argue that in the design process, the techniques, capabilities, and skills needed to solve the problems are actually the same, regardless of the type of product. The things that tend to change are who the users are and what they need. Those are the two key variables in the equation that are going to vary.

If you look at many of the startups out there today, they are delivering SaaS capabilities, whether it's Uber transforming transportation or Airbnb remaking the lodging experience to be simpler, easier, and more flexible. They are completely software based.

But there are also startups like Square, which is making business transactions easier for other startups. They also ship hardware -- the card and chip readers used to conduct transactions.

At the end of the day, the things that we build are just a byproduct of, “Okay, we have an understanding of the user. We know what we need to build to make them successful. Let’s figure out the right widget or gadget to meet that need.”

That can be a hardware system, like HPE Synergy, where we identified a need to be more flexible to compose and recompose IT resources on-demand. That platform didn’t exist two and a half years ago. If we could have done it only with software, we would have, but the software needed a new hardware platform to run on, so we created both.

Looking under the covers of Synergy, the HPE OneView platform and the Composer Card are what actually drive a lot of the innovation and make composability possible, and they are based on software. These are all good examples of where we identified the business need to make users more efficient. They no longer have to wait weeks or months to get access to a resource; with HPE Synergy they can access and consume those resources immediately. That's an example of an integrated system we developed in order to deliver on a customer need.

Gardner: A lot of what goes on with composability and contextually aware applications nowadays uses data to develop inference, to anticipate the needs of a user, and provide them with the right information, not overload, so they can innovate and be creative.

How do you create a proper balance between context and overload? It seems to me that’s a very difficult sweet spot to get to.

Getting to know you, all about you

Jacquot: It definitely is. This is a challenge we have been attempting to address in my group for years. How do you present just the right amount of data without it becoming overwhelming? That's actually a really hard problem, because our systems are incredibly complex and carry a lot of information. Knowing exactly what a given user is going to need at any point in time -- and not giving them anything more -- is difficult.

As users look at screens, if you put too much information up there, they can get overloaded. The visual search time they spend finding the information they care about grows, creating more chances of making an error.

Striking the right balance comes down to a couple of things. Number one is an initiative folks in my group have begun driving that we call Know Me, which means the system knows the user. What I mean is not just that we understand the user, but that when a user accesses our system, the system knows who they are; it knows them.

So, it knows the things they tend to use more often. It knows the environment they have, the scale they are operating at, and the depth of information they tend to go to. Using that, along with machine learning (ML), to enhance the information we provide them -- to make their experience richer -- is the thing to pursue to make our systems even better.

And again, it’s not just knowing who they are. In the background, when we were designing the system, it’s more than just taking their preferences into account. I am talking about when they log in, the system knows it was “Dana”, for example, that once logged in. It knows that these are the things that are important to Dana, and it makes that experience richer because of that background and information we have.

Gardner: You have been doing this for a long time, and you have seen a lot of the psychology around innovation. But what have you personally learned about innovation? How do you even define innovation? It might be different than most other people.

Jacquot: Yes, it might be. In the places where I have seen innovation the most, it is not like just having an epiphany, where all of a sudden I have the answer, it's there in front of me, and we just need to go build it. I wish that were the case, but that doesn't happen for me.

For me, it requires taking the time to understand the customer very well, as I mentioned earlier -- to the point of being able to empathize with them, where the pain they experience, or the joy they experience, becomes something that I feel as well.

If you look at the definition of empathy, that’s what it means. It’s not just a fancy word of being empathetic and understanding. But it’s actually feeling the pain and the joy of the person you are empathizing with.

Once that is established, then comes the creativity, with the ability to explore ideas, try things, throw them out, and try again. You can start down that path to share ideas with your prospective users and get feedback on it.

First the mess, then the masterpiece 

I don’t get it right the first time. In fact, I expect to get a bunch of this wrong before I get it right.

If you were to do a Google search on "design" or "design thinking" and look at the pictures that come up, a lot of them look very orderly and very orthodox. Depending on which one you see, you ask some initial questions, then do ideation and prototyping, synthesis, gathering feedback, and so on.

But there is one thing that all those pictures miss; and that is as you are going through this process, and you get a better understanding, you take turns that you didn’t expect. You have to be willing to take those turns to get to the nugget of what’s possible, to get to the core of the potential of a solution you are innovating. So, it can get messy.

We don't go in a straight line. It's a curvy, squiggly line all over the place. We start by finding good places where things are resonating, and we continue to refine and iterate until we get to a foundation. Then we go build and deliver on that -- and then the next squiggly, messy area starts up again, in a continuous cycle that never ends.

Innovation looks messy and uncoordinated. It requires a lot of listening and understanding. And then the creative side comes in. We can brainstorm and explore. I really enjoy that side of it. But it has to start with understanding, and of not trying to be too rigid. [If you’re too rigid,] I think you would miss out on the opportunities that are there, but not as easy to spot.

Gardner: I love that idea of the journey from messiness to clarity and then productivity. Do you have any examples, Bryan, that would show a use-case that demonstrates that journey? Where at HPE have you made that journey?

Jacquot: I led the design team, and I was chief technologist, for HPE OneView from its early incubation, through turning it into a product, and then releasing it to the market. There was one customer I remember specifically at a financial firm, and he was describing one of the tasks he had to do at 2 a.m., because that was the window in which he could make a change to the infrastructure without disrupting the business.

Hearing him talk through that, you know from the cognitive side that someone in that situation, low on sleep and probably not very happy about being there, is going to be more prone to making errors. Their judgment is not going to be as clear. Put these factors together, and it was a miserable experience for him.

We went back and said, “Okay, we can make the system be able to perform these operations where it doesn’t require being offline and done in the middle of the night.”

That came about through discovery of a pain point and hearing the things a customer was having to go through. As a result, we made a pretty dramatic change in the way we were addressing the issue for that particular user. But as we discussed it with other customers, we found he wasn't the only one. This scenario wasn't an anomaly; it was a pretty consistent thing.

Even though the clarity of his description made the problem easy for us to grab hold of, it was a common one. The solution ended up being one of the key capabilities we delivered as part of that platform, and it continues to expand today.

And that non-disruptive update feature was grounded in early-on research. It’s just one example of going from a squiggly to something that’s been very well-received.

Place process before products 

Another example came about differently, and with a different timescale, but it was also pretty impactful in HPE’s transformation. A few years ago, we were going through some separations, with the HPE software group and DXC, for example.

At the time, we didn't have an offering in the hyperconverged infrastructure (HCI) market. HPE knew this was a space we needed to tackle; it was a big growth opportunity. So, a small team was put together to identify ways we could provide an HCI solution. From the research we had done, we knew the better opportunity was to provide something simple that would appeal to the lines of business (LOBs) we talked about earlier.

Those LOBs might be a developer or a researcher, but they would want access to infrastructure quickly, without waiting for IT. They would want a self-service interface that enabled a simple way to get access to resources.

So, we started on this project. The senior leaders at the time gave us three months to build a solution. We rapidly took assets we had and began assembling them into a good solution. It ultimately took us five months, not three, to introduce the HPE Hyper Converged 380 platform.

Now, if you go looking, that's not a solution you are going to find today, because we ultimately acquired SimpliVity, and that's the product filling that need and that business area for us. The one we made, the 380, was a short-term activity to get into the market.

Some of the projects we engage in involve long research; we spend a couple of years understanding the users, refining, prototyping, and iterating. Others are done on a shorter scale: you've got a few months to get something into market and start getting feedback, getting customers using it. Then you start iterating and driving from there, and the HPE Hyper Converged 380 platform was a really good example of that.

And we won several innovation awards with that platform, even though it was created on a very tight timeline. Its usability was really strong, and we got good feedback on our entryway into the hyperconverged market.

Gardner: And other than awards, which are fantastic of course, what are some other metrics or indicators that you did it right? When people do design, and people use really good design, what do they get for it? How do you know it?

Get it right, true to your values 

Jacquot: Number one, it’s hugely important that if you aren’t getting business results, then something is wrong. If you design the right product and deliver it to the market, then good business results should follow.

The other part of it is we use various metrics internally. We are constantly following our products, and we can access the user success rates, the retention rates. If they are experiencing errors, we know what the ratios are. All those kinds of metrics and analytics are important, but those aren’t the number one thing that I would look at. The number one is the business results.

After a while, you can track things like brand loyalty, brand favorability, and net promoter score.

What I have been attracted to more and more recently, however, is the HPE values. We state that our mission is to improve the way people live and work. I will be honest, when we first started talking about that, I felt we were accomplishing a lot of great things but wasn't exactly sure they aligned to our mission.

Now, I look at how some of these examples are coming through, and what HPE customers are achieving -- things like helping to combat human trafficking by finding pictures of people on the dark web and matching them with missing-person cases using artificial intelligence (AI) and ML. There's also the Alzheimer's study, and how we are enabling that massive study to try to find a cure for Alzheimer's.

Those are some really positive things that are becoming metrics I care a lot about. I love seeing those stories and being part of the team and the company making those things possible. Because ultimately, if we are going to spend our time and energy designing great solutions, the outcome should affect all of those areas, including doing good for the world.

Gardner: In closing out, let’s look to the future. You mentioned AI. It seems to me that we’re trying to find another balance here in letting the machines do what they do best -- and then delegating to the people what they do best, which is what machines can’t do. Is part of what you see in your design role at HPE going down that path of finding that balance? How will AI impact the way products are used and people interact with them in the future?

Expand what’s humanly possible

Jacquot: So, the ethics of design is a really rich topic -- that's a discussion all of itself. But on the question specifically around AI and ML, there are things you can look at that show what's possible. Some have experimented with bots that watch traffic on Twitter and start responding, and they often degenerate to a pretty bad place.

The whole AI and ML field is one where ethics are involved and require putting the right guardrails in place. That’s something we as an industry and as a population are going to have to watch closely, because it’s clear that just by nature, not everything goes in a positive direction.

And I think we are trying to use it in a way that makes humans better at what we are doing and makes us more efficient.

One example I like to use is the autonomous vehicle, which is interesting to me because, as a human behind the wheel, we can see straight ahead, or we can look in the rear-view mirror or the side mirrors, but we can basically see in one direction with a little bit of peripheral vision.

We can hear in all directions, but our senses are limited. An autonomous vehicle, on the other hand, can look in 360 degrees, and it can use things like ultrasound and infrared to detect beyond what humans can see -- at night, for example, spotting animals at the side of the road.

The AI and ML in a vehicle are much more capable. They don't fatigue, they don't get distracted, they don't get angry, and they don't get road rage. So there is a lot that we as the users of those vehicles can benefit from, as long as we put the right guardrails in place -- guardrails that will actually make humans better at what they are doing and safer than when we are in charge behind the wheel.

We will use ML and AI to empower our users, whether they are developers or administrators, to see better what's happening. I think a great example of that is what we are doing with HPE InfoSight.

We ingest massive amounts of data from our systems and use it to make better predictions, to ensure things happen when they need to happen, and to make sure that if something is going wrong, it can be detected and addressed before it becomes a problem and impacts business continuity. That's just one of the ways we are using AI and ML. But the big overriding thing with AI and ML is using them to augment what we can do -- and making sure that ethics are first and foremost considered, because it's clear that, left on their own, things could go in directions we probably don't want them to.
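
The detect-before-impact idea can be illustrated with a minimal sketch. The windowed z-score rule, thresholds, and sample latency numbers below are assumptions for the example -- HPE InfoSight's actual models are far more sophisticated -- but the shape is the same: learn what normal looks like from recent telemetry, then flag readings that fall outside it.

```python
import statistics

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag points that sit far outside the recent trend, measured in
    standard deviations over a sliding window of prior readings."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent) or 1e-9  # guard a flat window
        if abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Storage latency samples (ms): steady, then one sudden spike.
latency = [12, 13, 12, 14, 13, 12, 13, 95, 13, 12]
print(detect_anomalies(latency))  # -> [7], the spike's index
```

Flagging the spike before users feel it is the point: an operator (or an automated remediation) can act while the reading is still an outlier rather than an outage.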

Gardner: I’m afraid we will have to leave it there. We have been exploring how advances in design are enhancing the total experience for IT operators and more and more people inside of enterprises. And we’ve learned how the general philosophy and some best practices are making usability a key ingredient of modern hybrid IT systems.

So please join me in thanking our guest, Bryan Jacquot, Vice President and Chief Design Officer at HPE. Thank you so much, Bryan.

Jacquot: Thank you, Dana. It’s been my pleasure.

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect Voice of the Innovator interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions.

Thanks again for listening, please pass this along to your IT community, and don’t forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how advances in design enhance the total experience for IT operators, making usability a key ingredient of modern hybrid IT systems. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.
