
Thursday, June 20, 2019

Qlik’s CTO on Why the Cloud Data Diaspora Forces Businesses to Rethink their Analytics Strategies


Transcript of a discussion on why new ways of thinking are needed if comprehensive analysis of relevant data is to become practical across multicloud and hybrid-cloud deployments.
 
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Qlik.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next business intelligence (BI) trends discussion explores the impact of dispersed data in a multicloud world.

Gaining control over far-flung and disparate data has been a decades-old struggle, but now, as hybrid and public clouds join the mix of legacy and distributed digital architectures, new ways of thinking are demanded if comprehensive analysis of relevant data is going to become practical.

Stay with us now as we examine the latest strategies for making the best use of data integration, data catalogs and indices, as well as highly portable data analytics platforms.

To learn more about closing the analysis gap between data and multiple -- and most probably changeable -- cloud models, we are now joined by Mike Potter, Chief Technology Officer (CTO) at Qlik. Welcome, Mike.

Mike Potter: Hi, I’m glad to be here.

Gardner: Mike, businesses are adopting cloud computing for very good reasons. The growth over the past decade has been strong and accelerating. What have been some of the -- if not unintentional -- complicating factors for gaining a comprehensive data analysis strategy amid this cloud computing complexity?

Potter: The biggest thing is recognizing that it’s all about where data lives and where it's being created. Obviously, historically most data have been generated on-premises. So, there is a strong pull there, but you are seeing more and more cases now where data is born in the cloud and spends its whole lifetime in the cloud.

And so now the use cases are different because you have a combination of those two worlds, on-premises and cloud. To add further complexity, data is now being born in different cloud providers. Not only are you dealing with having some data and legacy systems on-premises, but you may have to reconcile that you have data in Amazon, Google, or Microsoft.

Our whole strategy around multicloud and hybrid cloud architectures is being able to deploy Qlik where the data lives. It allows you to leave the data where it is, but gives you options so that if you need to move the data, we can support the use cases on-premises to cloud or across cloud providers.

Gardner: And you haven’t just put on the patina of cloud-first or software as a service (SaaS)-first. You have rearchitected and repositioned a lot of what your products and technologies do. Tell us about being “SaaS-first” as a strategy.

Scaling the clouds


Potter: We began our journey about 2.5 years ago, when we started converting our monolithic architecture into a microservices-based architecture. That journey struck at the core of the whole product.

Qlik’s heritage was a Windows Server architecture. We had to rethink a lot of things. As part of that we made a big bet 1.5 years ago on containerization, using Docker and Kubernetes. And that’s really paid off for us. It has put us ahead of the technology curve in many respects. When we did our initial release of our multicloud product in June 2018, I had conversations with customers who didn’t know what Kubernetes was.

One enterprise customer had an infrastructure team who had set up a process to provision Kubernetes clusters, but we were only the second vendor that required one, so we were ahead of the game quite a bit.

Gardner: How does using a managed container platform like Kubernetes help you in a multicloud world?

Potter: The single biggest thing is it allows you to scale and manage workloads at a much finer grain of detail through auto-scaling capabilities provided by orchestration environments such as Kubernetes.

More importantly it allows you to manage your costs. One of the biggest advantages of a microservices-based architecture is that you can scale up and scale down to a much finer grain. For most on-premises, server-based, monolithic architectures, customers have to buy infrastructure for peak levels of workload. We can scale up and scale down those workloads -- basically on the fly -- and give them a lot more control over their infrastructure budget. It allows them to meet the needs of their customers when they need it.
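The scale-up and scale-down behavior Potter describes is, in Kubernetes, typically handled by the Horizontal Pod Autoscaler, which sizes the replica count from an observed load metric. A minimal sketch of that replica-count calculation (this follows the ceiling formula in the Kubernetes HPA documentation; the CPU percentages and targets below are illustrative, not Qlik-specific):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA-style scaling rule:
    desired = ceil(currentReplicas * currentMetric / targetMetric)."""
    if current_replicas == 0:
        return 0
    return math.ceil(current_replicas * current_metric / target_metric)

# Peak workload: 4 replicas running at 90% CPU against a 60% target
# means the deployment grows to 6 replicas.
print(desired_replicas(4, 90.0, 60.0))   # 6
# Quiet period: 6 replicas at 15% CPU shrink back down to 2.
print(desired_replicas(6, 15.0, 60.0))   # 2
```

Because the rule is proportional in both directions, the same formula that absorbs a traffic spike also releases infrastructure when demand drops, which is the cost-control point made above.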

Gardner: Another aspect of the cloud evolution over the past decade is that no one enterprise is like any other. They have usually adopted cloud in different ways.

Has Qlik’s multicloud analytics approach come with the advantage of being able to deal with any of those different topologies, enterprise by enterprise, to help them each uniquely attain more of a total data strategy?

Potter: Yes, I think so. The thing we want to focus on is, rather than dictate the cloud strategy – often the choice of our competitors -- we want to support your cloud strategy as you need it. We recognize that a customer may not want to be on just one cloud provider. They don’t want to lock themselves in. And so we need to accommodate that.

There may be very valid reasons why they are regionalized, from a data sovereignty perspective, and we want to accommodate that.

There will always be on-premises requirements, and we want to accommodate that.

The reality is that, for quite a while, you are not going to see as much convergence around cloud providers as you are going to see around microservices architectures, containers, and the way they are managed and orchestrated.

Gardner: And there is another variable in the mix over the next years -- and that’s the edge. We have an uncharted, immature environment at the edge. But already we are hearing that a private cloud at the edge is entirely feasible. Perhaps containers will be working there.

At Qlik, how are you anticipating edge computing, and how will that jibe with the multicloud approach?

Running at the edge


Potter: One of the key features of our platform architecture is not only can we run on-premises or in any cloud at scale, we can run on an edge device. We can take our core analytics engine and deploy it on a device or machine running at the edge. This enables a new opportunity, which is taking analytics itself to the edge.

A lot of Internet of Things (IoT) implementations are geared toward collecting data at the sensor, transferring it to a central location to be processed, and then analyzing it all there. What we want to do is push the analytics problem out to the edge so that the analytic data feeds can be processed at the edge. Then only the analytics events are transmitted back for central processing, which obviously has a huge impact from a data-scale perspective.
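The pattern Potter describes -- run the analytic against the raw feed at the edge and transmit only the resulting analytic events for central processing -- can be sketched roughly as follows. The threshold rule and reading format are assumptions for illustration, not Qlik APIs:

```python
def edge_events(readings, threshold=75.0):
    """Process a raw sensor feed at the edge and emit only
    threshold-crossing events worth sending upstream."""
    events = []
    above = False
    for timestamp, value in readings:
        if value > threshold and not above:
            events.append({"ts": timestamp, "event": "over_threshold", "value": value})
            above = True
        elif value <= threshold and above:
            events.append({"ts": timestamp, "event": "recovered", "value": value})
            above = False
    return events

# Six raw readings collapse to two analytic events for central processing.
raw = [(0, 70.1), (1, 72.4), (2, 80.2), (3, 81.0), (4, 74.9), (5, 73.3)]
print(edge_events(raw))
```

The data-scale impact is exactly this compression: the sensor keeps its full feed locally, while only the two state-change events cross the network to the central system.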

But more importantly, it creates a new opportunity to have the analytic context be very immediate in the field, where the point of occurrence is. So if you are sitting there on a sensor and you are doing analytics on the sensor, not only can you benefit at the sensor, you can send the analytics data back to the central point, where it can be analyzed as well.

Gardner: It’s auspicious that Qlik’s approach -- cataloging, indexing, and abstracting out the information about where data resides -- can now be used really well in an edge environment.


Potter: Most definitely. Our entire data strategy is intricately linked with our architectural strategy in that respect, yes.

Gardner: Analytics and being data-driven across an organization is the way of the future. It makes sense to not cede that core competency of being good at analytics to a cloud provider or to a vendor. The people, process, and tribal knowledge about analytics seems essential.

Do you agree with that, and how does Qlik’s strategy align with keeping the core competency of analytics of, by, and for each and every enterprise?

Potter: Analytics is a specialization organizationally within all of our customers, and that’s not going to go away. What we want to do is parlay that into a broader discussion. So our focus is enabling three key strategies now.

It's about enabling the analytics strategy, as we always have, but broadening the conversation to enabling the data strategy. More importantly, we want to close the organizational, technological, and priority gaps to foster creating an integrated data and analytics strategy.

By doing that, we can create what I describe as a raw-to-ready analytics platform based on trust, because we own the process of the data from source to analysis, and that not only makes the analytics better, it promotes the third part of our strategy, which is around data literacy. That’s about creating a trusted environment in which people can interact with their data and do the analysis that they want to do without having to be data scientists or data experts.

So owning that whole end-to-end architecture is what we are striving to reach.

Gardner: As we have seen in other technology maturation trend curves, applying automation to the problem frees up the larger democratization process. More people can consume these services. How does automation work in the next few years when it comes to analytics? Are we going to start to see more artificial intelligence (AI) applied to the problem?

Automated, intelligent analytics


Potter: Automating those environments is an inevitability, not only from the standpoint of how the data is collected, but in how the data is pushed through a data operations process. More importantly, automation enables the other end, too, by embedding AI and machine learning (ML) techniques all the way along that value chain -- from the point of source to the point of consumption.

Gardner: How does AI play a role in the automation and the capability to leverage data across the entire organization?

Potter: How we perform analytics within an analytic system is going to evolve. It’s going to be more conversational in nature, and less about just consuming a dashboard and looking for an insight into a visualization.

The analytics system itself will be an active member of that process, where the conversation is not only with the analytics system but the analytics system itself can initiate the conversation by identifying insights based on context and on other feeds. Those can come from the collective intelligence of the people you work with, or even from people not involved in the process.

Gardner: I have been at some events where robotic process automation (RPA) has been a key topic. It seems to me that there is this welling opportunity to use AI with RPA, but it’s a separate track from what's going on with BI, analytics, and the traditional data warehouse approach.

Do you see an opportunity for what’s going on with AI and use of RPA? Can what Qlik is doing with the analytics and data assimilation problem come together with RPA? Would a process be able to leverage analytic information, and vice versa?

Potter: It gets back to the idea of pushing analytics to the edge, because an edge isn’t just a device-level integration. It can be the edge of a process. It can be the edge of not only a human process, but an automated business process. The notion of being able to embed analytics deep into those processes is already being done. Process analytics is an important field.

But the newer idea is that analytics is in service of the process, as opposed to the other way around. The world is getting away from analytics being a separate activity, done by a separate group, and as a separate act. It is as commonplace as getting a text message, right?

Gardner: For the organization to get to that nirvana of total analytics as a common strategy, this needs to be part of what the IT organization is doing, with full-stack architecture and evolution. So AIOps and DataOps are also getting closer over time.

How does DataOps in your thinking relate to what the larger IT enterprise architects are doing, and why should they be thinking about data more?

Optimizing data pipelines


Potter: That’s a really good question. From my perspective, when I get a chance to talk to data teams, I ask a simple question: “You have this data lake. Is it meeting the analytic requirements of your organization?”

And often I don’t get very good answers. And a big reason why is because what motivates and prioritizes the data team is the storage and management of data, not necessarily the analytics. And often those priorities conflict with the priorities of the analytics team.

What we are trying to do with the Qlik integrated data and analytic strategy is to create data pipelines optimized for analytics, and data operations optimized for analytics. And our investments and our acquisitions in Attunity and Podium are about taking that process and focusing on the raw-to-ready part of the data operations.

Gardner: Mike, we have been talking at a fairly abstract level, but can you share any use cases where leading-edge organizations recognize the intrinsic relationship between DataOps and enterprise architecture? Can you describe some examples or use cases where they get it, and what it gets for them?

Potter: One of our very large enterprise customers deals in medical devices and related products and services. They realized an essential need to have an integrated strategy. And one of the challenges they have, like most organizations, is how to not only overcome the technology part but also the organizational, cultural, and change-management aspects as well.

They recognized the business has a need for data, and IT has data. If you intersect that, how much of that data is actually a good fit? How much data does IT have that isn't needed? How much of the remaining need is unfulfilled by IT? That's the problem we need to close in on.

Gardner: Businesses need to be thinking at the C-suite level about outcomes. Are there some examples where you can tie together such strategic business outcomes back to the total data approach, to using enterprise architecture and DataOps?

Data decision-making, democratized


Potter: The biggest ones center on end-to-end governance of data for analytics, the ability to understand where the data comes from, and building trust in the data inside the organization so that decisions can be made, and those decisions have traceability back to results.

The other aspect of building such an integrated system is a total cost of ownership (TCO) opportunity, because you are no longer expending energy managing data that isn't relevant to adding value to the organization. You can make a lot more intelligent choices about how you use data and how you actually measure the impact that the data can have.

Gardner: On the topic of data literacy, how do you see the behavior of an organization -- the culture of an organization -- shifting? How do we get the chicken-and-egg relationship going between the data services that provide analytics and the consumers to start a virtuous positive adoption pattern?

Potter: One of the biggest puzzles a lot of IT organizations face is around adoption and utilization. They build a data lake and they don't know why people aren’t using it.

For me, there are a couple of elements to the problem. One is what I call data elitism. When you think about data literacy and you compare it to literacy in the pre-industrial age, the people who had the books were the people who were rich and had power. So church and state, that kind of thing. It wasn't until technology created, through the printing press, a democratization of literacy that you started to see interesting behavior. Those with the books, those with the power, tried to subvert reading in the general population. They made it illegal. Some argue that the French Revolution was, in part, caused by rising rates of literacy.

If you flash-forward this analogy to today in data literacy, you have the same notion of elitism. Data is only allowed to be accessed by the senior levels of the organization. It can only be controlled by IT.

Ironically, the most data-enabled organizations are typically oriented to the Millennials or younger users. But they are in the wrong part of the organizational chart to actually take advantage of that. They are not allowed to see the data they could use to do their jobs.

The opportunity from a democratization-of-data perspective is understanding the value of data for every individual and allowing that data to be made available in a trusted environment. That’s where this end-to-end process becomes so important.

Gardner: How do we make the economics of analytics an accelerant to that adoption and the democratization of data? I’ll use another historical analogy, the Model T and assembly line. They didn't sell Model Ts nearly to the degree they thought until they paid their own people enough to afford one.

Is there a way of looking at that and saying, “Okay, we need to create an economic environment where analytics is paid for on-demand, it’s fit-for-purpose, it’s consumption-oriented”? Wouldn’t that market effect help accelerate the adoption of analytics as a total enterprise cultural activity?

Think positive data culture


Potter: That’s a really interesting thought. The consumerization of analytics is a product of accessibility and of cost. When you build a positive data culture in an organization, data needs to be as readily accessible as email. From that perspective, turning it into a cost model might be a way to accomplish it. It’s about a combination of leadership and of making it occur at the grassroots level, where the value it presents is clear.

And, again, I reemphasize this idea of needing a positive data culture.

Gardner: Any added practical advice for organizations? We have been looking at what will be happening and what to anticipate. But what should an enterprise do now to be in an advantageous position to execute a “positive data culture”?

Potter: The simplest advice is to know that technology is not the biggest hurdle; it's change management, culture, and leadership. When you think about the data strategy integrated with the analytics strategy, that means looking at how you are organized and prioritized around that combined strategy.

Finally, when it comes to a data literacy strategy, define how you are going to enable your organization to see data as a positive asset to doing their jobs. The leadership should understand that data translates into value and results. It's a tool, not a weapon.

Gardner: I’m afraid we’ll have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on the impact of dispersed data in a multicloud world. And we have learned about the latest strategies for making the best use of data across an entire organization -- technically, in process terms, as well as culturally.

So a big thank you to our guest, Mike Potter, Chief Technology Officer at Qlik.


Potter: Thank you. It was great to be here.

Gardner: And thank you as well to our audience for joining this BriefingsDirect business intelligence trends discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Qlik-sponsored BriefingsDirect interviews.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Qlik.
 
Transcript of a discussion on why new ways of thinking are needed if comprehensive analysis of relevant data is to become practical across multicloud and hybrid-cloud deployments. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.


Saturday, June 15, 2019

How Automation and Intelligence Blend with Design Innovation to Enhance the Experience of Modern IT


Transcript of a discussion on how advances in design enhance the total experience for IT operators, making usability a key ingredient of modern hybrid IT systems.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Innovator podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest in IT innovations.

Our next discussion focuses on how advances in design enhance the total experience for IT operators. Stay with us now as we hear about the general philosophy, modernization of design, and how new discrete best practices are making usability a key ingredient of modern hybrid IT systems.

To learn how, please join me now in welcoming Bryan Jacquot, Vice President and Chief Design Officer at Hewlett Packard Enterprise (HPE). Welcome, Bryan.

Bryan Jacquot: Thank you, Dana. It’s my pleasure to be here.

Gardner: Bryan, what are the drivers requiring change and innovation when it comes to the design of IT systems?

Design for speed

Jacquot: If I go back 15 to 20 years, people were deeply steeped in their given technology, whether it happened to be servers, networking, or storage. They would spend a lot of time in training, get certified, and have a specialized role.

What we are seeing much more frequently now is, number one, the skill set of our people in IT is rising to higher levels in the infrastructure. We are not so much concerned with the lower-level details. Instead, it’s about solving business needs and helping customers, usually in the lines of business (LOBs). IT must help their customers do things faster, because the pace and the speed of change in every business today continues to accelerate.

With design, we are attempting to understand and embrace our customers where they are, but also, we want to help enable them to achieve their business needs and deliver the IT services that their customers are requiring in a more efficient, agile, and responsive manner.

Gardner: Bryan, because the addressable audience is expanding beyond pure IT administrators, what needs to happen to design now that we have more people involved?

Know your user 

Jacquot: The first thing you have to do is know who your user is. If you don’t know that, then any design work is going to fall short. And now the design work going into the systems that IT companies deliver is directed not only toward IT but also toward different constituencies within their businesses. It might be developers in a LOB trying to create the next service or business application that enables their business to be successful.

Again, if we look back, the CIO or leaders in IT in the past would have chosen a given platform, whether a database to standardize on or an application server. Nowadays, that’s not what happens. Instead, the LOBs have choices. If they want to consume an open source project or use a service that someone else created, they have that choice.

Now IT is in the position of having to provide a service that is on par, able to move quickly and efficiently, and meets the needs of developers and LOBs. And that’s why it’s so important for design to expand the users we are targeting.

IT can no longer just be the people who used to do the maintaining of IT infrastructure; it now includes a secondary set of users who are consuming the resources and ultimately becoming the decision-makers.

In fact, recent IDC research talks about IT budgets and who controls more of the budget. In the last year or two, the pendulum has swung to the point where the LOBs are controlling the majority of the spend, even if IT is ultimately the one procuring resources or assets. The decision-making has shifted over to LOBs in many companies. And so, it becomes more and more imperative for IT to have solutions in place to meet those needs.
If we are going to serve that market as designers, we have to be aware of that, know who the ultimate users are, and make sure they are satisfied and able to do what they have to do to deliver what their businesses need.

Gardner: It wasn’t that long ago that IT was only competing with the previous version of whatever it is that they provided to their end users. But now, IT competes with the cloud offerings, Software as a service (SaaS) offerings, and open source solutions. You could also say that IT competes with the experience that consumers get in their homes, and so there are heightened expectations on usability.

Jacquot: Yes, it really has raised expectations, and that’s a good thing. IT is now looking around and saying, “Okay, for the LOBs we used to serve, it used to be, ‘Here is what you get, and don’t throw a fit.’” But that doesn’t really work anymore. Now IT has to provide business value to those LOBs, or they will vote with their dollars and choose something else.

Just as we’ve seen in the consumer space -- where things are getting more and more centered around the experience of the service -- that same thinking is moving into the enterprise. It raises what the enterprise traditionally does to a new level: the experience that developers and LOBs really need. But the same could apply to researchers or other sets of users. These are the people trying to find the next cure for Alzheimer’s or enabling genetic testing of new medicines. These are not IT people -- they just need a simple infrastructure experience to run their experiments.

To do that they are going to choose a service that enables them to be as quick and efficient with their research as they possibly can be. It doesn’t matter for them if it’s in a big public cloud or if it’s in local IT -- as long as they are able to do it with the least amount of effort on their part. That’s a trend that we are certainly seeing. IT has to deliver services that meet the needs of those users wherever they are.

Gardner: Bryan, tell us about yourself. What does it take in terms of background, skills, and general understanding to be a Chief Design Officer in this new day and age, given these new requirements?

Drawn by design, to design 

Jacquot: There is a wide variety of backgrounds for people who have a similar title and role. In my particular case, I began as a software engineer; my undergraduate degree is in computer science. I began at HP working on the UNIX operating system (OS), down in the kernel of all things, about as far as you can get from where I am now.

One of the first projects I worked on at HP was deployment and OS installation mechanisms. We had gotten a bunch of errors and warnings during that process. I was just a kid out of college; I didn’t know what was going on. I kept asking questions: “Why do we have so many errors and warnings?” They were like, “Oh, that’s just the way it works.” I was like, “Well, why is that okay? Why are we doing it that way?”

The next OS release was the first one in ages that had no errors and warnings. I didn’t realize it at the time, but that’s where I started this passion for doing the right thing for the user and making sure that a user is able to understand what’s going on and how to be successful with their systems.

That progressed through the years, and I ended up continuing my passion for delivering on what our users’ needs are and how we can best enable them. Basically, that means not trying to jump too quickly to a solution, but first making sure that we understand the problems our users have. Then we can focus on innovating to deliver higher value to them, with a better understanding of what they need.

At that point, then I went back and earned my graduate degree in human-computer interaction with a focus on psychology, understanding human factors and how people think. That includes understanding how they use their working memory and how they process information, so we can build solutions that best align to how people naturally operate.

That’s one of the key things I found from my original background and then the most recent training. The best solutions we can build are the ones that fit as seamlessly as possible into the user’s hands, whether they are working with something digitally or physically.

For me, that was the combination that led to where I am now and being able to have successful delivery of various products and solutions -- offerings that are really focused on meeting the customers’ needs.

Agility arrives with speed 

Gardner: As an advocate for the user, and broadening the definition of who that user is when it comes to core IT services, what are the top challenges that those users now have? Are we dealing with complexity, with interfaces, and with logic? All the above? What are the latest problems that we are trying to solve?

Jacquot: It certainly can be both logic and complexity. Systems are getting more complex.

But, number one, from the customers I have talked to, the consistent overriding theme is they are under threat of being disrupted by somebody. And if they are not being disrupted by someone else, they are trying to disrupt themselves to prevent someone else from disrupting them. This is the case across all customers and across every industry.


And so, if they are in the mode where they have to be constantly pushing themselves -- pushing the boundaries and having to move fast -- then the overarching themes I am hearing about are speed and agility. That means removing as much work from what IT has to do as possible. Then they can focus their time and energy on the business problems, not on the IT scaffolding, foundation, and structure to support what they are trying to do.

Whether it’s in hospitals, where they are trying to deliver better patient care using medical records, or it’s in the finance industry, where they are trying to get the next trade done faster -- whatever the work happens to be, the focus is always about speed and agility.
And so, anything we can build -- an application or a user experience (UX) -- that helps those users be more efficient is what drives the greatest degree of success.

Gardner: Given that design emphasis, it sounds a lot like the design of applications. But these aren’t necessarily applications. These are systems, platforms, and support products that may have even come together from mergers and acquisitions.

What’s the difference between designing an application, as a software developer, and designing an IT system or platform that often can come from the integration of multiple products?

Design to meet users’ needs 

Jacquot: I would argue that in the design process, the techniques, capabilities, and skills needed to solve the problems are actually the same, regardless of the type of product. The things that tend to change are who the users are and what they need. Those are the two key variables in the equation that are going to vary.

If you look at many of the startups out there today, they are delivering software-as-a-service (SaaS) capabilities, whether it’s Uber making transportation different or Airbnb remaking the lodging experience to be simpler, easier, and more flexible. They are completely software based.

But there are also startups like Square, which is making business transactions easier for small businesses. They also build hardware devices, such as card and chip readers, for conducting transactions.

At the end of the day, the things that we build are just a byproduct of, “Okay, we have an understanding of the user. We know what we need to build to make them successful. Let’s figure out the right widget or gadget to meet that need.”

That can be a hardware system, like HPE Synergy, where we identified a need to be more flexible to compose and recompose IT resources on-demand. That platform didn’t exist two and a half years ago. If we could have done it only with software, we would have, but the software needed a new hardware platform to run on, so we created both.

Looking under the covers of Synergy, the HPE OneView platform and the Composer Card are what actually drive a lot of the innovation and make composability possible, and they are based on software. These are all good examples of where we identified the business need to make users more efficient. Users no longer have to wait weeks or months to get access to a resource; with HPE Synergy, they can access and consume those resources immediately. That’s an example of an integrated system we developed in order to deliver on a customer need.

Gardner: A lot of what goes on with composability and contextually aware applications nowadays uses data to develop inference, to anticipate the needs of a user, and provide them with the right information, not overload, so they can innovate and be creative.

How do you create a proper balance between context and overload? It seems to me that’s a very difficult sweet spot to get to.

Getting to know you, all about you

Jacquot: It definitely is. This is a challenge we have been attempting to address in my group for years. How do you get just the right amount of data without becoming overwhelming? It’s actually a really hard problem, because our systems are incredibly complex and carry a lot of information. Knowing exactly what a given user is going to need at any point in time -- and not giving them anything more -- is difficult.

As users look at screens, too much information can overload them. The visual search time they spend finding the information they care about goes up, creating more chance of making an error.

Striking the right balance comes down to a couple of things. Number one, there is the initiative that folks in my group have begun driving that we talk about as Know Me, which means we know the user. What I mean by that is, not just that we understand the user, but when a user accesses our system, the system knows who they are; it knows them.

So, it knows the things that they tend to use more often. It knows the environment that they have, what constitutes the scale they are using, and what constitutes the depth of information they tend to go to. And using that along with machine learning (ML) to enhance the information we are providing them -- to make their experience richer -- is going to be the thing to pursue to make our systems even better.

And again, it’s not just knowing who they are. In the background, when we were designing the system, it’s more than just taking their preferences into account. I am talking about when they log in, the system knows it was “Dana,” for example, who logged in. It knows the things that are important to Dana, and it makes the experience richer because of that background and information.

Gardner: You have been doing this for a long time, and you have seen a lot of the psychology around innovation. But what have you personally learned about innovation? How do you even define innovation? It might be different than most other people.


Jacquot: Yes, it might be. In the places I have seen innovation the most, it is not like just having an epiphany. All of a sudden, I have the answer, it’s there in front of me, and we just need to go build it. I wish that were the case, but that doesn’t happen for me.

For me, it requires taking the time to understand the customer very well, as I mentioned earlier -- to the point of being able to empathize with them. Where is the pain that they experience, or the joy? It becomes something that I feel as well.

If you look at the definition of empathy, that’s what it means. It’s not just a fancy word for being understanding. It’s actually feeling the pain and the joy of the person you are empathizing with.

Once that is established, then comes the creativity, with the ability to explore ideas, try things, throw them out, and try again. You can start down that path to share ideas with your prospective users and get feedback on it.

First the mess, then the masterpiece 

I don’t get it right the first time. In fact, I expect to get a bunch of this wrong before I get it right.

If you were to do a Google search on “design” or “design thinking” and look at the pictures that come up, a lot of them look very orderly, and very orthodox. Depending on which one you see, you will ask some initial questions, do ideating and prototyping, and synthesis and gathering feedback, and so on.

But there is one thing that all those pictures miss; and that is as you are going through this process, and you get a better understanding, you take turns that you didn’t expect. You have to be willing to take those turns to get to the nugget of what’s possible, to get to the core of the potential of a solution you are innovating. So, it can get messy.

We don’t go in a straight line. It’s curvy, a squiggly line all over the place. We start by finding good places where things are resonating, and we continue to refine and iterate until we get to the point where we have a foundation. Then we go build and deliver on that -- and then the next squiggly, messy area starts up again in a continuous cycle that never ends.

Innovation looks messy and uncoordinated. It requires a lot of listening and understanding. And then the creative side comes in. We can brainstorm and explore. I really enjoy that side of it. But it has to start with understanding, and with not trying to be too rigid. [If you’re too rigid,] I think you would miss out on the opportunities that are there, but not as easy to spot.

Gardner: I love that idea of the journey from messiness to clarity and then productivity. Do you have any examples, Bryan, that would show a use-case that demonstrates that journey? Where at HPE have you made that journey?

Jacquot: I led the design team, and I was chief technologist for HPE OneView during its early incubation, through getting it into a product and releasing it to the market. There was one customer I remember specifically, at a financial firm, who described a task he had to do at 2 a.m. because that was the only window in which he could make a change to the infrastructure without disrupting the business.

Hearing him talk through that, and knowing from the cognitive side that someone in that situation, low on sleep and probably not happy about being there, is going to be more prone to making errors and cloudier judgment -- you put those factors together, and it was a miserable experience for him.

We went back and said, “Okay, we can make the system be able to perform these operations where it doesn’t require being offline and done in the middle of the night.”

That was an example of discovering a pain point by hearing what a customer was having to go through. As a result, we made a pretty dramatic change in the way we were addressing this issue for a particular user. But as we discussed it with other customers, we found he wasn’t the only one. This scenario wasn’t an anomaly; it was a pretty consistent thing.

Even though his situation gave us something clear to grab hold of, it was a common one. The solution ended up being one of the key capabilities we delivered as part of that platform, and it continues to expand today.

And that non-disruptive update feature was grounded in early-on research. It’s just one example of going from a squiggly to something that’s been very well-received.

Place process before products 

Another example came about differently, and with a different timescale, but it was also pretty impactful in HPE’s transformation. A few years ago, we were going through some separations, with the HPE software group and DXC, for example.

At the time, we didn’t have an offering in the hyperconverged infrastructure (HCI) market. HPE knew this was a place we needed to tackle. It was a big growth opportunity. So, a small team was put together to identify ways we could provide an HCI solution. And with the research we had done, we knew it was a better opportunity if we provided something simple that would appeal to the lines of business (LOBs) we talked about earlier.

Those LOBs might be a developer or a researcher, but they would want access to infrastructure quickly, without waiting for IT. They would want a self-service interface that enabled a simple way to get access to resources.

So, we started on this project. The senior leaders at the time gave us three months to build a solution. We rapidly took assets we had and began assembling them together into a good solution. It ultimately took us five months, not three, to introduce what was the HPE Hyper Converged 380 platform.

Now, if you go look on hpe.com, that’s not a solution you are going to find today because we ultimately acquired SimpliVity, and that’s the product that is filling that need and that business area for us. The one that we made, the 380, was a short-term activity we did to get into the market.

Some of the projects we engage in involve long research; we spend a couple of years understanding the users, refining, prototyping, and iterating. Others are done on a shorter scale: you have a few months to get something into the market, start getting customers using it and giving feedback, and then iterate and drive from there. The HPE Hyper Converged 380 platform was a really good example of that.

And we won several different innovation awards with that platform, even though it was created in a very tight timeline. The usability of it was really strong, and we got some good feedback as our entryway into the hyperconverged market.

Gardner: And other than awards, which are fantastic of course, what are some other metrics or indicators that you did it right? When people do design, and people use really good design, what do they get for it? How do you know it?

Get it right, true to your values 

Jacquot: Number one, it’s hugely important that if you aren’t getting business results, then something is wrong. If you design the right product and deliver it to the market, then good business results should follow.

The other part of it is we use various metrics internally. We are constantly following our products, and we can access the user success rates, the retention rates. If they are experiencing errors, we know what the ratios are. All those kinds of metrics and analytics are important, but those aren’t the number one thing that I would look at. The number one is the business results.

After a while, you can track things like brand loyalty, brand favorability, and net promoter score.

What I have been attracted to more and more recently, however, is the HPE values. We state that our mission is to improve the way people live and work. I will be honest, when we first started talking about that, I felt we were accomplishing a lot of great things but wasn’t exactly sure if they aligned to our mission.

Now, I look at how some of these examples are coming through and what HPE customers are achieving -- things like helping to combat human trafficking by using artificial intelligence (AI) and ML to find pictures of people on the dark web and match them with missing-person cases. There’s also the massive Alzheimer’s study we are enabling to try to find a cure.

Those are some really positive things that are becoming metrics that I care a lot about. I love seeing those stories and being a part of the team and the company that’s making those things possible. Because ultimately, if we are going to spend our time and energy designing great solutions, the outcome should affect all of those areas including doing good for the world.

Gardner: In closing out, let’s look to the future. You mentioned AI. It seems to me that we’re trying to find another balance here in letting the machines do what they do best -- and then delegating to the people what they do best, which is what machines can’t do. Is part of what you see in your design role at HPE going down that path of finding that balance? How will AI impact the way products are used and people interact with them in the future?

Expand what’s humanly possible

Jacquot: So, the ethics of design, I think, is a really rich topic; that’s a discussion all of itself. But on the question specifically around AI and ML, there are things you can see that could be possible. Some have experimented with bots that watch traffic on Twitter and start responding, and those often degenerate to a pretty bad place.

The whole AI and ML field is one where ethics are involved and require putting the right guardrails in place. That’s something we as an industry and as a population are going to have to watch closely, because it’s clear that just by nature, not everything goes in a positive direction.

And I think we are trying to use it in a way that makes humans better at what we are doing and makes us more efficient.

One example I like to use is the autonomous vehicle, which is interesting to me because a human behind the wheel can see straight ahead, or look in the rear-view mirror or the side mirrors, but we can basically see in only one direction with a little bit of peripheral vision.

We can hear omnidirectionally, but our senses are limited. An autonomous vehicle, on the other hand, can look in 360 degrees, and it can use things like ultrasound and infrared to detect beyond what humans can see -- at night, for example, spotting animals on the side of the road.

AI and ML in a vehicle are much more capable. They don’t fatigue, they don’t get distracted, they don’t get angry, and they don’t get road rage. So, there are a lot of benefits for us as users of those vehicles, as long as we put the right guardrails in place. That will actually make humans better at what they are doing, and safer than when we are behind the wheel ourselves.

We will use ML and AI to empower our users, whether they are developers or administrators, to see better what’s happening. A great example of that is what we are doing with HPE InfoSight.

We ingest massive amounts of data from our systems and use it to make better predictions -- ensuring things happen when they need to happen, and that if something is going wrong, it can be detected and addressed before it even becomes a problem and impacts business continuity. That’s just one of the ways we are using AI and ML. But the big overriding thing with AI and ML is using them to augment what we can do, while making sure ethics are first and foremost considered -- because it’s clear that, left on their own, things could go in directions we probably don’t want them to.

Gardner: I’m afraid we will have to leave it there. We have been exploring how advances in design are enhancing the total experience for IT operators and more and more people inside of enterprises. And we’ve learned how the general philosophy and some best practices are making usability a key ingredient of modern hybrid IT systems.

So please join me in thanking our guest, Bryan Jacquot, Vice President and Chief Design Officer at HPE. Thank you so much, Bryan.

Jacquot: Thank you, Dana. It’s been my pleasure.


Gardner: And a big thank you as well to our audience for joining this BriefingsDirect Voice of the Innovator interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions.

Thanks again for listening, please pass this along to your IT community, and don’t forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how advances in design enhance the total experience for IT operators, making usability a key ingredient of modern hybrid IT systems. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.

You may also be interested in: