Wednesday, November 25, 2020

Here to Stay, Remote Work Promises to Deliver New Levels of Engagement, Productivity, and Innovation



Transcript of a discussion on new research into the future of work and how unprecedented innovation could mean a doubling of overall productivity in the coming years.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Citrix.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.


The way people work has changed more in 2020 than the previous 10 years combined -- and that’s saying a lot. Even more than the major technological impacts of cloud, mobile, and big data, the COVID-19 pandemic has greatly accelerated and deepened global behavioral shifts.

The ways that people think about where and how to work may never be the same, and new technology alone could not have made such a rapid impact.

So now is the time to take advantage of a perhaps once-in-a-lifetime disruption for the better. Steps can be taken to make sure that such a sea change comes less with a price and more with a broad boon -- to both workers and businesses.

Stay with us now as we explore research into the future of work and how unprecedented innovation could very well mean a doubling of overall productivity in the coming years.


We’re here with a panel to hear insights on how a remote-first strategy leads to a reinvention of work expectations and payoffs. Please join me in welcoming our guests. First, Jeff Vincent, Chief Executive Officer at Lucid Technology Services. Welcome, Jeff.

Jeff Vincent: Good morning, everybody. Nice to meet you all.

Gardner: We’re also here with Ray Wolf, Chief Executive Officer at A2K Partners. Welcome, Ray.

Ray Wolf: Hey, Dana, it’s great to be part of the conversation.

Gardner: And lastly, we’re here with Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix.  Welcome back, Tim.

Tim Minahan: Thanks, Dana, it’s great to be with you.

Gardner: Tim, you’ve done some new research at Citrix. You’ve looked into what’s going on with the nature of work and a shift from what seems to be chaos to opportunity. Tell us about the research and why it fosters such optimism.

Future of work belongs to tech

Minahan: Most of the world has been focused on the here-and-now, with how to get employees home safely, maintain business continuity, and keep employees engaged and productive in a prolonged work-from-home model. Yet we spent the bulk of the last year partnering with Oxford Analytica and Coleman Parkes to survey thousands of business and IT executives and to conduct qualitative interviews with C-level executives, academia, and futurists on what work is going to look like 15 years from now -- in 2035 -- and predict the role that technology will play.

Certainly, we’re already seeing an acceleration of the findings from the report. And if there’s any iota of a silver lining in this global crisis we’re all living through, it’s that it has caused many organizations to rethink their operating models, business models, and their work models and workforce strategies.

Work has no doubt forever changed. We’re seeing an acceleration of companies embracing new workforce strategies, reaching pools of talent in remote locales using technology, and opening up access to skill sets that were previously too costly to acquire near their office and work hubs.

Now they can access talent anywhere, enabling and elevating the skill sets of all employees by leveraging artificial intelligence (AI) and machine learning (ML) to help them perform like their best employees. They can embrace entirely new work models, possibly even the Uber-fication of work, by tapping into recent retirees, work-from-home parents, and caregivers who had opted out of the workforce -- not because they didn’t have the skills or expertise that folks needed, but because traditional work models didn’t support their home environment.

We’re seeing an acceleration of companies liberated by the fact that they realize work can happen outside of the office. Many executives across every industry have begun to rethink what the future of work is going to look like when we come out of this pandemic.

Gardner: Tim, one of the things that jumped out at me from your research was a majority feel that technology will make workers at least twice as productive by 2035. Why such a newfound opportunity for higher productivity, which had been fairly flat for quite a while? What has changed in behavior and technology that seems to be breaking us out of the doldrums when it comes to productivity?

Minahan: Certainly, the doubling of employee productivity is a function of a couple of things. Number one, new, more flexible work models allow employees to work wherever they can do their best work. But more importantly, it is the emergence of the augmented worker: using AI and ML not just to offer up the right information at the right time, but to help employees make more informed decisions and speed up the decision-making process, as well as to automate menial tasks so employees can focus on the strategic aspects of driving creativity and innovation for the business. This is one of the areas we think is the most exciting as we look forward to the future.

Gardner: We’re going to dig into that research more in our discussion. But let’s go to Jeff at Lucid Technology Services. Tell us about Lucid, Jeff, and why a remote-first strategy has been a good fit for you.

Remote services keep SMBs safe

Vincent: Lucid Technology Services delivers what amounts to a fractional chief information officer (CIO) service. Small- to medium-sized businesses (SMBs) need CIOs but don’t generally have the working capital to afford a full-time, always-on, and always-there CIO or chief technology officer (CTO). That’s where we fill the gap.

Vincent
We bring essentially an IT department to SMBs, everything from budgeting to documentation -- and all points in between. And one of the big ways we learned to look forward is by looking backward. In 1908, Henry Ford gave us the modern assembly line, which promptly gave us the Model T. And so the makers of horse-drawn buggies, buggy whips, and buggy accessories suddenly became obsolete.

Something similar happened in the early 1990s. It was a fad called the Internet and it revolutionized work in ways that could not have been foreseen up to that point in time. We firmly believe that we’re on the precipice of another revolution of work just like then. The technology is mature at this point. We can move forward with it, using things like Citrix.

Gardner: Bringing a CIO-caliber function to SMBs sounds like it would be difficult to scale, if you had to do it in-person. So, by nature, you have been a pioneer in a remote-first strategy. Is it effective? Some people think you can’t be remote and be effective.

Vincent: Well, that’s not what we’ve been finding. This has been an evolution in my business for 20 years now. And the field has grown as the need has grown. Fortunately, the technology has kept pace with it. So, yes, I think we’re very effective.

Previously, let’s say a CPA firm of 15 providers, or a medical firm of three or four doctors with another 10 or so administrative and assistant staff on site all of the time -- they hold privileged information and data under regulation that needs safeguarding.

Well, if you are Arthur Andersen, a large, national firm, or Kaiser Permanente, or some really large corporation that has an entire team of IT staff on-site, then that isn’t really a problem. But when you’re under 25 to 50 employees, that’s a real problem because even if you were compromised, you wouldn’t necessarily know it.


We leverage monitoring technology, such as next-generation firewalls, and a team of people looking after that network operation center (NOC) and help desk to head those problems off at the pass. If problems do develop, we can catch them when they’re still small. And with such a light, agile team that’s heavy on tech and the infrastructure behind it, a very few people can do a lot of work for a lot of people. That is the secret sauce of our success.

Gardner: Jeff, from your experience, how often is it the CIO who is driving the remote work strategy?

Vincent: I don’t think remote work prior to the pandemic could have been driven from any seat other than the CIO/CTO’s. It’s his or her job. It’s their entire ethos to keep a finger on the pulse of technology, where it’s going, and what it’s currently capable of doing.

In my experience, anybody else on the C-suite team has so much else going on. Everybody is wearing multiple hats and doing double-duty. So, the CTO is where that would have been driven.

But now, what I’ve seen in my own business, is that since the pandemic, as the CTO, I’m not generally leading the discussion -- I’m answering the questions. That’s been very exciting and one of the silver linings I’ve seen through this very trying time. We’re not forcing the conversation anymore. We are responding to the questions. I certainly didn’t envision a pandemic shutting down businesses. But clearly, the possibility was there, and it’s been a lot easier conversation [about remote work] to have over the past several months.

The nomadic way of work

Gardner: Ray, tell us about A2K Partners. What do you have in common with Jeff Vincent at Lucid about the perceived value of a remote-first strategy?

Wolf: A2K Partners is a digital transformation company. Our secret sauce is we translate technology into the business applications, outcomes, and impacts that people care about.

Our company was founded by individuals who were previously in C-level business positions, running global organizations. We were the consumers of technology. And honestly, we didn’t want to spend a lot of time configuring the technologies. We wanted to speed things up, drive efficiency, and drive revenue and growth. So we essentially built the company around that.

We focus on work redesign, work orchestration, and employee engagement. We leverage platforms like Citrix for the future of work and for bringing in productivity enhancements to the actual processes of doing work. We ask, what’s the current state? What’s the future state? That’s where we spend a lot of our time.

As for a remote-first strategy, I want to highlight that our company is a nomadic company. We recruit people who want to live and work from anywhere. We think there’s a different mindset there. They are more apt to accept and embrace change. So untethered work is really key.

What we have been seeing with our clients -- and in the conversations that we’re having today -- is that the leaders of every organization, at every level, are trying to figure out how to come out of this pandemic better than when they went in. Some actually feel like victims, and we’re encouraging them to see this as an opportunity.

Some statistics from the last three economic downturns: One very interesting one is that some companies that entered the downturn in the bottom 20 percent emerged in the top 20 percent after it. And you ask yourself, “How does a mediocre company all of a sudden rise to the top through a crisis?” This is where we’ve been spending time, figuring out what plays those companies are running and how to better help them execute on them.

The companies that have decided to use this as a period to change the business model, change the services and products they’re offering, are doing it in stealth mode. They’re not noisy. There are no press releases. But I will tell you that next March, June, or September, what will come from them will create an Amazon-like experience for their customers and their employees.

Gardner: Tim, in listening to Jeff and Ray, it strikes me that they look at remote work not as the destination -- but the starting point. Is that what you’re starting to see? Have people reconciled themselves with the notion that a significant portion of their workforce will probably be remote? And how do we use that as a starting point -- and to what?

Minahan: As Jeff said, companies are rethinking their work models in ways they haven’t since Henry Ford. We just did OnePoll research polling with thousands of US-based knowledge workers. Some 47 percent have either relocated out of big metropolitan areas or are in the process of doing that right now. They can do so primarily because they’ve proven to themselves that they can be productive when not necessarily in the office.

Similarly, some 80 percent of companies are now looking at making remote work a more permanent part of their workforce strategy. And why is that? It is not merely a question of whether Sam or Sally should work in the office or at home. No, they’re fundamentally rethinking the role of work, the workforce, the office, and what role the physical office should play.

And they’re seeing an opportunity, not just from real estate cost-reduction, but more so from access to talent. If we remember back nine months ago to before the great pandemic, we were having a different discussion. That discussion was the fact that there was a global talent shortage, according to McKinsey, of 95 million medium- to high-skilled workers.

That hasn’t changed. It was exacerbated at that time because we were organized around traditional work-hub models -- where you build an office, build a call center, and you try like heck to hire people from around that area. Of course, if you happen to build in a metropolitan area right down the street from one of your top competitors -- you can see the challenge.

In addition, there was a challenge around attaining the right skillsets to modernize and digitize your businesses. We’re also seeing an acceleration in the need for those skills because, candidly, very few businesses can continue to maintain their physical operations in light of the pandemic. They have had to go digital.


And so, as companies are rethinking all of this, they’re reviewing how to use technology to embrace a much more flexible work model, one that gives access to talent anywhere, just as Ray indicated. I like the nomadic work concept.

Now, how do I use technology to even further raise the skillsets of all of my employees so they perform like the very best. This is where that interesting angle of AI and ML comes in, of being able to offer up the right insights to guide employees to the right next step in a very simple way. At the same time, that approach removes the noise from their day and helps them focus on the tasks they need to get done to be productive. It gives them the space to be creative and innovative and to drive that next level of growth for their company.

Gardner: Jeff, it sounds like the remote work and the future of work that Tim is describing sets us up for a force-multiplier when it comes to addressable markets. And not just addressable markets in terms of your customers, who can be anywhere, but also that your workers can be anywhere. Is that one of the things that will lead to a doubling of productivity?

Workers and customers anywhere

Vincent: Certainly. And the thing about truth is that it’s where you find it. And if it’s true in one area of human operations, it’s going to at least have some application in every other. For example, I live in the Central Valley of California. Because of our climate, the geology, and the way this valley was carved out of the hillside, we have a disproportionately high ability to produce food. So one of the major industries here in the Central Valley is agriculture.

You can’t do what we do here just anywhere because of all those considerations: climate, soil, and rainfall, when it comes. The fact that we have one of the tallest mountain ranges right next to us gives us tons of water, even if it doesn’t rain a lot here in Fresno. But you can’t outsource any of those things. You can’t move any of those things -- but that’s becoming a rarity.


If you focus on a remote-first workplace, you can source talent from anywhere; you can locate your business center anywhere. So you get a much greater recruiting tool both for clientele and for talent.

Another thing that has been driven by this pandemic is that people have been forced to go home, stay there, and work there. Either you’re going to figure out a way to get around the obstacles of not being able to go to the office or you’re going to have to close down, and nobody wants to do that. So they’ve learned to adapt, by and large.

And the benefits that we’re seeing are just manifold. They go into everything. Our business agility is much greater. The human considerations of your team members improve, too. They have had an artificial dichotomy between work responsibilities and home life. Think of a single parent trying to raise a family and put bread on the table.

Now, with the remote-first workplace, it becomes much easier. Your son, your daughter, they have a medical appointment; they have a school need; they have something going on in the middle of the day. Previously you had to request time off, schedule around that, and move other team members into place. And now this person can go and be there for their child, or their aging parents, or any of the other hundreds of things that can go sideways for a family.

With a cloud-based workforce, that becomes much less of a problem. You still have some challenges to overcome, but there are fewer of them. I think everybody is reaping the benefits because, with fewer people needing to be in the office, you can have a smaller office. Fewer people on the roads means less environmental impact from moving around and commuting for an hour twice a day.

Gardner: Ray Wolf, what is it about technology that is now enabling these people to be flexible and adaptive? What do you look for in technology platforms to give those people the tools they need?

Do more with less

Wolf: First, let’s talk about the current technology situation. The average worker out there has eight applications and 10 windows open. The way technology is provisioned to some of our remote workers is working against them. We hand out the same technologies to everyone. Just because you give someone access to a customer relationship management (CRM) system or a human resources (HR) system doesn’t necessarily make them more productive. It doesn’t take into consideration how they like to do work. And when you bring on new employees, it leaves it up to each individual to figure out how to get stuff done.

With the new platforms -- Citrix Workspace with intelligence, for example -- we’re able to take those mundane tasks and lock them into muscle memory through automation. And so, what we do is free up time and energy using the Citrix platform. Then people can start moving up, essentially upskilling, taking on higher cognitive tasks, and building new products and services.
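To make that concrete, here is a minimal sketch -- in Python, with hypothetical endpoints and payloads rather than any real Citrix API -- of the kind of mundane task such automation takes over: surfacing pending approvals as one-click cards in a single feed instead of yet another open window.

```python
import requests

# All endpoints and credentials below are hypothetical placeholders --
# this shows the shape of the automation, not a real Citrix API.
CRM_URL = "https://crm.example.com/api/approvals"
FEED_URL = "https://workspace.example.com/api/cards"
HEADERS = {"Authorization": "Bearer service-account-token"}

def surface_pending_approvals():
    """Poll a source system and push one-click action cards to a single
    work feed, instead of making the employee open an eighth window."""
    pending = requests.get(CRM_URL, headers=HEADERS,
                           params={"status": "pending"}).json()
    for item in pending:
        card = {
            "title": f"Approve expense report #{item['id']}",
            "actions": ["approve", "reject"],  # one-click in the feed
        }
        requests.post(FEED_URL, headers=HEADERS, json=card)

if __name__ == "__main__":
    surface_pending_approvals()
```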


That’s what we love about it. The other side is that it’s no-code and low-code. The key here is just figuring out where to get started and making sure that the workers have their fingerprints on the plan, because your worker today knows exactly where the inefficiencies are. They know where the frustration is. We have a number of use cases where, in a matter of six weeks, we were able to unlock almost a day per week of productivity gains. One of our customers in the sales space, a sales vice president, coined the word “proactivity” for it.

For them, they were taking that one extra day a week and starting to be proactive by pursuing new sales and leads and driving revenue where they just didn’t have the bandwidth before.

Through our own polling of about 200 executives, we discovered that 50 percent of companies are scaling down on their resources because they are unsure of the future. And that leaves them with the situation of doing more with less. That’s why the automation platforms are ideal for freeing up time and energy, so they can deal with a reduced workforce but still gain the bandwidth to pursue new services and products. Then they can come out and be in that top 20 percent after the pandemic.

Gardner: Tim, I’m hearing Citrix Workspace referred to as an automation platform. How does Workspace not just help people connect, but also help them automate and accelerate productivity?

Keep talent optimized every day

Minahan: Ray put his finger on the pulse of the third dynamic we were seeing pre-pandemic, and it’s only been exacerbated. We talked first about the global shortage of medium- to high-skills talent. But then we talked about the acute shortage of digital skills that those folks need.

The third part is, if you’re lucky enough to have that talent, it’s likely they are very frustrated at work. A recent Gallup poll says 87 percent of employees are disengaged at work, and that’s being exacerbated by all of the things that Ray talked about. We’ve provided these workers with all of these tools and all these different channels, Teams and Slack and the like, and they’re meant to improve their performance and collaboration. But we have reached a tipping point of complexity that really has turned your top talent into task rabbits.

What Citrix does with our digital Workspace technology is it abstracts away all of that complexity. It provides unified access to everything an employee needs to be productive in one experience that travels with them. So, their work environment is this digital workspace -- no matter what device they are on, no matter what location they are at, no matter what work channel they need to navigate across.


The second thing is it wraps that in security -- both secure access on the way in (I call it the bouncer at the front door), as well as ongoing, contextual application of security policies. I call that the bodyguard who follows you around the club to make sure you stay out of trouble. And that gives IT the confidence that those employees can indeed work wherever they need to, and from whatever device they need to, with a level of comfort that their company’s information and assets are made secure.
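To picture that bouncer-and-bodyguard model, here is a toy sketch of contextual access policy -- an illustration of the concept only, not Citrix’s actual policy engine -- in which access is re-evaluated against device and network context on every request, not granted once at login.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    authenticated: bool   # the "bouncer": checked at the front door
    device_managed: bool  # context the "bodyguard" keeps re-checking
    network: str          # e.g. "office", "home", "public-wifi"
    action: str           # e.g. "view", "download"

def allow(ctx: RequestContext) -> bool:
    """Toy contextual policy: every request is evaluated afresh against
    the caller's current context, not a one-time login decision."""
    if not ctx.authenticated:
        return False                      # never past the front door
    if ctx.action == "download" and not ctx.device_managed:
        return False                      # riskier action, stricter rule
    if ctx.network == "public-wifi":
        return ctx.action == "view"       # degrade rather than block all
    return True

print(allow(RequestContext(True, False, "home", "download")))    # False
print(allow(RequestContext(True, True, "public-wifi", "view")))  # True
```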

But what gets exciting now is the intelligence components. Infusing this with ML and AI automates away and guides an employee through their work day. It automates away those menial tasks so they can focus on what’s important.

And that’s where folks like A2K come in. They can bring in their intellectual property and understanding of the business processes -- using those low- to no-code tools -- to actually develop extensions to the workspace that meet the needs of individual functions or individual industries and personalize the workspace experience for every individual employee.

Ray mentioned sales force productivity. They are also doing call center optimization. So, very, very discrete solutions that before required users to navigate across multiple different applications, but are now handled through a micro-app layer that simplifies the engagement model for the employee, offering up the right insights and the right tasks at the right time so that they can do their very best work.

Gardner: Jeff Vincent, we have been talking about this in terms of worker productivity. But I’m wondering about leadership productivity. You are the CEO of a company that relies on remote work to a large degree. How do you find that tools like Citrix and remote-first culture works for you as a leader? Do you feel like you can lead a company remotely?

Workspace enhances leadership

Vincent: Absolutely. I’m trying to take a sip out of a fire hose, because everything I am hearing is exactly what we have been seeing -- just put a bit more eloquently and with a bit more data behind it -- for quite a long time now.

Leading a remote team really isn’t any different from leading a team that you can look at in person. I mean, one of the aspects of leadership, as it pertains to this discussion, is having everybody know what is expected of them and when the due date is, and enabling them with the tools they need to get the work done on time and on budget, right?

And with Citrix Workspace technology, the workflows automate expense report approvals, they automate calendar appointments, and automate the menial tasks that take up a lot of our time every single day. They now become seamless. They happen almost without effort. So that allows the leaders to focus on, “Okay, what does John need today to get done the task that’s going to be due in a month or in a quarter? Where are we at with this prospect or this leader or this project?”

And it allows everybody to take a moment, reflect on where they are, reflect on where they need to be, and then get more surgical with our people on getting there.

Gardner: Ray, also as a CEO, how do you see the intersection of technology, behavior, and culture coming together so that leaders like yourself are the ones who are going to be twice as productive?

Wolf: This goes to a human capital strategy, where you’re focusing on the numerator. The cost of your resources and the types of resources you need fit within a band; that’s the denominator.

The numerator is what productivity you get out of your workforce. There’s a number of things that have to come into play. It’s people, process, culture, and technology -- but not independent or operating in a silo.

And that’s the big opportunity Jeff and Tim are talking about here. Imagine when we start to bring system-level thinking to how we do work both inside and outside of our company. It’s the ecosystem, like hiring Ray Wolf as the individual contributor, yet getting 13 Ray Wolfs; that’s great.

But what happens if we orchestrate the work between finance, HR, the supply chain, and procurement? And then we take it an even bigger step by applying this outside of our company with partners?

We’re working with a very large distributor right now with hundreds of resellers. In order to close deals, they have to get into the other partner’s CRM system. Well, today, that happens with about eight emails over a number of days. And that’s just inefficient. But with Citrix Workspace you’re able to cross-integrate processes inside and outside of your company in a secure manner, so that entire ecosystems work seamlessly. As an example, just think about the travel reservation systems, which are not owned by the airlines but are still a heart-lung function for them, and they have to work in unison.

We’re really jazzed about that. How did we discover this? Two things. One, I’m an aerospace engineer by first degree, so I saw this come together in complex machines, like jet engines. And then, second, by running a global company, I was spending 80 hours a week trying to reconcile disparate data: One data set says sales were up, another that productivity was up, and then my profit margins go down. I couldn’t figure it out without spending a lot of hours.

And then we started a new way of thinking, which is now accelerated with the Citrix Workspace. Disparate systems can work together. It makes clear what needs to be done, and then we can move to the next level, which is democratization of data. With that, you’re able to put information in front of people in synchronization. They can see complex supply chains in full, they can close sales quicker, et cetera. So, it’s really awesome.

I think we’re still at the tip of the iceberg. The innovation that I’m aware of on the product roadmap with Citrix is just awesome, and that’s why we’re here as a partner.

Gardner: Tim, we’re hearing about the importance of extended enterprise collaboration and democratization of data. Is there anything in your research that shows why that’s important and how you’re using that understanding of what’s important to help shape the direction of Citrix products?

Augmented workers arrive

Minahan: As Ray said, it’s about abstracting away that lower-level complexity, providing all the integrations, the source systems, the service security model, and providing the underlying workflow engines and tools. Then experts like Lucid and A2K can extend that to create new solutions for driving business outcomes.

From the research, we can expect the emergence of the augmented worker, number one. We’re already beginning to see it with bots and robotic process automation (RPA) systems. But at Citrix we’re going to be moving to a much higher level, doing things similar to what Ray and Jeff were saying: abstracting away a lot of the menial tasks that can be automated. We can also perform tasks at a higher level, and at a much more informed and rapid pace, through use of AI, which can compress and analyze massive amounts of data that would take us a very long time individually. ML can adapt and personalize that experience for us.


Secondly, the research indicates that while robots will replace some tasks and jobs, they will also create many new jobs. And, according to our Work 2035 research, you’ll see a rise in demand for new roles, such as a bot or AI trainer, a virtual reality manager, advanced data scientists, privacy and trust managers, and design thinkers such as the folks at A2K and Lucid Technology Services. They are already working with clients to uncover the art of the possible and rethink business process transformation.

Importantly, we also identified the need for flexibility of work. That means shifting your mindset from thinking about a workforce in terms of full-time equivalents (FTEs) to thinking in terms of pools of talent. You understand the individual skillsets that you need, bring them together and assemble them rather quickly to address a certain project or issue using digital Citrix Workspace technology, and then disassemble them just as quickly.

But you’ll also see a change in leadership. AI is going to take over a lot of those business decisions and possibly eliminate the need for some middle management teams. The bulk of our focus can be not so much managing as driving new creative ideas and innovation.

Gardner: I’d love to hear more from both Jeff and Ray about how businesses prepare themselves to best take advantage of the next stages of remote work. What do you tell businesses about thinking differently in order to take advantage of this opportunity?

Imagine what’s possible at work

Vincent: Probably the single biggest thing you can do to get prepared for the future of work is to rethink IT and your human capital, your team members. What do they need as a whole?

A business calls me up and says, “Our server is getting old, we need to get a new server.” And previously, I’d say, “Well, I don’t know if you actually need a server on-site, maybe we talk about the cloud.”

So educate yourself as a business leader on what is possible out there. Then take that step: listen to your IT staff, listen to your IT director, whoever that may be, and talk to them about what is out there and what’s really possible. The technology enabling remote work has grown exponentially, even in the last few months, in its adoption and capabilities.

If you looked at the technology a year or two ago, that world doesn’t exist anymore. The technology has grown dramatically. The price point has come down dramatically. What is now possible wasn’t a few years ago.

So listen to your technology advisers, look at what’s possible, and prepare yourself for the next step. Take capital and reinvest it into the future of work.

Wolf: What we’re seeing that’s working the best is people are getting started anyway, anyhow. There really wasn’t a playbook set up for a pandemic, and it’s still evolving. We’re experiencing about 15 years’ worth of change in every three months of what’s going on. And there’s still plenty of uncertainty, but that can’t paralyze you.


We recommend that people fundamentally take a look at what your core business is. What do you do for a living? And then everything that enables you to do that is kind of ancillary or secondary.

When it comes to your workforce -- whether it’s comprised of contractors, freelancers, or permanent employees -- no matter where they are, have a get-stuff-done mentality. It’s about what you are trying to get done. Don’t ask them about the systems yet. Just say, “What are you trying to get done?” And, “What will it take for you to double your speed and essentially only put half the effort into it?”

And listen. And then define, configure, and acquire the technologies that will enable that to happen. We need to think about what’s possible at the ground level, and not so much thinking about it all in terms of the systems and the applications. What are people trying to do every day and how do we make their work experience and their work life better so that they can thrive through this situation as well as the company?

Gardner: Tim, what did you find most surprising or unexpected in the research from the Work 2035 project? And is there a way for our audience to learn more about this Citrix research?

Minahan: One of the most alarming things to me from the Work 2035 project, the one where we’ve gotten the most visceral reaction, was the anticipation that, by 2035, in order to gain an advantage in the workplace, employees would literally be embedding microchips to help them process information and be far more productive in the workforce.

I’m interested to see whether that comes to bear or not, but certainly it’s very clear that with the role of AI and ML we’re only scratching the surface as we drive to new work models and new levels of productivity. We’re already seeing the beginnings of the augmented worker and just what’s possible when you have bots sitting -- virtually and physically -- alongside employees in the workplace.

We’re seeing the future of work accelerate much quicker than we anticipated. As we emerge out the other side of the pandemic, with the guidance of folks like Lucid and A2K, companies are beginning to rethink their work models and liberate their thinking in ways they hadn’t considered for decades. So it’s an incredibly exciting time.

Gardner: And where can people go to learn more about your research findings at Citrix?

Minahan: To view the Work 2035 project, you can find the foundational research at Citrix.com, but this is an ongoing dialogue that we want to continue to foster with thought leaders like Ray and Jeff, as well as academia and governments, as we all prepare not just technically but culturally for the future of work.

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on the impacts from a once-in-a-lifetime disruption of how people are thinking about work. And we’ve learned how a remote-first strategy can lead to reinvention of work expectations and payoffs -- including perhaps a doubling of overall productivity in the coming years.

So a big thank you to our guests, Jeff Vincent, CEO at Lucid Technology Services. Thank you so much, Jeff.

Vincent: Thank you guys. I really had a great time being a part of this discussion and I look forward to the next one.

Gardner: And a big thank you as well to Ray Wolf, CEO at A2K Partners. Thank you, Ray.

Wolf: You’re welcome, Dana. It’s great to explore this topic.

Gardner: And thank you also to Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. Always a pleasure, Tim.

Minahan: Thanks Dana. I really enjoyed the discussion.


Gardner: And lastly, a big thank you to our audience for joining this BriefingsDirect remote work innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Citrix-supported BriefingsDirect discussions.

Thanks again for listening. Please pass this along to your community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Citrix.

Transcript of a discussion on new research into the future of work and how unprecedented innovation could mean a doubling of overall productivity in the coming years. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.


Friday, November 20, 2020

How the Journey to Modern Data Management is Paved with an Inclusive Edge-to-Cloud Data Fabric


Transcript of a discussion on the best ways widely inclusive data can be managed for today’s data-rich but too often insights-poor organizations.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

 

Dana Gardner: Hello, and welcome to the next BriefingsDirect Voice of Analytics Innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into end-to-end data management strategies.


As businesses seek to gain insights from more elements of their physical edge -- from factory sensors, myriad machinery, and across field operations -- data remains fragmented. But a Data Fabric approach allows information and analytics to reside locally at the edge, yet contribute to the global improvement in optimizing large-scale operations.

Stay with us now as we explore how edge-to-core-to-cloud dispersed data can be harmonized with a common fabric to make it accessible for use by more apps and across more analytics.

To learn more about the ways all data can be managed for today’s data-rich but too often insights-poor organizations, we’re joined by Chad Smykay, Field Chief Technology Officer for Data Fabric at Hewlett Packard Enterprise (HPE). Welcome, Chad.

 


Chad Smykay: Thank you.

 

Gardner: Chad, why are companies still flooded with data? It seems like they have the data, but they’re still thirsty for actionable insights. If you have the data, why shouldn’t you also have the insights readily available?

 

Smykay: There are a couple of reasons for that. We still see challenges for our customers today. One is just having a common data governance methodology. That’s not just governing the security and audits, and the techniques around that -- but determining just what your data is.

 

I’ve gone into so many projects where they don’t even know where their data lives; just a simple matrix of where the data is, where it lives, and how it’s important to the business. This is really the first step that most companies just don’t do.

 

Gardner: What’s happening with managing data access when they do decide they want to find it? What’s been happening with managing the explosive growth of unstructured data from all corners of the enterprise?

 

Tame your data

 

Smykay: Five years ago, it was still the Wild West of data access. But we’re finally seeing some great standards being deployed and application programming interfaces (APIs) for that data access. Companies are now realizing there’s power in having one API to rule them all. In this case, we see mostly Amazon S3.

 

There are some other great APIs for data access out there, but just having more standardized API access into multiple datatypes has been great for our customers. It allows for API access across many different use cases. For example, business intelligence (BI) tools can come in via an API, or an application developer can access the same API. So that approach really cuts down on my access methodologies, my security domains, and just how I manage that data for API access.
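As a rough sketch of that one-API idea, the same S3-style calls can serve a BI export and an application developer alike, whether the bucket lives with a cloud provider or behind an on-premises S3-compatible endpoint. The endpoint URL, bucket, and credentials below are hypothetical.

```python
import boto3

# The endpoint is a hypothetical S3-compatible service; this URL is
# the only thing that changes when the data moves between providers
# or on-premises systems -- the client code stays the same.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-fabric.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# A BI tool and an application developer hit the same API the same way.
for obj in s3.list_objects_v2(Bucket="sales-data").get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket="sales-data", Key="2020/q3/orders.csv")["Body"].read()
```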

 

Gardner: And when we look to get buy-in from the very top levels of businesses, why are leaders now rethinking data management and exploitation of analytics? What are the business drivers that are helping technologists get the resources they need to improve data access and management?

 

Smykay: The business gains when data access methods are as reusable as possible across the different use cases. It used to be that you’d have different point solutions, or different open source tools, needed to solve a business use case. That was great for the short term, maybe for some quarterly project or something for the year you did it in.


 

But then, down the road, say three years out, they would say, “My gosh, we have 10 different tools across the many different use cases we’re using.” It makes it really hard to standardize for the next set of use cases.

 

So that’s been a big business driver, gaining a common, secure access layer that can access different types of data. That’s been the biggest driver for our HPE Data Fabric. That and having common API access definitely reduces the management layer cost, as well as the security cost.

 

Gardner: It seems to me that such data access commonality, when you attain it, becomes a gift that keeps giving. The many different types of data often need to go from the edge to dispersed data centers and sometimes dispersed in the cloud. Doesn’t data access commonality also help solve issues about managing access across disparate architectures and deployment models?

 

Smykay: You just hit the nail on the head. Having commonality for that API layer really gives you the ability to deploy anywhere. When I have the same API set, it makes it very easy to go from one cloud provider, or one solution, to another. But that can also create issues in terms of where my data lives. You still have data gravity issues, for example. And if you don’t have portability of the APIs and the data, you start to see some lock-in with either the point solution you went with or the cloud provider that’s providing that data access for you.

 

Gardner: Following through on the gift that keeps giving idea, what is it about the Data Fabric approach that also makes analytics easier? Does it help attain a common method for applying analytics?

 

Data Fabric deployment options

 

Smykay: There are a couple of things there. One, it allows you to keep the data where it may need to stay. That could be for regulatory reasons, or just depending on where you build and deploy the analytics models. A Data Fabric helps you to start separating out your computing and storage capabilities, but also keeps them coupled for wherever the deployment location is.

 


For example, a lot of our customers today have the flexibility to deploy IT resources out in the edge. That could be a small cluster or system that pre-processes data. They may typically slowly trickle all the data back to one location, a core data center or a cloud location. Having these systems at the edge gives them the benefit of both pushing information out, as well as continuing to process at the edge. They can choose to deploy as they want, and to make the data analytics solutions deployed at the core even better for reporting or modeling.

 

Gardner: It gets to the idea of act locally and learn globally. How is that important, and why are organizations interested in doing that?

 

Smykay: It’s just-in-time, right? We want everything to be faster, and that’s what this Data Fabric approach gets for you.

 

In the past, we’ve seen edge solutions deployed, but you weren’t processing a whole lot at the edge. You were pushing along all the data back to a central, core location -- and then doing something with that data. But we don’t have the time to do that anymore.

 

Unless you can change the laws of physics -- last time I checked, they haven’t done that yet -- we’re bound by the speed of light for these networks. And so we need to keep as much data and systems as we can out locally at the edge. Yet we need to still take some of that information back to one central location so we can understand what’s happening across all the different locations. We still want to make the rearview reporting better globally for our business, as well as allow for more global model management.
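In miniature, that act-locally, learn-globally pattern looks like the following sketch: raw readings are digested at the edge, and only a compact summary crosses the network to the core. The core endpoint and the payload shape are invented for illustration.

```python
import statistics
import requests

# Hypothetical core endpoint -- a stand-in for whatever central system
# collects per-site digests for global reporting and model management.
CORE_URL = "https://core.example.com/api/site-summaries"

def summarize_and_ship(site_id, readings):
    """Process raw sensor data locally; ship only the digest to the core.
    Heavy traffic stays off the WAN, but global models still get fed."""
    summary = {
        "site": site_id,
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
    }
    requests.post(CORE_URL, json=summary, timeout=5)

summarize_and_ship("plant-07", [21.4, 22.0, 21.8, 35.2, 21.9])
```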

 

Gardner: Let’s look at some of the hurdles organizations have to overcome to make use of such a Data Fabric. What is it about the way that data and information exist today that makes it hard to get the most out of it? Why is it hard to put advanced data access and management in place quickly and easily?

 

Track the data journey

 

Smykay: It’s tough for most organizations because they can’t take the wings off the airplane while flying. We get that. You have to begin by creating some new standards within your organization, whether that’s standardizing on an API set for different datatypes, multiple datatypes, a single datatype.

 

Then you need to standardize the deployment mechanisms within your organization for that data. With the HPE Data Fabric, we give the ability to just say, “Hey, it doesn’t matter where you deploy. We just need some x86 servers and we can help you standardize either on one API or multiple APIs.”

 

We now support more than 10 APIs, as well as the many different data types that these organizations may have.


 

Typically, we see a lot of data silos still out there today with customers -- and they’re getting worse. By worse, I mean they’re now all over the place between multiple cloud providers. I may use some of these cloud storage bucket systems from cloud vendor A, but I may use somebody else’s SQL databases from cloud vendor B, and those may end up having their own access methodologies and their own software development kits (SDKs).

 

Next you have to consider all the networking in the middle. And let’s not even bring up security and authorization to all of them. So we find that the silos still exist, but they’ve just gotten worse and they’ve just sprawled out more. I call it the silo sprawl.

 

Gardner: Wow. So, if we have that silo sprawl now, and that complexity is becoming a hurdle, the estimates are that we’re going to just keep getting more and more data from more and more devices. So, if you don’t get a handle on this now, you’re never going to be able to scale, right?

 

Smykay: Yes, absolutely. If you’re going to have diversity of your data, the right way to manage it is to make it use-case-driven. Don’t boil the ocean. That’s where we’ve seen all of our successes. Focus on a couple of different use cases to start, especially if you’re getting into newer predictive model management and using machine learning (ML) techniques.

But, you also have to look a little further out to say, “Okay, what’s next?” Right? “What’s coming?” When you go down that data engineering and data science journey, you must understand that, “Oh, I’m going to complete use case A, that’s going to lead to use case B, which means I’m going to have to go grab from other data sources to either enrich the model or create a whole other project or application for the business.”

You should create a data journey and understand where you’re going so you don’t just end up with silo sprawl.

Gardner: Another challenge for organizations is their legacy installations. When we talk about zettabytes of data coming, what is it about the legacy solutions -- and even the cloud storage legacy -- that organizations need to rethink to be able to scale?

Zettabytes of data coming

Smykay: It’s a very important point. Can we just have a moment of silence? Because now we’re talking about zettabytes of data. Okay, I’m in.

Some 20 years ago, we were talking about petabytes of data. We thought that was a lot of data, but if you look out to the future, studies are showing connected Internet of Things (IoT) devices generating zettabytes of data.


If you don’t get a handle on where your data points are going to be generated, how they’re going to be stored, and how they’re going to be accessed now, this problem is just going to get worse and worse for organizations.

Look, Data Fabric is a great solution. We have it, and it can solve a ton of these problems. But as a consultant, if you don’t get ahead of these issues right now, you’re going to be under the umbrella of probably 20 different cloud solutions for the next 10 years. So, really, we need to look at the datatypes that you’re going to have to support, the access methodologies, and where those need to be located and supported for your organization.

Gardner: Chad, it wasn’t that long ago that we were talking about how to manage big data, and Hadoop was a big part of that. NoSQL and other open source databases in particular became popular. What is it about the legacy of the big data approach that also needs to be rethought?

Smykay: One common issue we often see is the tendency to go either/or. By that I mean saying, “Okay, we can do real-time analytics, but that’s a separate data deployment. Or we can do batch, rearview reporting analytics, and that’s a separate data deployment.” But one thing that our HPE Data Fabric has always been able to support is both -- at the same time -- and that’s still true.

So if you’re going down a big data or data lake journey -- I think the term now is a data lakehouse, that’s a new one -- basically I need to be able to do my real-time analytics, as well as my traditional BI reporting, or rearview-mirror reporting. And that’s what we’ve been doing for over 10 years. That’s probably one of the biggest limitations we have seen.

But it’s a heavy lift to get that data from one location to another, just because of the metadata layer of Hadoop. And then you had dependencies with some of these NoSQL databases out there on Hadoop, which caused some performance issues. You can only get so much performance out of those databases, which is why we have NoSQL databases out of the box in our Data Fabric -- and we’ve never run into any of those issues.

Gardner: Of course, we can’t talk about end-to-end data without thinking about end-to-end security. So, how do we think about the HPE Data Fabric approach helping when it comes to security from the edge to the core?

Secure data from edge to core

 

Smykay: This is near-and-dear to my heart because everyone always talks about these great solutions out there to do edge computing. But I always ask, “Well, how do you secure it? How do you authorize it? How does my application authorization happen all the way back from the edge application to the data store in the core or in the cloud somewhere?”

That’s what I call auth sprawl, where those issues just add up. If we don’t have one way to secure and manage all of our different data types, then what happens is, “Okay, well, I have this object-based system out there, and it has its own authorization techniques. It has its own authentication techniques. By the way, it has its own way of enforcing security in terms of who has access to what.” And I haven’t even talked about monitoring, right? How do we monitor this solution?

So, now imagine doing that for each type of data that you have in your organization -- whether it’s a SQL database, because that application is just a driving requirement for that, or a file-based workload, or a block-based workload. You can see where this starts to steamroll and build up to be a huge problem within an organization, and we see that all the time.


 

And, by the way, when it comes to your application developers, that becomes the biggest annoyance for them. Why? Because when they want to go and create an application, they have to go and say, “Okay, wait. How do I access this data? Oh, it’s different. Okay. I’ll use a different key.” And then, “Oh, that’s a different authorization system. It’s a completely different way to authenticate with my app.”

I honestly think that’s why we’re seeing a ton of issues today in the security space. It’s why we’re seeing people get hacked. It happens all the way down to the application layer, as you often have this security sprawl that makes it very hard to manage all of these different systems.

Gardner: This word sprawl has come up several times now. We’re sprawling with this, we’re sprawling with that; there’s complexity, and then there’s going to be even more scale demanded.


The bad news is there is quite a bit to consider when you want end-to-end data management that takes the edge into consideration and has all these other anti-sprawl requirements. The good news is a platform and standards approach with a Data Fabric forms the best, single way to satisfy these many requirements.

So let’s talk about the solutions. How does HPE Ezmeral generally -- and the Ezmeral Data Fabric specifically -- provide a common means to solve many of these thorny problems?

Smykay: We were just talking about security. We provide the same security domain across all deployments. That means having one web-based user interface (UI), or one REST API call, to manage all of those different datatypes.

We can be deployed across any x86 system. And having that multi-API access -- we have more than 10 -- allows for multi-data access. It includes everything from storing data in files to, soon, storing data in blocks in our solution. And then storing data into event streams such as Kafka, and into a NoSQL database as well.
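For the streaming side of that multi-API access, here is a sketch using the standard kafka-python client against a hypothetical Kafka-compatible broker. The point of API compatibility is that ordinary client code like this works unchanged, wherever the stream actually lives.

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# Hypothetical broker address; with a Kafka-compatible stream API, the
# standard client needs nothing vendor-specific.
BROKERS = "fabric-node1.example.com:9092"

producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("sensor-events", {"site": "plant-07", "temp_c": 21.4})
producer.flush()

consumer = KafkaConsumer(
    "sensor-events",
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # e.g. route into a NoSQL table for serving
    break
```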

Gardner: It’s important for people to understand that HPE Ezmeral is a larger family and that the Data Fabric is a subset. But the whole seems to be greater than the sum of the parts. Why is that the case? How has what HPE is doing in architecting Ezmeral been a lot more than data management?

Smykay: Whenever you have this “whole is greater than the sum of the parts,” you start reducing so many things across the chain. When we talk about deploying a solution, that includes, “How do I manage it? How do I update it? How do I monitor it?” And then back to securing it.

Honestly, there is a great report from IDC that says it best. We show a 567-percent, five-year return on investment (ROI). That’s not from us, that’s IDC talking to our customers. I don’t know of a better business value from a solution than that. The report speaks for itself, but it comes down to these paper cuts of managing a solution. When you start to have multiple paper cuts, across multiple arms, it starts to add up in an organization.

Gardner: Chad, what is it about the HPE Ezmeral portfolio and the way the Data Fabric fits in that provides a catalyst to more improvement?

 

All data put to future use

 

Smykay: One, the HPE Data Fabric can be deployed anywhere. It can be deployed independently. We have hundreds and hundreds of customers. We have to continue supporting them on their journey of compute and storage, but today we are already shipping a solution where we can containerize the Data Fabric as a part of our HPE Ezmeral Container Platform and also provide persistent storage for your containers.

 

The HPE Ezmeral Container Platform comes with the Data Fabric, it’s a part of the persistent storage. That gives you full end-to-end management of the containers, not only the application APIs. That means the management and the data portability.

 

So, now imagine being able to ship the data by containers from your location, as it makes sense for your use case. That’s the powerful message. We have already been down the road of separating out compute and storage. That road is not going away; we have many customers for it, and it makes sense for many use cases. And we’re in general availability today. There are some other solutions out there that are still on a road map as far as we know, but at HPE we’re there today. Customers have this deployed, and they’re going down their compute-and-storage separation journey with us.

Gardner: One of the things that gets me excited about the potential for Ezmeral is when you do this right, it puts you in a position to be able to do advanced analytics in ways that hadn’t been done before. Where do you see the HPE Ezmeral Data Fabric helping when it comes to broader use of analytics across global operations?

Smykay: One of our CMOs, Jack Morris, said it best: "If it's going to be about the data, it better be all about the data."

When you automate data management across multiple deployments -- managing it, monitoring it, keeping it secure -- you can then focus on the actual use cases. You can focus on the data itself, right? That data is living in the HPE Data Fabric. That is the higher-level takeaway: our users are not spending all their time and money worrying about the data lifecycle. Instead, they can use that data for their organizations and for future use cases.

HPE Ezmeral sets your organization up to use your data instead of worrying about your data. With the Data Fabric you can take on newer use cases, separate out compute and storage, and run it all in containers -- and we have been doing that for years.

Gardner: How about some of the technical ways that you’re doing this? Things like global namespaces, analytics-ready fabrics, and native multi-temperature management. Why are they important specifically for getting to where we can capitalize on those new use cases?

Smykay: The global namespace is probably the top feature we hear back from our customers on. It allows them to gain one view of the data under the same common security model. Imagine you're a lawyer sitting at your computer: you double-click on a Data Fabric drive, and you can literally see all of your deployments globally. That helps with discovery, and it helps with onboarding your data engineers and data scientists. Over the years, one of the biggest challenges has been that organizations spend a lot of time building up their data science and data engineering groups, and those groups then spend a lot of time just discovering the data.

A global namespace means I'm reducing my discovery time in figuring out where the data is. As for being analytics-ready, we have been supporting that in the open source community for more than 10 years. There are a ton of Apache open source projects out there, like Presto, Hive, and Drill. And of course the fabric is Spark-ready -- we have been supporting Spark for many years, and it's pretty much the de facto standard we're seeing when it comes to doing any kind of real-time processing or analytics on data.
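
As one way to picture the global namespace and Spark-readiness working together, the sketch below assumes the fabric is exposed as a POSIX-style mount; the /mapr/prod-cluster/... path, dataset, and column names are invented for illustration:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fabric-demo").getOrCreate()

# The global namespace exposes every deployment under one path tree;
# "/mapr/prod-cluster/telemetry/" is a hypothetical mount point here.
df = spark.read.parquet("/mapr/prod-cluster/telemetry/antenna_readings")

# Same security model, same path style, wherever the data physically lives.
df.groupBy("site_id").avg("signal_db").show()
```

No connector gymnastics, no per-site credentials: discovery collapses to browsing one path tree.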

As for multi-temperature management, that feature allows you to decrease the cost of your deployment while still managing all your data in one location. There are a lot of different ways we do that: we use erasure coding, and we can tier off to Amazon S3-compatible devices to reduce the overall cost of deployment.
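
The multi-temperature idea can be pictured with a deliberately naive sketch. This is a toy policy for illustration only -- it is not how the Data Fabric implements tiering internally -- and the endpoint, bucket, and 90-day threshold are arbitrary assumptions:

```python
import os
import time
import boto3

COLD_AFTER_DAYS = 90  # arbitrary cutoff for this sketch

# Placeholder S3-compatible "cold" tier endpoint and credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="https://archive.example.com:9000",
    aws_access_key_id="DEMO_KEY",
    aws_secret_access_key="DEMO_SECRET",
)

def tier_cold_files(hot_dir: str, bucket: str) -> None:
    """Move files not modified for COLD_AFTER_DAYS to cheaper object storage."""
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for name in os.listdir(hot_dir):
        path = os.path.join(hot_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            with open(path, "rb") as f:
                s3.put_object(Bucket=bucket, Key=name, Body=f)
            os.remove(path)  # the hot tier keeps only recently used data

tier_cold_files("/data/hot", "cold-archive")
```

The value of doing this inside the fabric, rather than with scripts like this one, is that the data stays in one namespace and under one security model no matter which temperature tier it lands on.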

These features all contribute to making it still easier: you gain a common Data Fabric, a common security layer, and a common API layer.

Gardner: Chad, we talked about much more data at the edge, how that’s created a number of requirements, and the benefits of a comprehensive approach to data management. We talked about the HPE Data Fabric solution, what it brings, and how it works. But we’ve been talking in the abstract.

What about on the ground? Do you have any examples of organizations that have bought in and made the Data Fabric core to their operations? As adopters, what do they get? What are the business outcomes?

Central view benefits businesses

Smykay: We’ve been talking a lot about edge-to-core-to-cloud, and the one example that’s just top-of-mind is a big, tier-1 telecoms provider. This provider makes the equipment for your AT&Ts and your Vodafones. That equipment sits out on the cell towers. And they have many Data Fabric use cases, more than 30 with us.

But the one I love most is real-time antenna tuning. They’re able to improve customer satisfaction in real time and reduce the need to physically return to hotspots on an antenna. They do it via real-time data collection on the antennas and then aggregating that across all of the different layers that they have in their deployments.

They gain a central view of all of the data using a modern API for their DevOps needs. They still centrally process data, but today they also process it at the edge. We replicate all of that data, we manage it, and we take a lot of the traditional data management tasks off the table, so they can focus on the use case itself -- the best way to tune the antennas.

Gardner: They have the local benefit of tuning the antenna. But what's the global payback? Are there quantitative or qualitative business returns for them in doing that?

Smykay: Yes, but they're pretty secretive. We've heard that they've gotten a payback in the millions of dollars, but an immediate, direct payback for them is reduced application development spend across all of those layers. That reduction comes because they can use the same type of API to publish that data as a stream, and then use the same API semantics to secure and manage it all. They can then take that same application, which is deployed in a container today, and easily deploy it to any remote location around the world.

Gardner: There’s that key aspect of the application portability that we’ve danced around a bit. Any other examples that demonstrate the adoption of the HPE Data Fabric and the business pay-offs?

Smykay: Another one off the top of my head is a midstream oil and gas customer in the Houston area. This one’s not so much about edge-to-core-to-cloud. This is more about consolidation of use cases.

We discussed earlier that we can support rearview-mirror reporting analytics as well as real-time reporting use cases. In this case, they actually have multiple use cases, up to about five or six right now. Among them, they are able to produce predictive failure reports for heat exchangers. These heat exchangers are deployed regionally, and they are really temperamental; you have to monitor them all the time.

But now they have a proactive model in which they do predictive failure monitoring on those heat exchangers just by checking the temperatures picked up by cameras on the floor. They bring in all of that real-time camera data, and they can predict, "We think we're having an issue with this heat exchanger at this time on this day." That decreases management costs for them.
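
A rolling-statistics check conveys the flavor of that kind of predictive monitoring. This is a toy detector, not the customer's actual model, and the readings and thresholds below are made up:

```python
from collections import deque
from statistics import mean, stdev

class ExchangerMonitor:
    """Flag a heat exchanger when its temperature drifts from recent history."""

    def __init__(self, window: int = 60, sigma: float = 3.0):
        self.readings = deque(maxlen=window)  # rolling window of recent temps
        self.sigma = sigma                    # how many std devs counts as a drift

    def check(self, temp_c: float) -> bool:
        suspicious = False
        if len(self.readings) >= 30:  # need enough history to be meaningful
            mu, sd = mean(self.readings), stdev(self.readings)
            suspicious = sd > 0 and abs(temp_c - mu) > self.sigma * sd
        self.readings.append(temp_c)
        return suspicious

monitor = ExchangerMonitor()
for t in [71.2, 71.5, 70.9] * 12 + [88.4]:   # simulated camera-derived temps
    if monitor.check(t):
        print(f"Possible failure: reading {t} C deviates from recent baseline")
```

A real deployment would feed the detector from the streaming camera data the fabric already collects, which is exactly why having that data in one place matters.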

They also gain a dynamic parts management capability for all of the inventory in their warehouses. They can deliver faster, not only on parts, but they reduce their capital expenditure (CapEx) costs, too. And they have gained material measurement balances: when they push oil across a pipeline, they can detect where that balance is off along the pipeline and where they're losing money, because if they are not pushing oil through the pipe at the right psi, they're losing money.

So they're able to dynamically detect that and fix it along the pipe. They also have pipeline leak detection in the works, which is modeled to detect corrosion and decay.

The point is, there are multiple use cases. And because they're able to start putting those data types together and continue to build off of them, every use case gets stronger and stronger.

Gardner: It becomes a virtuous adoption cycle; the more you can use the data generally, then the more value, then the more you invest in getting a standard fabric approach, and then the more use cases pop up. It can become very powerful.

This last example also shows the intersection of operational technology (OT) and IT. Together they can start to discover high-level, end-to-end business operational efficiencies. Is that what you’re seeing?

Data science teams work together

Smykay: Yes, absolutely. A Data Fabric is kind of the Kumbaya moment among these different groups. If they're able to standardize on the IT and developer side, it makes it easier for them to talk the same language. I've seen this with the oil and gas customer: those data science and data engineering teams now work hand in hand, which is where you want to get in your organization. You want the IT teams working with the teams managing your solutions today. That's what I'm seeing. As you get a better, more common data model or fabric, you move faster and you gain management savings because your people work better together.

Gardner: And, of course, when you're able to do data-driven operations, procurement, logistics, and transportation, you get to what we generally refer to as digital business transformation.

Chad, how does a Data Fabric approach then contribute to the larger goal of business transformation?

Smykay: It allows organizations to work together through a common data framework. That’s been one of the biggest issues I’ve seen, when I come in and say, “Okay, we’re going to start on this use case. Where is the data?”

Depending on the size of the organization, you're talking to three to five different groups, and sometimes 10 different people, just to put a use case together. But as you create a common data access method, it becomes easier and easier not only to build your use cases, but for your business groups to work together on whatever goal you're trying to use your data for.

Gardner: I’m afraid we’ll have to leave it there. We’ve been exploring how a Data Fabric approach allows information and analytics to reside locally at the edge, yet contribute to a global improvement in optimizing large-scale operations.

And we’ve learned how HPE Ezmeral Data Fabric makes modern data management more attainable so businesses can dramatically improve their operational efficiency and innovate from edge to core to clouds.

So please join me in thanking our guest, Chad Smykay, Field Chief Technology Officer for Data Fabric at HPE. Thanks so much, Chad.

Smykay: Thank you, I appreciate it.

Gardner: And a big thank you as well to our audience for joining this sponsored BriefingsDirect Voice of Analytics Innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-supported discussions.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on the best ways widely inclusive data can be managed for today’s data-rich but too often insights-poor organizations. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.
