
Monday, July 08, 2019

Qlik’s Top Researcher Describes New Ways for Human Cognition and Augmented Intelligence to Join Forces


Transcript of a discussion on how the latest research and products bring the power of people and machine intelligence closer together to make analytics consumable across more business processes.
 
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Qlik.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next business intelligence (BI) trends discussion explores the latest research and products that bring the power of people and machine intelligence closer together.

As more data becomes available to support augmented intelligence -- and the power of analytics platforms increasingly goes to where the data is -- the next stage of value is in how people can interact with the results.

Stay with us now as we examine the latest strategies for not only visualizing data-driven insights but making them conversational and even presented through a form of storytelling.

To learn more about making the consumption and refinement of analytics delivery an interactive experience open to more types of users, we are now joined by Elif Tutuk, Head of Research at Qlik. Welcome to BriefingsDirect.

Elif Tutuk: Thank you. It’s a great pleasure to be here.


Gardner: Strides have been made in recent years for better accessing data and making it available to analytics platforms, but the democratization of the results and making insights consumable by more people is just beginning. What are the top technical and human interaction developments that will broaden the way that people interact differently with analytics?

Trusted data for all


Tutuk: That's a great question. We are doing a lot of research in this area, in terms of creating new user experiences that advance data literacy and improve people's ability to read, analyze, and argue with data.

In terms of the user experience, the conversational aspect has a big impact. But we also believe it's not only about conversation, especially when you want to understand data. The visual exploration part should also be there. We are creating experiences that combine the capabilities unique to humans: language and visual exploration. We think that combination is the key to building good collaboration between the human and the machine.

Gardner: As a result, are we able to increase the number and types of people impacted by data by going directly to them -- rather than through a data scientist or an IT department? How are the interaction elements broadening this to a wider clientele?

Tutuk: The idea is to make analysis available to everyone, from C-level users to business end users.

If you want to broaden the use of analytics and lower the barrier, you also need to make sure that the data, the machines, and the system are governed and trusted.

Our enterprise data management strategy therefore becomes important for our Cognitive Engine technology. We are combining those two so that the machines use a governed data source to provide trusted information.

Gardner: What strikes me as quite new now is more interaction between human cognition and augmented intelligence. It’s almost a dance. It creates new types of insights, and new and interesting things can happen.

How do you attain the right balance in the interactions between human cognition and AI?

Tutuk: It is about creating experiences that balance what the human is good at -- perception, awareness, and ultimately decision-making -- with what machine technology is good at, such as running algorithms on large amounts of data.

As the machine serves insights to the user, it needs to first create trust about what data is used and the context around it. Without the context you cannot really take that insight and make an action on it. And this is where the human part comes in, because as humans you have the intuition and the business knowledge to understand the context of the insight. Then you can explore it further by being augmented. Our vision is for making decisions by leveraging that [machine-generated] insight.

Gardner: In addition to the interactions, we are hearing about the notion of storytelling. How does that play a role in ways that people get better analytics outcomes?

Storytelling insights support


Tutuk: We have been doing a lot of research and thinking in this area because today, in the analytics market, AI is becoming robust. These technologies are developing very well. But the challenge is that most of them deliver results like a black box. As a user, you don't know why the machine is making a suggestion or surfacing an insight. And that creates a big trust issue.

To have greater adoption of the AI results, you need to create an experience that builds trust, and that is why we are looking at one of the most effective and timeless forms of communication that humans use, which is storytelling.

So we are creating unique experiences where the machine generates an insight. And then, on the fly, we create data stories generated by the machine, thereby providing more context. As a user, you can have a great narrative, but then that narrative is expanded with insightful visualizations. From there, based on what you gain from the story, we are also looking at capabilities where you can explore further.

And in that third step you are still being augmented, but able to explore. It is user-driven. That is where you start introducing human intuition as well.

And when you think about the machine first surfacing insights, then getting more context with the data story, and lastly going to exploration -- all three phases can be tied together in a seamless flow. You don’t lose the trust of the human. The context becomes really important. And you should be able to carry the context between all of the stages so that the user knows what the context is. Adding the human intuition expands that context.
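To make that three-phase flow concrete, here is a minimal Python sketch of an insight-to-story-to-exploration loop that carries one context object across all stages. Every name and data value in it is an invented illustration, not a Qlik API:

```python
# A minimal sketch of the three-phase flow described above: the machine
# surfaces an insight, wraps it in a narrative, and hands off to
# user-driven exploration -- with one context object carried throughout.
# All names and data here are invented illustrations, not Qlik APIs.
from dataclasses import dataclass, field

@dataclass
class AnalyticsContext:
    """Sticky scope so the user never loses track of what is being analyzed."""
    filters: dict = field(default_factory=dict)

def surface_insight(ctx: AnalyticsContext) -> dict:
    # Phase 1: the machine finds something notable within the current scope.
    return {"metric": "churn", "change": "+12% vs. prior quarter",
            "scope": dict(ctx.filters)}

def tell_story(insight: dict) -> str:
    # Phase 2: a machine-generated narrative that keeps the context visible.
    return (f"Within scope {insight['scope']}, {insight['metric']} moved "
            f"{insight['change']}.")

def explore(ctx: AnalyticsContext, **refinements) -> AnalyticsContext:
    # Phase 3: the user applies intuition; context accumulates, never resets.
    ctx.filters.update(refinements)
    return ctx

ctx = AnalyticsContext(filters={"region": "west"})
print(tell_story(surface_insight(ctx)))        # machine-led story
explore(ctx, product_line="premium")           # human-led refinement
print(tell_story(surface_insight(ctx)))        # same thread, expanded context
```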

Gardner: I really find this fascinating because we are talking not just about problem and solution; we are talking about problem, solution, and resolution -- then readjusting and examining the problem for even more solution and resolution. We are also now, of course, in the era of augmented reality, where we can bring these types of data analysis outputs to people on a factory floor, wearing different types of visual and audio cue devices.

So the combination of augmented reality, augmented intelligence, storytelling, and bringing it out to the field strikes me as something really unprecedented. Is that the case? Are we charting an entirely new course here?

Tutuk: Yes, I think so. It's an exciting time for us. I am glad that you pointed out augmented reality because it's another research area that we are looking at. One of our research projects augments the employees on retail store floors.

The idea is, if they are trying to do shelf arrangement, for example, we can provide them with information -- right when they look at the product -- about that product and what other products are being sold together. Then, right away at that moment, they are augmented and can make a decision. It's an extremely exciting time for us, yes.
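The "what sells together" cue behind that shelf-arrangement example can be pictured as a simple co-occurrence count over past transactions -- the first step of a market-basket analysis. This is a toy sketch with invented data, not Qlik's retail implementation:

```python
# Toy co-purchase count behind a "sold together" prompt -- the first
# step of a market-basket analysis. The transactions are invented data.
from collections import Counter
from itertools import combinations

transactions = [
    {"razor", "shaving cream", "aftershave"},
    {"razor", "shaving cream"},
    {"razor", "aftershave"},
    {"shampoo", "conditioner"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def sold_with(product: str, top_n: int = 3):
    """Products most often bought alongside `product`."""
    related = Counter()
    for (a, b), n in pair_counts.items():
        if product == a:
            related[b] += n
        elif product == b:
            related[a] += n
    return related.most_common(top_n)

print(sold_with("razor"))   # e.g. [('aftershave', 2), ('shaving cream', 2)]
```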

Gardner: It throws the idea of batch processing out the window. You used to have to run the data, come up with a report, and then adjust your inventory. This goes directly to the interaction with the end-consumer in mind and allows for entirely new types of insights and value.

Tutuk: As part of that project, we also allow you to pin things in space. So imagine that you are in a warehouse, looking at a product, and you develop an interesting insight. Now you can pin it in space on that product. And as you do that across different products, you can take a step back, take a look, and discover different insights about the products.

The idea is to have a tray that you carry with you, like your own analytics coming with you, and when you find something interesting that matches what's in the tray -- for example, the product that you are looking at -- you can pin it. It's like having a virtual board of products, with the analytics being augmented reality.

Gardner: We shouldn’t lose track that we are often talking about billions of rows of data supporting this type of activity, and that new data sets can be brought to bear on a problem very rapidly.

Putting data in context with AI2


Tutuk: Exactly, and this is where our Associative Big Data Index technology comes into play. We are bringing the power of our unique associative engine to massive datasets. And, of course, with the latest acquisition that we have done with Attunity, we gain data streaming and real-time analytics.

Gardner: Digging down into the architecture to better understand how it works, the Qlik Cognitive Engine increasingly works with context awareness. I have heard this referred to as AI2. What do you all mean by AI2?

Tutuk: AI2 is augmented intelligence powered by an associative index. Augmented intelligence is our vision for the use of artificial intelligence, where the goal is to augment the human, not to replace them. And the unique component we bring is our associative index.

Allow me to explain the advantage of the associative index. One of the challenges for using AI and machine learning is bias. The system has bias because it doesn’t have access to all of the data.

For example, maybe you are trying to make a churn prediction for the western sales region. Normally, if you select the west region and the AI is running on a SQL or relational database, the system will only have access to that slice of data. It will never have the chance to learn from what is not associated, such as the customers from the other regions and their behavior.

With the associative index, our technology provides a system with visibility to all of the data at any point, including the data that is associated with your context, and also what’s not associated. And that part that is not associated provides a good learning source for the algorithms that we are using. This is where we are differentiating ourselves and providing unique insights to our users that will be very hard to get with an AI tool that works only with SQL and relational data structures.
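A simplified sketch can illustrate the distinction: a selection partitions the data into an associated slice and an excluded remainder, and keeping both in view is what gives a learning algorithm its contrast. This toy Python example only illustrates the idea; it is not Qlik's associative engine:

```python
# Simplified illustration (not Qlik's engine): a selection partitions the
# data into the associated slice AND the excluded remainder, and a model
# can learn from both -- unlike a SQL WHERE clause, which discards the rest.
customers = [
    {"id": 1, "region": "west", "churned": True},
    {"id": 2, "region": "west", "churned": False},
    {"id": 3, "region": "east", "churned": False},
    {"id": 4, "region": "south", "churned": True},
]

def associative_select(rows, **criteria):
    associated = [r for r in rows if all(r[k] == v for k, v in criteria.items())]
    excluded = [r for r in rows if r not in associated]
    return associated, excluded

west, not_west = associative_select(customers, region="west")

# A churn model trained only on `west` (the SQL-style slice) never sees
# how customers elsewhere behave; keeping `not_west` visible supplies
# the contrast that the algorithm learns from.
print(len(west), "associated;", len(not_west), "excluded but still visible")
```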

Gardner: Not only is Qlik working on such next-generation architectures, you are also undertaking a larger learning process with the Data Literacy Program to, in a sense, make the audience more receptive to the technology and its power.

Please explain, as we move through this process of making intelligence accessible and actionable, how we can also make democratization of analytics possible through education and culturally rethinking the process.

Data literacy drives cognitive engine


Tutuk: Data literacy is important to help make people able to read, analyze, and argue with data. We have an open program -- you don't have to be a Qlik customer -- and it's available now. Our goal is to make everyone data literate. Through that program you can first assess the data literacy level of your organization. We have free tests you can take, and then, based on that need, we have materials to help people become data literate.


As we build the technology, our vision with AI is to make the analytics platform much easier to use in a trusted way. That's why our vision is not focused only on prescriptive probabilities; it's focused on the whole analytics workflow -- from data acquisition, to visualization, exploration, and sharing. You should always be augmented by the system.

We are at just the beginning of our cognitive framework journey. We introduced the Qlik Cognitive Engine last year, and since then we have exposed more features from the framework in different parts of the product, such as in data preparation. Our users, for example, get suggestions on the best ways to associate data coming from different sources.
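Those association suggestions can be imagined, in greatly simplified form, as profiling and matching fields across sources. The sketch below scores column-name similarity to propose joins; it is a hypothetical stand-in for what a real cognitive data-prep engine does:

```python
# Hypothetical sketch of suggesting associations between two sources by
# comparing column names -- a much simpler stand-in for what a real
# cognitive data-prep engine would do.
from difflib import SequenceMatcher

orders_cols = ["OrderID", "CustomerID", "OrderDate"]
crm_cols = ["customer_id", "customer_name", "signup_date"]

def suggest_joins(left, right, threshold=0.6):
    suggestions = []
    for l in left:
        for r in right:
            score = SequenceMatcher(None, l.lower().replace("_", ""),
                                    r.lower().replace("_", "")).ratio()
            if score >= threshold:
                suggestions.append((l, r, round(score, 2)))
    return sorted(suggestions, key=lambda s: -s[2])

print(suggest_joins(orders_cols, crm_cols))
# Top suggestion: ('CustomerID', 'customer_id', 1.0)
```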

And, of course, on the visualization part and dashboarding, we have visual insights, where the Cognitive Engine right away suggests insights. And now we are adding natural language capabilities on top of that, so you can literally conversationally interact with the data. More things will be coming on that.

https://community.qlik.com/t5/Qlik-Product-Innovation-Blog/Qlik-Insight-Bot-an-AI-powered-bot-for-conversational-analytics/ba-p/1555552
Gardner: As an interviewer, as you can imagine, I am very fond of the Socratic process of questioning and then reexamining. It strikes me that what you are doing with storytelling is similar to a Socratic learning process. You had an acquisition recently that led to the Qlik Insight Bot, which to me is like interviewing your data analysis universe, and then being able to continue to query, and generate newer types of responses.

Tell us about how the Qlik Insight Bot works and why that back-and-forth interaction process is so powerful.

Tutuk: We believe any experience you have with the system should take the form of a conversation; it should have a conversational nature. There's a unique thing about human-to-human conversation -- just like the one we are having now. I know that we are talking about AI and analytics. You don't have to tell me that as we are talking; we both know what our conversation is about.

That is exactly what we have achieved with the Qlik Insight Bot technology. As you ask questions of the Qlik Insight Bot, it keeps track of the context. You don't have to restate the context with every question. That is also a unique differentiator when you compare that experience to just having a search box. When you use Google, for example, it doesn't keep the context. So that's one of the important things for us -- to have a conversation in which the system keeps the context.
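The value of keeping context can be shown with a toy conversation object in Python. This is an illustration of the principle, not the Qlik Insight Bot's actual implementation:

```python
# Toy illustration (not the actual Qlik Insight Bot implementation) of why
# keeping conversational context matters: follow-up questions inherit the
# measure and filters already established instead of restating them.
class Conversation:
    def __init__(self):
        self.context = {}          # sticky state: measure, filters, etc.

    def ask(self, measure=None, **filters):
        if measure:
            self.context["measure"] = measure
        self.context.setdefault("filters", {}).update(filters)
        return f"Showing {self.context['measure']} for {self.context['filters']}"

chat = Conversation()
print(chat.ask(measure="sales", region="west"))   # "What were sales in the west?"
print(chat.ask(year=2019))                        # "And in 2019?" -- context kept

# The second answer still knows measure=sales and region=west; a plain
# search box would have forced the user to repeat both.
```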

Gardner: Moving to the practical world of businesses today, we see a lot of use of Slack and Microsoft Teams. As people are using these to collaborate and organize work, it seems to me that presents an opportunity to bring in some of this human-level cognitive interaction and conversational storytelling.

Do you have any examples of organizations implementing this with things like Slack and Teams?

Collaborate to improve processes


Tutuk: You are on the right track. The goal is to provide insights wherever and however you work. And, as you know, there is a big trend in terms of collaboration. People are using Slack instead of just emailing, right?

So, the Qlik Insight Bot is available with integrations to Microsoft Teams, Slack, and Skype. We know this is where the conversations are happening. If you are having a conversation with a colleague on Slack and neither party knows the answer, they can continue their conversation right away by including the Qlik Insight Bot and be powered with Cognitive Engine insights they can act on immediately.
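As a rough sketch of that kind of plumbing, a bot answer can be relayed into a channel with a standard incoming webhook. The webhook URL is a placeholder and answer() stands in for the analytics engine; this is generic chat-integration code, not Qlik's integration:

```python
# Minimal sketch of relaying a bot answer into a Slack channel via an
# incoming webhook. The webhook URL is a placeholder and answer() stands
# in for whatever the analytics bot returns; this is generic Slack
# plumbing, not Qlik's actual integration code.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def answer(question: str) -> str:
    # Stand-in for a call to the conversational analytics engine.
    return f"Top-line result for: {question!r}"

def post_to_slack(question: str):
    payload = {"text": answer(question)}
    resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    resp.raise_for_status()

post_to_slack("What was churn in the west region last quarter?")
```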

Gardner: Before we close out, let’s look to the future. Where do you take this next, particularly in regard to process? We also hear a lot these days about robotic process automation (RPA). There is a lot of AI being applied to how processes can be improved and allowing people to do what they do best.

Do you see an opportunity for the RPA side of AI and what you are all doing with augmented intelligence and the human cognitive interactions somehow reinforcing one another?

Tutuk: We realized that RPA processes have data challenges as well. It's not only about the human and the human's interaction with the automation. Every process automation generates data. And one of the things I believe is missing right now is a full view of the overall automation process. You may have 65 different robots automating different parts of a process, but how do you provide the human a 360-degree view of how the process is performing overall?

A platform can gather associated data from different robots and then provide the human a 360-degree view of what’s going on in the processes. Then that human can make decisions, again, because as humans we are very good at making decisions by seeing nonlinear connections. Feeding the right data to us to be able to use that capability is very important, and our platform provides that.
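A small sketch shows the aggregation idea: per-robot telemetry rolled up into one process-level view of the handoffs. The robot names and counts are invented:

```python
# Sketch of the "360-degree view" idea: gather per-robot telemetry from
# different automation steps into one process-level picture a human can
# act on. Robot names and metrics are invented.
robot_logs = [
    {"robot": "invoice-intake", "step": 1, "processed": 410, "failed": 2},
    {"robot": "po-matching",   "step": 2, "processed": 395, "failed": 15},
    {"robot": "payment-post",  "step": 3, "processed": 380, "failed": 0},
]

def process_view(logs):
    logs = sorted(logs, key=lambda r: r["step"])
    view = []
    for prev, cur in zip(logs, logs[1:]):
        dropped = prev["processed"] - cur["processed"]
        view.append((prev["robot"], cur["robot"], dropped))
    return view

# Where does work leak between robots? The human reads the handoffs,
# not dozens of separate robot dashboards.
for a, b, dropped in process_view(robot_logs):
    print(f"{a} -> {b}: {dropped} items dropped at the handoff")
```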

Gardner: Elif, for organizations looking to take advantage of all of this, what should they be doing now to get ready? To set the foundation, build the right environment, what should enterprises be doing to be in the best position to leverage and exploit these capabilities in the coming years?

Replace repetitive processes


Tutuk: Look for the processes that are repetitive. Those aren’t the right places to use unique human capabilities. Determine those repetitive processes and start to replace them with machines and automation.

Then make sure that whatever data they are feeding into this is trustworthy and comes from a governed environment. The data generated by those processes should be governed as well. So have a governance mechanism around those processes.

I also believe there will be opportunities for new jobs and new ideas that humans will be able to start pursuing. We are entering an exciting new era. It's a good time to find the right places to use human intelligence and creativity, just as more automation takes over the repetitive tasks. It's an incredible and exciting time. It will be great.

Gardner: These strike me as some of the most powerful tools ever created in human history, up there with the first wheel and other things that transformed our existence and our quality of life. It is very exciting.

I’m afraid we will have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on the latest research and products that bring the power of people and augmented intelligence closer than ever.

And we have learned about strategies for not only visualizing data-driven insights but making them conversational -- and even presented through storytelling. So a big thank you to our guest, Elif Tutuk, Head of Research at Qlik. Thank you very much.

Tutuk: Thank you very much.


Gardner: And a big thank you to our audience as well for joining this BriefingsDirect business intelligence trends discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Qlik-sponsored BriefingsDirect interviews.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Qlik.
 
Transcript of a discussion on how the latest research and products bring the power of people and machine intelligence closer together to make analytics consumable across more business processes. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.


Thursday, June 20, 2019

Qlik’s CTO on Why the Cloud Data Diaspora Forces Businesses to Rethink their Analytics Strategies


Transcript of a discussion on why new ways of thinking are demanded if comprehensive analysis of relevant data is to become practical across a world of multi- and hybrid-cloud deployments.
 
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Qlik.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next business intelligence (BI) trends discussion explores the impact of dispersed data in a multicloud world.

Gaining control over far-flung and disparate data has been a decades-old struggle, but now, as hybrid and public clouds join the mix of legacy and distributed digital architectures, new ways of thinking are demanded if comprehensive analysis of relevant data is going to become practical.

Stay with us now as we examine the latest strategies for making the best use of data integration, data catalogs and indices, as well as highly portable data analytics platforms.

To learn more about closing the analysis gap between data and multiple -- and most probably changeable -- cloud models, we are now joined by Mike Potter, Chief Technology Officer (CTO) at Qlik. Welcome, Mike.

Mike Potter: Hi, I’m glad to be here.

Gardner: Mike, businesses are adopting cloud computing for very good reasons. The growth over the past decade has been strong and accelerating. What have been some of the complicating factors -- unintentional as they may be -- for gaining a comprehensive data analysis strategy amid this cloud computing complexity?

Potter: The biggest thing is recognizing that it’s all about where data lives and where it's being created. Obviously, historically most data have been generated on-premises. So, there is a strong pull there, but you are seeing more and more cases now where data is born in the cloud and spends its whole lifetime in the cloud.

And so now the use cases are different because you have a combination of those two worlds, on-premises and cloud. To add further complexity, data is now being born in different cloud providers. Not only are you dealing with having some data and legacy systems on-premises, but you may have to reconcile that you have data in Amazon, Google, or Microsoft.

Our whole strategy around multicloud and hybrid cloud architectures is being able to deploy Qlik where the data lives. It allows you to leave the data where it is, but gives you options so that if you need to move the data, we can support the use cases on-premises to cloud or across cloud providers.

Gardner: And you haven't just put on the patina of cloud-first or software-as-a-service (SaaS)-first. You have rearchitected and repositioned a lot of what your products and technologies do. Tell us about being "SaaS-first" as a strategy.

Scaling the clouds


Potter: We began our journey about 2.5 years ago, when we started converting our monolithic architecture into a microservices-based architecture. That journey cut to the core of the whole product.

Qlik’s heritage was a Windows Server architecture. We had to rethink a lot of things. As part of that we made a big bet 1.5 years ago on containerization, using Docker and Kubernetes. And that’s really paid off for us. It has put us ahead of the technology curve in many respects. When we did our initial release of our multicloud product in June 2018, I had conversations with customers who didn’t know what Kubernetes was.

One enterprise customer had an infrastructure team that had set up a process to provision Kubernetes clusters, but we were only the second vendor that required one, so we were ahead of the game quite a bit.

Gardner: How does using a managed container platform like Kubernetes help you in a multicloud world?

Potter: The single biggest thing is it allows you to scale and manage workloads at a much finer grain of detail through auto-scaling capabilities provided by orchestration environments such as Kubernetes.

More importantly, it allows you to manage your costs. One of the biggest advantages of a microservices-based architecture is that you can scale up and scale down at a much finer grain. For most on-premises, server-based, monolithic architectures, customers have to buy infrastructure for peak levels of workload. We can scale those workloads up and down -- basically on the fly -- and give customers a lot more control over their infrastructure budget. It allows them to meet the needs of their customers when they need it.
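The scale-up/scale-down decision Potter describes is what an orchestrator automates; Kubernetes' HorizontalPodAutoscaler, for instance, applies a proportional rule much like this toy Python version, where the target utilization and replica bounds are invented illustration values:

```python
# Toy version of the scale-up/scale-down decision an orchestrator like
# Kubernetes automates (e.g., via a HorizontalPodAutoscaler). The target
# utilization and replica bounds are invented illustration values.
import math

def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6, lo: int = 1, hi: int = 20) -> int:
    """Classic proportional rule: scale replica count with observed load."""
    want = math.ceil(current * cpu_utilization / target)
    return max(lo, min(hi, want))

print(desired_replicas(current=4, cpu_utilization=0.9))   # burst -> 6 replicas
print(desired_replicas(current=4, cpu_utilization=0.2))   # quiet -> 2 replicas
```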

Gardner: Another aspect of the cloud evolution over the past decade is that no one enterprise is like any other. They have usually adopted cloud in different ways.

Has Qlik’s multicloud analytics approach come with the advantage of being able to deal with any of those different topologies, enterprise by enterprise, to help them each uniquely attain more of a total data strategy?

Potter: Yes, I think so. The thing we want to focus on is, rather than dictate the cloud strategy -- often the choice of our competitors -- we want to support your cloud strategy as you need it. We recognize that a customer may not want to be on just one cloud provider. They don't want to lock themselves in. And so we need to accommodate that.

There may be very valid reasons why they are regionalized, from a data sovereignty perspective, and we want to accommodate that.

There will always be on-premises requirements, and we want to accommodate that.

The reality is that, for quite a while, you are not going to see as much convergence around cloud providers as you are going to see around microservices architectures, containers, and the way they are managed and orchestrated.

Gardner: And there is another variable in the mix over the next years -- and that’s the edge. We have an uncharted, immature environment at the edge. But already we are hearing that a private cloud at the edge is entirely feasible. Perhaps containers will be working there.

At Qlik, how are you anticipating edge computing, and how will that jibe with the multicloud approach?

Running at the edge


Potter: One of the key features of our platform architecture is not only can we run on-premises or in any cloud at scale, we can run on an edge device. We can take our core analytics engine and deploy it on a device or machine running at the edge. This enables a new opportunity, which is taking analytics itself to the edge.

A lot of Internet of Things (IoT) implementations are geared toward collecting data at the sensor, transferring it to a central location to be processed, and then analyzing it all there. What we want to do is push the analytics problem out to the edge so that the analytic data feeds can be processed at the edge. Then only the analytics events are transmitted back for central processing, which obviously has a huge impact from a data-scale perspective.

But more importantly, it creates a new opportunity to make the analytic context immediate in the field, at the point of occurrence. So if you are sitting there on a sensor and you are doing analytics on the sensor, not only can you benefit at the sensor, you can send the analytics data back to the central point, where it can be analyzed as well.
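Here is a minimal sketch of that edge pattern: readings are evaluated on the device, and only the analytic events travel back to the center. The threshold, readings, and transmit() stub are all invented:

```python
# Sketch of pushing analytics to the edge: evaluate readings on the device
# and transmit only analytic events, not the raw stream. The threshold and
# readings are invented, and transmit() stands in for any real uplink.
def transmit(event: dict):
    print("-> central:", event)          # placeholder for the real uplink

def edge_analytics(readings, threshold: float = 80.0):
    sent = 0
    for i, value in enumerate(readings):
        if value > threshold:            # the analytics happens on the sensor
            transmit({"reading": i, "value": value, "kind": "over-threshold"})
            sent += 1
    return sent, len(readings)

sent, total = edge_analytics([42.0, 55.5, 91.2, 60.1, 88.7])
print(f"transmitted {sent} events out of {total} raw readings")
```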

Gardner: It's auspicious how Qlik's approach -- cataloging, indexing, and abstracting out the information about where data resides -- can now be used so well in an edge environment.


Potter: Most definitely. Our entire data strategy is intricately linked with our architectural strategy in that respect, yes.

Gardner: Analytics and being data-driven across an organization is the way of the future. It makes sense to not cede that core competency of being good at analytics to a cloud provider or to a vendor. The people, process, and tribal knowledge about analytics seems essential.

Do you agree with that, and how does Qlik’s strategy align with keeping the core competency of analytics of, by, and for each and every enterprise?

Potter: Analytics is an organizational specialization within all of our customers, and that's not going to go away. What we want to do is parlay that into a broader discussion. So our focus is on enabling three key strategies now.

It's about enabling the analytics strategy, as we always have, but broadening the conversation to enabling the data strategy. More importantly, we want to close the organizational, technological, and priority gaps to foster creating an integrated data and analytics strategy.

By doing that, we can create what I describe as a raw-to-ready analytics platform based on trust, because we own the process of the data from source to analysis, and that not only makes the analytics better, it promotes the third part of our strategy, which is around data literacy. That’s about creating a trusted environment in which people can interact with their data and do the analysis that they want to do without having to be data scientists or data experts.

So owning that whole end-to-end architecture is what we are striving for.

Gardner: As we have seen in other technology maturation trend curves, applying automation to the problem frees up the larger democratization process. More people can consume these services. How does automation work in the next few years when it comes to analytics? Are we going to start to see more artificial intelligence (AI) applied to the problem?

Automated, intelligent analytics


Potter: Automating those environments is an inevitability, not only in terms of how the data is collected, but in how the data is pushed through a data operations process. More importantly, automation enables the other end, too, by embedding AI and machine learning (ML) techniques all the way along that value chain -- from the point of source to the point of consumption.

Gardner: How does AI play a role in the automation and the capability to leverage data across the entire organization?

Potter: How we perform analytics within an analytic system is going to evolve. It’s going to be more conversational in nature, and less about just consuming a dashboard and looking for an insight into a visualization.

The analytics system itself will be an active member of that process, where the conversation is not only with the analytics system but the analytics system itself can initiate the conversation by identifying insights based on context and on other feeds. Those can come from the collective intelligence of the people you work with, or even from people not involved in the process.

Gardner: I have been at some events where robotic process automation (RPA) has been a key topic. It seems to me that there is this welling opportunity to use AI with RPA, but it’s a separate track from what's going on with BI, analytics, and the traditional data warehouse approach.

Do you see an opportunity for what’s going on with AI and use of RPA? Can what Qlik is doing with the analytics and data assimilation problem come together with RPA? Would a process be able to leverage analytic information, and vice versa?

Potter: It gets back to the idea of pushing analytics to the edge, because an edge isn’t just a device-level integration. It can be the edge of a process. It can be the edge of not only a human process, but an automated business process. The notion of being able to embed analytics deep into those processes is already being done. Process analytics is an important field.

But the newer idea is that analytics is in service of the process, as opposed to the other way around. The world is getting away from analytics being a separate activity, done by a separate group, and as a separate act. It is as commonplace as getting a text message, right?

Gardner: For the organization to get to that nirvana of total analytics as a common strategy, this needs to be part of what the IT organization is doing, with full-stack architecture and evolution. So AIOps and DataOps are also getting closer over time.

How does DataOps in your thinking relate to what the larger IT enterprise architects are doing, and why should they be thinking about data more?

Optimizing data pipelines


Potter: That’s a really good question. From my perspective, when I get a chance to talk to data teams, I ask a simple question: “You have this data lake. Is it meeting the analytic requirements of your organization?”

And often I don't get very good answers. A big reason is that what motivates and prioritizes the data team is the storage and management of data, not necessarily the analytics. And often those priorities conflict with the priorities of the analytics team.

What we are trying to do with the Qlik integrated data and analytics strategy is to create data pipelines optimized for analytics, and data operations optimized for analytics. And our investments and our acquisitions of Attunity and Podium Data are about taking that process and focusing on the raw-to-ready part of data operations.

Gardner: Mike, we have been talking at a fairly abstract level, but can you share any use cases where leading-edge organizations recognize the intrinsic relationship between DataOps and enterprise architecture? Can you describe some examples or use cases where they get it, and what it gets for them?

Potter: One of our very large enterprise customers deals in medical devices and related products and services. They realized an essential need to have an integrated strategy. And one of the challenges they have, like most organizations, is how to not only overcome the technology part but also the organizational, cultural, and change-management aspects as well.

They recognized the business has a need for data, and IT has data. If you intersect that, how much of that data is actually a good fit? How much data does IT have that isn't needed? How much of the remaining need is unfulfilled by IT? That's the problem we need to close in on.

Gardner: Businesses need to be thinking at the C-suite level about outcomes. Are there some examples where you can tie together such strategic business outcomes back to the total data approach, to using enterprise architecture and DataOps?

Data decision-making, democratized


Potter: The biggest ones center on end-to-end governance of data for analytics: the ability to understand where the data comes from, and to build trust in the data inside the organization so that decisions can be made and have traceability back to results.

The other aspect of building such an integrated system is a total cost of ownership (TCO) opportunity, because you are no longer expending energy managing data that isn't relevant to adding value to the organization. You can make a lot more intelligent choices about how you use data and how you actually measure the impact that the data can have.

Gardner: On the topic of data literacy, how do you see the behavior of an organization -- the culture of an organization -- shifting? How do we get the chicken-and-egg relationship going between the data services that provide analytics and the consumers to start a virtuous positive adoption pattern?

Potter: One of the biggest puzzles a lot of IT organizations face is around adoption and utilization. They build a data lake and they don't know why people aren’t using it.

For me, there are a couple of elements to the problem. One is what I call data elitism. When you think about data literacy and you compare it to literacy in the pre-industrial age, the people who had the books were the people who were rich and had power. So church and state, that kind of thing. It wasn't until technology created, through the printing press, a democratization of literacy that you started to see interesting behavior. Those with the books, those with the power, tried to subvert reading in the general population. They made it illegal. Some argue that the French Revolution was, in part, caused by rising rates of literacy.

If you flash-forward this analogy to today in data literacy, you have the same notion of elitism. Data is only allowed to be accessed by the senior levels of the organization. It can only be controlled by IT.

Ironically, the most data-enabled organizations are typically oriented to the Millennials or younger users. But they are in the wrong part of the organizational chart to actually take advantage of that. They are not allowed to see the data they could use to do their jobs.

The opportunity from a democratization-of-data perspective is understanding the value of data for every individual and allowing that data to be made available in a trusted environment. That’s where this end-to-end process becomes so important.

Gardner: How do we make the economics of analytics an accelerant to that adoption and the democratization of data? I'll use another historical analogy: the Model T and the assembly line. Ford didn't sell Model Ts nearly to the degree expected until it paid its own workers enough to afford one.

Is there a way of looking at that and saying, "Okay, we need to create an economic environment where analytics is paid for on demand, is fit-for-purpose, and is consumption-oriented"? Wouldn't that market effect help accelerate the adoption of analytics as a total enterprise cultural activity?

Think positive data culture


Potter: That's a really interesting thought. The consumerization of analytics is a product of accessibility and of cost. When you build a positive data culture in an organization, data needs to be as readily accessible as email. From that perspective, turning it into a cost model might be a way to accomplish it. It's about a combination of leadership and of making it occur at the grassroots level, where the value it presents is clear.

And, again, I reemphasize this idea of needing a positive data culture.

Gardner: Any added practical advice for organizations? We have been looking at what will be happening and what to anticipate. But what should an enterprise do now to be in an advantageous position to execute a “positive data culture”?

Potter: The simplest advice is to know that technology is not the biggest hurdle; it's change management, culture, and leadership. When you think about the data strategy integrated with the analytics strategy, that means looking at how you are organized and prioritized around that combined strategy.

Finally, when it comes to a data literacy strategy, define how you are going to enable your organization to see data as a positive asset to doing their jobs. The leadership should understand that data translates into value and results. It's a tool, not a weapon.

Gardner: I’m afraid we’ll have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on the impact of dispersed data in a multicloud world. And we have learned about the latest strategies for making the best use of data across an entire organization -- technically, in process terms, as well as culturally.

So a big thank you to our guest, Mike Potter, Chief Technology Officer at Qlik.


Potter: Thank you. It was great to be here.

Gardner: And thank you as well to our audience for joining this BriefingsDirect business intelligence trends discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Qlik-sponsored BriefingsDirect interviews.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Qlik.
 
Transcript of a discussion on why new ways of thinking are demanded if comprehensive analysis of relevant data is to become practical across a world of multi- and hybrid-cloud deployments. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.
