Thursday, July 11, 2019

For a UK Borough, Solving Security Issues Leads to Operational Improvements and Cost-Savings Across its IT Infrastructure

https://www.barnsley.gov.uk/

Transcript of a discussion on how a large metropolitan borough council in South Yorkshire, England not only thwarted recurring ransomware attacks but also gained a catalyst to wider infrastructure performance, cost, operations, and management benefits.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Bitdefender

Dana Gardner: Welcome to the next edition of BriefingsDirect. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator.

Solving tactical challenges around data center security can often unlock strategic data center operations benefits. For a large metropolitan borough council in South Yorkshire, England, an initial move to thwart recurring ransomware attacks ended up being a catalyst to wider infrastructure performance, cost, operations, and management benefits.

This next BriefingsDirect security innovations discussion examines how the Barnsley Metropolitan Borough Council Information and Communications Technology (ICT) team rapidly deployed malware protection across 3,500 physical and virtual workstations and servers.

Here to share the story of how one change in security software led, a year later, to far higher levels of user satisfaction and a heightened appreciation for the role and impact of IT is Stephen Furniss, ICT Technical Specialist for Infrastructure at Barnsley Borough Council.


Welcome to BriefingsDirect, Stephen.

Stephen Furniss: Hi, thank you.

Gardner: Stephen, tell us about the Barnsley Metropolitan Borough. You are one of 36 metropolitan boroughs in England, and you have a population of about 240,000. But tell us more about what your government agencies provide to those citizens.

Simply small, but mighty 

Furniss: As a Council, we provide wide-ranging services to all the citizens here, from weekly refuse collection to maintaining roads and filling potholes, and making sure that we look after the vulnerable in society. There is a big raft of things that we have to deliver, and every year we are challenged to deliver those same services with less money from central government.

So it does make our job harder, because then there is not just a squeeze across a specific department in the Council when we have these pressures, there is a squeeze across everything, including IT. And I guess one of our challenges has always been how we deliver more or the same standard of service to our end users, with less budget.

So we turn to products that provide single-pane-of-glass interfaces, to make management and configuration a lot easier, and to things that are more intuitive and have automation. We try to make everything that we do easier and simpler for us as an IT service.

Gardner: So that boils down to working smarter, not harder. But you need to have the right tools and technology to do that. And you have a fairly small team, 115 or so, supporting 2,800-plus users. And you have to be responsible for all aspects of ICT -- the servers, networks, storage, and, of course, security. How does being a small team impact how you approach security?

Furniss: We are even smaller than that. In IT, we have around 115 people, and that’s the whole of IT. But just in our infrastructure team, we are only 13 people. And our security team is only three or four people.
In IT, we have around 115 people, but just in infrastructure we are only 13 people. It can become a hindrance when you get overwhelmed with security incidents, yet it's great to have  a small team to bond and come up with solutions.

It can become a hindrance when you get overwhelmed with security incidents or issues that need resolving. Yet sometimes it's great to have that small team of people. You can bond together and come up with really good solutions to resolve your issues.

Gardner: Clearly with such a small group you have to be automation-minded to solve problems quickly or your end users will be awfully disappointed. Tell us about your security journey over the past year-and-a-half. What’s changed?

Furniss: A year-and-a-half ago, we were stuck in a different mindset. With our existing security product, every year we went through a process of saying, “Okay, we are up for renewal. Can we get the same product for a cheaper price, or the best price?”

We didn’t think about what security issues we were getting the most, or what were the new technologies coming out, or if there were any new products that mitigate all of these issues and make our jobs -- especially being a smaller team -- a lot easier.

But we had a mindset change about 18 months back. We said, “You know what? We want to make our lives easier. Let’s think about what’s important to us from a security product. What issues have we been having that potentially the new products that are out there can actually mitigate and make our jobs easier, especially with us being a smaller team?”

Gardner: Were recurring ransomware attacks the straw that broke the camel’s back?

Staying a step ahead of security breaches

Furniss: We had been suffering with ransomware attacks. Every couple of years, some user would be duped into clicking on a file, email, or something that would cause chaos and mayhem across the network, infecting file-shares, and not just that individual user’s file-share, but potentially the files across 700 to 800 users all at once. Suddenly they found their files had all been encrypted.

From an IT perspective, we had to restore from the previous backups, which obviously takes time, especially when you start talking about terabytes of data.

That was certainly one of the major issues we had. And the previous security vendor would come to us and say, “All right, you have this particular version of ransomware. Here are some settings to configure and then you won't get it again.” And that’s great for that particular variant, but it doesn’t help us when the next version or something slightly different shows up, and the security product doesn’t detect it.

That was one of the real worries and pains we suffered -- that every so often we were just going to get hit with ransomware. So we changed our mindset to want something that does things like machine learning (ML) and has ransomware protection built-in, so that we are not in that position. We could actually get on with our day-to-day jobs and be more proactive -- rather than reactive -- in the environment. That was a big thing for us.

Also, we need to have a lot of certifications and accreditations, being a government authority, in order to connect back to the central government of the UK for such things as pensions. So there were a lot of security things that would get picked up. The testers would do a penetration test on our network and tell us we needed to think about changing stuff.

Gardner: It sounds like you went from a tactical approach to security to more of an enterprise-wide security mindset. So let's go back to your thought process. You had recurring malware and ransomware issues, you had an audit problem, and you needed to do more with less. Tell us how you went from that point to get to a much better place.

Safe at home, and at work 

Furniss: As a local authority, with any large purchase, usually over 2,500 pounds (US$3,125), we have to go through a tender process. We write in our requirements, what we want from the products, and that goes on a tender website. Companies then bid for the work.

It’s a process I’m not involved in. I am purely involved in the techie side of things, the deployment, and managing and looking after the kit. That tender process is all done separately by our procurement team.

So we pushed out this tender for a new security product that we wanted, and obviously we got responses from various different companies, including Bitdefender. When we do the scoring, we work on the features and functionality required. Some 70 percent of the scoring is based on the features and functionality, with 30 percent based on the cost.

What was really interesting was that Bitdefender scored the highest on all the features and functionalities -- everything that we had put down as a must-have. And when we looked at the actual costs involved -- what they were going to charge us to procure their software and also provide us with deployment with their consultants -- it came out at half of what we were paying for our previous product.
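The 70/30 split Furniss describes can be sketched as a simple weighted sum. The function below is only an illustration -- the bid scores are hypothetical, not the Council's actual figures:

```python
def tender_score(features, cost):
    """Weighted tender score: 70% features/functionality, 30% cost (each 0-100)."""
    return (70 * features + 30 * cost) / 100

# Hypothetical bids: one strong on features, one strong on price.
bid_a = tender_score(features=90, cost=80)  # (70*90 + 30*80) / 100 = 87.0
bid_b = tender_score(features=70, cost=95)  # (70*70 + 30*95) / 100 = 77.5
```

With this weighting, the feature-rich bid wins even against a cheaper rival, which matches how the Council's procurement favored capability over price.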
Bitdefender scored the highest on all the features and functionalities -- everything that we had put down as must-have. And the actual costs were half of what we were paying.

So you suddenly step back and you think, “I wish that we had done this a long time ago, because we could have saved money as well as gotten a better product.”

Gardner: Had you been familiar with Bitdefender?

Furniss: Yes, a couple of years ago my wife had some malware on her phone, and we started to look at what we were running on our personal devices at home. And I came up with Bitdefender as one of the best products after I had a really good look around at different options.

I went and bought a family pack, so effectively I deployed Bitdefender at home on my own personal mobile, my wife’s, my kids’, on the tablets, on the computers in the house, and what they used for doing schoolwork. And it’s been great at protecting us from anything. We have never had any issues with an infection or malware or anything like that at home.

It was quite interesting to find out, once we went through the tender process, that it was Bitdefender. I didn’t even know at that stage who was in the running. When the guys told me we are going to be deploying Bitdefender, I was thinking, “Oh, yeah, I use that at home and they are really good.”

Monday, Monday, IT’s here to stay 

Gardner: Stephen, what was the attitude of your end users around their experiences with their workstations, with performance, at that time?

Furniss: We had had big problems with end users’ service desk calls to us. Our previous security product required a weekly scan that would run on the devices. We would scan their entire hard drives every Friday around lunchtime.

You try to identify the quiet periods when you can run an end-user scan on their machines, and we had come up with Friday lunchtime. In the Council we can take our lunch between noon and 2 p.m., so we would kick the scan off at noon and hope it finished before users came back and did some work on their devices.

http://www.bitdefender.com/
And with the previous product -- no matter what we did, trying to change dates, trying to change times -- we couldn’t get anything that would work in a quick enough time frame and complete the scans rapidly. It could be running for two to three hours, taking high resources on their devices. A lot of that was down to the spec of the end-user devices not being very good. But, again, when you are constrained with budgets, you can only put so many resources into buying kit for your users.

So, we would end up with service desk calls, with people complaining, saying, “Is there any chance you can change the date and time of the scan? My device is running slow. Can I have a new device?” And so, we received a lot of complaints.

And we noticed that we would also have issues on Monday mornings. The weekend was when we did our server scans and our full backup, and the two would clash. Monday morning, we would come in expecting those backups to have completed, but because the backup was fighting with the scanning, neither had fully completed. We worried about whether we would be able to recover back to the previous week.

Our backups ended up running longer and longer as the scans took longer. So, yes, it was a bit painful for us in the past.

Gardner: What happened next?

Smooth deployer 

Furniss: Deployment was a really, really good experience. In the past, we have had suppliers come along and provide a deployment document that was just their standard document -- nothing customized. They wouldn’t speak with us to find out what was actually deployed and how their product fit in. It was just, “We are going to deploy it like this.” And we would then have issues trying to get things working properly, and we’d have to go backward and forward with the third party to get things resolved.

In this instance, we had Bitdefender’s consultants. They came on-site to see us, and we had a really good meeting. They were asking us questions: “Can you tell us about your environment? Where are your DMZs? What applications have you got deployed? What systems are you using? What hypervisor platforms have you got?” And all of that information was taken into account in the design document that they customized completely to best fit their best practices and what we had in place.

We ended up with something we could deploy ourselves, if we wanted to. We didn’t do that. We took their consultancy as a part of the deployment process. We had the Bitdefender guys on-site for a couple of days working with us to build the proper infrastructure services to run GravityZone.

And it went really well. Nothing was missed from the design. They gave us all the ports and firewall rules needed, and it went really, really smoothly.


We initially thought we were going to have a problem with deploying out to the clients, but we worked with the consultants to come up with a way around impacting our end-users during the deployment.

One of our big worries was that when you deploy Bitdefender, the first thing it does is check whether a competing vendor’s product is on the machine. If it finds one, it removes it and then restarts the user’s device to continue the installation. That was going to be a concern for us.

So we came up with a scripted solution that we pushed out through Microsoft System Center Configuration Manager (SCCM). We were able to run the uninstall command for the third-party product, and then trigger the Bitdefender install straightaway. The devices didn’t need rebooting, and it didn’t impact any of our end users at all. They didn’t even know anything was happening. The only thing they would see was the little icon in the taskbar changing from the previous vendor’s icon to Bitdefender’s.

It was really smooth. We got the automation to run and push out the client to our end users, and they just didn’t know about it.
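A minimal sketch of that uninstall-then-install sequence, as it might run on each endpoint. The command lines here are hypothetical placeholders -- the real silent-uninstall and silent-install switches depend on each vendor's installer and are not given in the interview:

```python
import subprocess

def migrate_endpoint(uninstall_cmd, install_cmd, run=subprocess.run):
    """Silently remove the old agent, then install the new one.

    Returns True only if both steps exit cleanly, so the deployment
    tool (e.g., SCCM) can report per-device success or failure.
    """
    # Step 1: silent uninstall of the previous product, suppressing its reboot.
    if run(uninstall_cmd).returncode != 0:
        return False
    # Step 2: silent install of the replacement agent in the same pass.
    return run(install_cmd).returncode == 0

# Hypothetical command lines for illustration only.
OLD_AV_UNINSTALL = ["old_av_setup.exe", "/uninstall", "/silent", "/noreboot"]
NEW_AV_INSTALL = ["new_av_installer.exe", "/silent"]
```

Because the uninstall and install run back-to-back in one scripted pass, no reboot is needed between them -- which is what kept the swap invisible to users.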

Gardner: What was the impact on the servers?

Environmental change for the better 

Furniss: Our server impact has completely changed. The full scanning that Bitdefender does, which might take 15 minutes, is far less time than the two to three hours before on some of the bigger file servers.

And then once it’s done with that full scan, we have it set up to do more frequent quick scans that take about three minutes. The resource utilization of this new scan set up has just totally changed the environment.


Because we use virtualization predominantly across our server infrastructure, we have even deployed the Bitdefender scan servers, which allow us to do separate scans on each of our virtualized server hosts. They offload the scanning of files for malware away from the individual machines.

It’s a lightweight agent, it takes less memory, less footprint, and less resources. And the scan is offloaded to the scan server that we run.

The impact from a server perspective is that you no longer see spikes in CPU or memory utilization with backups. We don’t have any issues with that kind of thing anymore. It’s really great to see a vendor come up with a solution to issues that people seem to have across the board.

Gardner: Has that impacted your utilization and ability to get the most virtual machines (VMs) per CPU? How has your total costs equation been impacted?

Furniss: The fact that we are not getting all these spikes across the virtualization platform means we can squeeze more VMs onto each host without an issue. It means we can get more bang for our buck, if you like.

Gardner: When you have a mixed environment -- and I understand you have Nutanix hyperconverged (HCI), Hyper-V and vSphere VMs, some Citrix XenServer, and a mix of desktops -- how does managing such heterogeneity with a common security approach work? It sounds like that could be kind of a mess.

Furniss: You would think it would be a mess. But from my perspective, Bitdefender GravityZone is really good because I have this all on a single pane of glass. It hooks into Microsoft Active Directory, so it pulls back everything in there. I can see all the devices at once. It hooks into our Nutanix HCI environment. I can deploy small scan servers into the environment directly from GravityZone.

If I decide on an additional scan server, it automatically builds that scan server in the virtual environment for me, and it’s another box that we’ve got for scanning everything on the virtual service.
Bitdefender GravityZone is really good because I have this all on a single pane of glass. I can see all the devices at once. I can deploy small scan servers into the environment directly from GravityZone.

It’s nice that it hooks into all these various things. We currently have some legacy VMware. Bitdefender lets me see what’s in that environment. We don’t use the VMware NSX platform, but it gives me visibility across an older platform even as I’m moving to get everything to the Nutanix HCI.

So it makes our jobs easier. The additional patch management module we have in there is another big thing for us.

For example, we have always been really good at keeping our Windows updates on devices and servers up to the latest level. But we tended to have problems keeping updates ongoing for all of our third-party apps, such as Adobe Reader, Flash, and Java across all of the devices.

You can get lost as to what is out there unless you do some kind of active scanning across your entire infrastructure, and the Bitdefender patch management allows us to see where we have different versions of apps and updates on client devices. It allows us to patch them up to the latest level and install the latest versions.
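The patch-management idea -- compare what is installed across devices against the latest known versions -- can be sketched like this. The app names and version strings are illustrative only; GravityZone's real reports are far richer:

```python
# Latest known versions (illustrative values, not real release numbers).
LATEST = {"Adobe Reader": "19.012", "Java": "8u221", "Flash": "32.0"}

def find_outdated(inventory, latest=LATEST):
    """Return (device, app, installed, latest) rows that need patching."""
    rows = []
    for device, apps in inventory.items():
        for app, installed in apps.items():
            if app in latest and installed != latest[app]:
                rows.append((device, app, installed, latest[app]))
    return rows
```

Run across an estate-wide inventory, a report like this is what lets a small team see at a glance which third-party apps are behind on which devices.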

From that perspective, I am again using just one pane of glass, but I am getting far more benefit, features, and functionality than I did previously from the many other products we used.

Gardner: Stephen, you mentioned a total cost of ownership (TCO) benefit when it comes to server utilization and the increased VMs. Is there another economic metric when it comes to administration? You have a small number of people. Do you see a payback in terms of this administration and integration value?

Furniss: I do. We only have 13 people on the infrastructure team, but only two or three of us actively go into the Bitdefender GravityZone platform. And on a day-to-day basis, we don’t have to do that much. If we deploy a new system, we might have to monitor and see if there is anything that’s needed as an exception if it’s some funky application.

But once our applications are deployed and our servers are up and running, we don’t have to make any real changes. We only have to look at third-party patch levels, or check whether there are any issues on our endpoints that need our attention.

The amount of time we actually need to be in the Bitdefender console is quite small, so it’s really useful to us.

Gardner: What’s been the result this last year that you have had Bitdefender running in terms of the main goal -- which is to be free of security concerns?

Proactive infection protection 

Furniss: That’s been the crux of it. We haven’t had any malware or ransomware attacks on our network. We have not had to spend hours, days, or weeks restoring files -- or rebuilding hundreds of machines because they had something on them. So that’s been a good thing.

Another interesting thing for us: when we began looking at the Bitdefender reports from day one, it actually found malware and viruses that had been sitting in our systems for five, six, or seven years.

And the weird thing is, our previous security product had never even seen this stuff. It had obviously let it through to start with. It got through all our filtering and everything, and it was sitting in somebody’s mailbox ready -- if they clicked on it -- to launch and infect the entire network.

Straightaway from day one, we were detecting stuff that sat for years in people’s mailboxes. We just didn’t even know about it.


So, from that perspective, it’s been fantastic. We’ve not had any security outbreaks that we had to deal with, or anything like that.

And just recently, we had our security audit from our penetration testers. One of the things they try to do is actually put some malware on to a test device. They came back and said they had not been able to do that. They have been unable to infect any of our devices. So that’s been a really, really good thing from our perspective.

Gardner: How is that translated into the perception from your end users and your overseers, those people managing your budgets? Has there been a sense of getting more value? What’s the satisfaction quotient, if you will, from your end users?

Furniss: A really positive thing has been that they have not come back to say we’ve lost anything. There are no complaints about machines being slow.

We even had one of our applications guys say that their machine was running faster than it normally does on Fridays. When we explained that we had swapped out the old version of the security product for Bitdefender, it was like, “Oh, that’s great, keep it up.”
There are no complaints about machines being slow. One of our apps guys said that their machine was running faster than normal. From IT, we are really pleased.

For the people higher up, at the minute, I don’t think they appreciate what we’ve done. That will come in the next month as we start presenting our security reports and the audit report showing that the penetration testers were unable to infect an end-user device.

From our side, from IT, we are really, really pleased with it. We understand what it does and how much it’s saving us from the pain of having to restore files. We are not one of these councils or entities that’s suddenly plastered across the newspapers, its reputation tarnished because it has lost all its systems or been infected.

Gardner: Having a smoothly running organization is the payoff.

Before we close out, what about the future? Where would you like to see your security products go in terms of more intelligence, using data, and getting more of a proactive benefit?

Cloud on the horizon 

Furniss: We are doing a lot more now with virtualization. We have only about 50 physical servers left. We are also thinking about the cloud journey. So we want the security products working with all of that stuff up in the cloud. It’s going to be the next big thing for us. We want to secure that area of our environment if we start moving infrastructure servers up there.

Can we protect stuff up in the cloud as well as what we have here?

Gardner: Stephen, you mentioned that at home you run Bitdefender down to your mobile devices. Is that also the case for your users in the Council? Is there a bring-your-own-device angle, some way you are looking to let people use more of their own devices in the context of work? How does that mobile edge work in the future?

Furniss: Well, mobile devices are quite costly for councils to deploy, but we have taken the approach that if you need one for work, you get one. We currently have a project to look at deploying the mobile version of Bitdefender to our existing Android users.

Gardner: Now that you have 20/20 hindsight with using this type of security environment over the course of a year, any advice for folks in a similar situation?

Furniss: Don’t be scared of change. I think one of the things that always used to worry me was that we knew what we were doing with a particular vendor. We knew what our difficulties were. Are we going to be able to remove it from all the devices?

Don’t worry about that. If you are getting the right product, it’s going to take care of lot of the issues that you currently have. We found that deploying the new product was relatively easy and didn’t cause any pain to our end-users. It was seamless. They didn’t even know we had done it.

Some people might think they have a massive estate and it’s going to be a real headache. But with automation and a bit of thinking about how and what you are going to do, it’s fairly straightforward to deploy a new antivirus product to your end users. Don’t be afraid of change and moving into something new. Get the best use of the new products out there.

Gardner: I’m afraid we will have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on how a large metropolitan borough in South Yorkshire, England solved a recurring ransomware attack problem -- but along the way gained wider infrastructure performance, cost benefits, operational benefits, and a happier overall organization.

Please join me in thanking our guest, Stephen Furniss, ICT Technical Specialist for Infrastructure at Barnsley Metropolitan Borough Council. Thank you so much, Stephen.

Furniss: Thanks for having me.

Gardner: I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing series of BriefingsDirect use case discussions. A big thank you also to our sponsor, Bitdefender, for supporting these presentations.


Lastly, thanks to our audience for joining. Please pass this along to your IT community and do come back next time.


Transcript of a discussion on how a large metropolitan borough council in South Yorkshire, England thwarted recurring ransomware attacks but also provided a catalyst to wider infrastructure performance, cost, operations, and management benefits. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.


Monday, July 08, 2019

Qlik’s Top Researcher Describes New Ways for Human Cognition and Augmented Intelligence to Join Forces

https://www.qlik.com/us

Transcript of a discussion on how the latest research and products bring the power of people and machine intelligence closer together to make analytics consumable across more business processes.
 
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Qlik

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next business intelligence (BI) trends discussion explores the latest research and products that bring the power of people and machine intelligence closer together.

As more data becomes available to support augmented intelligence -- and the power of analytics platforms increasingly goes to where the data is -- the next stage of value is in how people can interact with the results.

Stay with us now as we examine the latest strategies for not only visualizing data-driven insights but making them conversational and even presented through a form of storytelling.

To learn more about making the consumption and refinement of analytics delivery an interactive experience open to more types of users, we are now joined by Elif Tutuk, Head of Research at Qlik. Welcome to BriefingsDirect.

Elif Tutuk: Thank you. It’s a great pleasure to be here.


Gardner: Strides have been made in recent years for better accessing data and making it available to analytics platforms, but the democratization of the results and making insights consumable by more people is just beginning. What are the top technical and human interaction developments that will broaden the way that people interact differently with analytics?

Trusted data for all


Tutuk: That’s a great question. We are doing a lot of research in this area in terms of creating new user experiences where we can bring about more data literacy and help improve people’s understanding of reading, analyzing, and arguing with the data.

In terms of the user experience, a conversational aspect has a big impact. But we also believe that it’s not only through conversation, especially when you want to understand data; the visual exploration part should also be there. We are creating experiences that combine the unique nature, language, and visual exploration capabilities of a human. We think that is the key to building good collaboration between the human and the machine.

Gardner: As a result, are we able to increase the number and types of people impacted by data by going directly to them -- rather than through a data scientist or an IT department? How are the interaction elements broadening this to a wider clientele?

Tutuk: The idea is to make analysis available from C-level users to the business end users.

If you want to broaden the use of analytics and lower the barrier, you also need to make sure that the data, the machines, and the system are governed and trusted.

Our enterprise data management strategy therefore becomes important for our Cognitive Engine technology. We are combining those two so that the machines use a governed data source to provide trusted information.

Gardner: What strikes me as quite new now is more interaction between human cognition and augmented intelligence. It’s almost a dance. It creates new types of insights, and new and interesting things can happen.

How do you attain the right balance in the interactions between human cognition and AI?

Tutuk: It is about creating experiences between what the human is good at -- perception, awareness, and ultimately decision-making -- and what the machine technology is good at, such as running algorithms on large amounts of data.

As the machine serves insights to the user, it needs to first create trust about what data is used and the context around it. Without the context you cannot really take that insight and make an action on it. And this is where the human part comes in, because as humans you have the intuition and the business knowledge to understand the context of the insight. Then you can explore it further by being augmented. Our vision is for making decisions by leveraging that [machine-generated] insight.

Gardner: In addition to the interactions, we are hearing about the notion of storytelling. How does that play a role in ways that people get better analytics outcomes?

Storytelling insights support


Tutuk: We have been doing a lot of research and thinking in this area because today, in the analytics market, AI is becoming robust. These technologies are developing very well. But the challenge is that most of the technologies provide results like a black box. As a user, you don’t know why the machine is making a suggestion and insight. And that creates a big trust issue.

To have greater adoption of the AI results, you need to create an experience that builds trust, and that is why we are looking at one of the most effective and timeless forms of communication that humans use, which is storytelling.
To have greater adoption of the AI results, you need to create an experience that builds trust, and that is why we are looking at one of the most effective and timeless forms of communication that humans use, which is storytelling.

So we are creating unique experiences where the machine generates an insight. And then, on the fly, we create data stories generated by the machine, thereby providing more context. As a user, you can have a great narrative, but then that narrative is expanded with insightful visualizations. From there, based on what you gain from the story, we are also looking at capabilities where you can explore further.

And in that third step you are still being augmented, but able to explore. It is user-driven. That is where you start introducing human intuition as well.

And when you think about the machine first surfacing insights, then getting more context with the data story, and lastly going to exploration -- all three phases can be tied together in a seamless flow. You don’t lose the trust of the human. The context becomes really important. And you should be able to carry the context between all of the stages so that the user knows what the context is. Adding the human intuition expands that context.
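The three phases Tutuk describes -- machine-surfaced insight, narrative with context, user-driven exploration -- can be sketched in a few lines of code. This is purely a conceptual illustration (the function names and data are invented, not a Qlik API); the point is that the context object is carried, never dropped, across all three stages:

```python
# Conceptual sketch of the insight -> story -> exploration flow.
# All names and data here are illustrative, not part of any real product API.

def surface_insight(data):
    # Phase 1: the machine finds something noteworthy in the data.
    top = max(data, key=data.get)
    return {"finding": f"{top} leads sales", "context": {"dimension": "region"}}

def tell_story(insight):
    # Phase 2: wrap the insight in a narrative, carrying the context forward.
    story = f"Insight: {insight['finding']} (by {insight['context']['dimension']})"
    return {**insight, "story": story}

def explore(story, user_filter):
    # Phase 3: the human refines, and the original machine context is preserved
    # rather than replaced -- human intuition expands the context.
    return {**story, "context": {**story["context"], **user_filter}}

data = {"west": 120, "east": 90}
result = explore(tell_story(surface_insight(data)), {"year": 2019})
print(result["context"])  # {'dimension': 'region', 'year': 2019}
```

Note how the user's filter in the exploration step is merged into the machine's context instead of overwriting it, which is the "seamless flow" the passage describes.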

Gardner: I really find this fascinating because we are talking not just about problem-solution, we are talking about problem-solution-resolution, then readjusting and examining the problem for even more solution and resolution. We are also now, of course, in the era of augmented reality, where we can bring these types of data analysis outputs to people on a factory floor, wearing different types of visual and audio cue devices.

So the combination of augmented reality, augmented intelligence, storytelling, and bringing it out to the field strikes me as something really unprecedented. Is that the case? Are we charting an entirely new course here?

Tutuk: Yes, I think so. It’s an exciting time for us. I am glad that you pointed out the augmented reality because it’s another research area that we are looking at. One of the research projects we have done augments people on retail store floors, the employees.

The idea is, if an employee is doing shelf arrangement, for example, we can provide information -- right when they look at a product -- about that product and about which other products are being sold together with it. Then, right away at that moment, they are augmented and can make a decision. It's an extremely exciting time for us, yes.

Gardner: It throws the idea of batch processing out the window. You used to have to run the data, come up with a report, and then adjust your inventory. This puts the interaction with the end-consumer directly in view and allows for entirely new types of insights and value.

https://www.qlik.com/us
Tutuk: As part of that project, we also allow for being able to pin things on the space. So imagine that you are in a warehouse, looking at a product, and you develop an interesting insight. Now you can just pin it on the space on that product. And as you do that on different products, you can take a step back, take a look, and discover different insights on the product.

The idea is having a tray that you carry with you, like your own analytics coming with you, and when you find something interesting that matches with the tray – with, for example, the product that you are looking at -- you can pin it. It’s like having a virtual board with products and with the analytics being augmented reality.

Gardner: We shouldn’t lose track that we are often talking about billions of rows of data supporting this type of activity, and that new data sets can be brought to bear on a problem very rapidly.

Putting data in context with AI2


Tutuk: Exactly, and this is where our Associative Big Data Index technology comes into play. We are bringing the power of our unique associative engine to massive datasets. And, of course, with the latest acquisition that we have done with Attunity, we gain data streaming and real-time analytics.

Gardner: Digging down to the architecture to better understand how it works, the Qlik Cognitive Engine increasingly works with context awareness. I have heard this referred to as AI2. What do you all mean by AI2?

Tutuk: AI2 is augmented intelligence powered by an associative index. So augmented intelligence is our vision for the use of artificial intelligence, where the goal is to augment the human, not to replace them. And now we are making sure that we have the unique component in terms of our associative index as well.

Allow me to explain the advantage of the associative index. One of the challenges for using AI and machine learning is bias. The system has bias because it doesn’t have access to all of the data.

For example, you may be trying to make a churn prediction for the western sales region. Normally, if you select the west region and the AI is running on a SQL or relational database, the system will only have access to that slice of data. It never has the chance to learn from what is not associated, such as the behavior of customers from the other regions.

With the associative index, our technology provides a system with visibility to all of the data at any point, including the data that is associated with your context, and also what’s not associated. And that part that is not associated provides a good learning source for the algorithms that we are using. This is where we are differentiating ourselves and providing unique insights to our users that will be very hard to get with an AI tool that works only with SQL and relational data structures.
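A toy sketch makes the distinction concrete. This is not Qlik's actual engine -- just an illustration of the idea that an associative selection keeps both the rows matching the current context and the excluded rows in view, whereas a SQL `WHERE` clause discards everything outside the slice:

```python
# Illustrative only: an "associative" selection partitions the data into
# associated and excluded rows, rather than throwing the excluded rows away.

customers = [
    {"id": 1, "region": "west", "churned": True},
    {"id": 2, "region": "west", "churned": False},
    {"id": 3, "region": "east", "churned": False},
    {"id": 4, "region": "east", "churned": True},
    {"id": 5, "region": "north", "churned": False},
]

def associative_select(rows, key, value):
    """Return (associated, excluded) -- both halves stay available."""
    associated = [r for r in rows if r[key] == value]
    excluded = [r for r in rows if r[key] != value]
    return associated, excluded

west, others = associative_select(customers, "region", "west")

# A churn model trained only on `west` never sees the other regions;
# keeping `others` in view gives the algorithm a contrast class to learn from.
print(len(west), len(others))  # 2 3
```

The excluded partition is exactly the "good learning source" Tutuk refers to: it lets the algorithm compare in-context behavior against everything else.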

Gardner: Not only is Qlik working on such next-generation architectures, you are also undertaking a larger learning process with the Data Literacy Program to, in a sense, make the audience more receptive to the technology and its power.

Please explain, as we move through this process of making intelligence accessible and actionable, how we can also make democratization of analytics possible through education and culturally rethinking the process.

Data literacy drives cognitive engine


Tutuk: Data literacy means being able to read, analyze, and argue with data. We have an open program -- you don't have to be a Qlik customer -- and it's available now. Our goal is to make everyone data literate. Through the program you can first assess the data literacy level of your organization with free tests, and then, based on that need, we have materials to help people become data literate.


As we build the technology, our vision with AI is to make the analytics platform much easier to use in a trusted way. So that’s why our vision is not only focused on prescriptive probabilities, it’s focused on the whole analytics workflow -- from data acquisition, to visualization, exploration, and sharing. You should always be augmented by the system.

We are at just the beginning of our cognitive framework journey. We introduced Qlik Cognitive Engine last year, and since then we have exposed more features from the framework in different parts of the product, such as on the data preparation. Our users, for example, get suggestions on the best way of associating data coming from different data sources.

And, of course, on the visualization part and dashboarding, we have visual insights, where the Cognitive Engine right away suggests insights. And now we are adding natural language capabilities on top of that, so you can literally conversationally interact with the data. More things will be coming on that.

https://community.qlik.com/t5/Qlik-Product-Innovation-Blog/Qlik-Insight-Bot-an-AI-powered-bot-for-conversational-analytics/ba-p/1555552
Gardner: As an interviewer, as you can imagine, I am very fond of the Socratic process of questioning and then reexamining. It strikes me that what you are doing with storytelling is similar to a Socratic learning process. You had an acquisition recently that led to the Qlik Insight Bot, which to me is like interviewing your data analysis universe, and then being able to continue to query, and generate newer types of responses.

Tell us about how the Qlik Insight Bot works and why that back-and-forth interaction process is so powerful.

Tutuk: We believe any experience you have with the system should take the form of a conversation. There's a unique thing about human-to-human conversation -- just as we are having this conversation. I know that we are talking about AI and analytics. You don't have to tell me that as we are talking; we both carry that context.

That is exactly what we have achieved with the Qlik Insight Bot technology. As you ask questions to the Qlik Insight Bot, it keeps track of the context, so you don't have to restate the context with every question. That is also a unique differentiator compared to a plain search box: when you use Google, for example, it doesn't keep the context between queries. Keeping that context is what allows the system to hold a real conversation.
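The context-carrying behavior Tutuk describes can be sketched very simply. This is a hypothetical illustration, not the Qlik Insight Bot's actual implementation: each new question contributes only the slots the user restated, and everything else is inherited from earlier turns:

```python
# Minimal, hypothetical sketch of conversational context-keeping.
# Slot parsing from natural language is assumed to happen upstream.

class ConversationContext:
    def __init__(self):
        self.state = {}  # e.g. {"measure": "sales", "region": "west"}

    def ask(self, question_slots):
        # Merge the slots parsed from the new question over the remembered
        # ones; anything the user did not restate carries over from before.
        self.state.update(question_slots)
        return dict(self.state)

ctx = ConversationContext()
first = ctx.ask({"measure": "sales", "region": "west", "year": 2018})
# Follow-up: the user only says "and for 2019?" -- measure and region persist.
follow_up = ctx.ask({"year": 2019})
print(follow_up)  # {'measure': 'sales', 'region': 'west', 'year': 2019}
```

A stateless search box would treat "and for 2019?" as an unanswerable query; the conversation state is what makes the follow-up meaningful.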

Gardner: Moving to the practical world of businesses today, we see a lot of use of Slack and Microsoft Teams. As people are using these to collaborate and organize work, it seems to me that presents an opportunity to bring in some of this human-level cognitive interaction and conversational storytelling.

Do you have any examples of organizations implementing this with things like Slack and Teams?

Collaborate to improve processes


Tutuk: You are on the right track. The goal is to provide insights wherever and however you work. And, as you know, there is a big trend in terms of collaboration. People are using Slack instead of just emailing, right?

So, the Qlik Insight Bot is available with an integration to Microsoft Teams, Slack, and Skype. We know this is where the conversations are happening. If you are having a conversation with a colleague on Slack and neither of the parties know the answer, then right away they can just continue their conversation by including Qlik Insight Bot and be powered with the Cognitive Engine insights that they can make decisions with right away.

Gardner: Before we close out, let’s look to the future. Where do you take this next, particularly in regard to process? We also hear a lot these days about robotic process automation (RPA). There is a lot of AI being applied to how processes can be improved and allowing people to do what they do best.

Do you see an opportunity for the RPA side of AI and what you are all doing with augmented intelligence and the human cognitive interactions somehow reinforcing one another?

Tutuk: With RPA we realized there are data challenges as well. It's not only about the human's interaction with the automation; every process automation generates data. One of the things I believe is missing right now is a full view of the end-to-end automation. You may have 65 different robots automating different parts of a process, but how do you give the human a 360-degree view of how the process is performing overall?

A platform can gather associated data from different robots and then provide the human a 360-degree view of what’s going on in the processes. Then that human can make decisions, again, because as humans we are very good at making decisions by seeing nonlinear connections. Feeding the right data to us to be able to use that capability is very important, and our platform provides that.
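As a hypothetical sketch of that idea: if each robot emits step-level events, aggregating them by process instance yields the single end-to-end view a human can reason over. The event schema and robot names below are invented for illustration:

```python
# Hypothetical sketch: aggregate per-robot events into a 360-degree
# per-process view. The schema here is illustrative, not a real RPA format.

from collections import defaultdict

events = [
    {"process": "invoice-42", "robot": "ocr-bot", "step": "extract", "ms": 120},
    {"process": "invoice-42", "robot": "erp-bot", "step": "post", "ms": 340},
    {"process": "invoice-43", "robot": "ocr-bot", "step": "extract", "ms": 95},
]

def process_view(events):
    # Group events from all robots by process instance, so each process
    # shows every step taken and the total time spent across robots.
    view = defaultdict(lambda: {"steps": [], "total_ms": 0})
    for e in events:
        v = view[e["process"]]
        v["steps"].append((e["robot"], e["step"]))
        v["total_ms"] += e["ms"]
    return dict(view)

overview = process_view(events)
print(overview["invoice-42"]["total_ms"])  # 460
```

No single robot sees the whole picture; the cross-robot aggregation is what surfaces the nonlinear connections Tutuk says humans are good at spotting.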

Gardner: Elif, for organizations looking to take advantage of all of this, what should they be doing now to get ready? To set the foundation, build the right environment, what should enterprises be doing to be in the best position to leverage and exploit these capabilities in the coming years?

Replace repetitive processes


Tutuk: Look for the processes that are repetitive. Those aren’t the right places to use unique human capabilities. Determine those repetitive processes and start to replace them with machines and automation.

Then make sure that whatever data that they are feeding into this is trustable and comes from a governed environment. The data generated by those processes should be governed as well. So have a governance mechanism around those processes.

I also believe there will be new opportunities -- new jobs and new ideas that humans will be able to pursue. We are at the start of an exciting new era. As more automation takes over repetitive tasks, it's a good time to find the right places to apply human intelligence and creativity.

Gardner: These strike me as some of the most powerful tools ever created in human history, up there with the first wheel and the other inventions that transformed our existence and our quality of life. It is very exciting.

I’m afraid we will have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on the latest research and products that bring the power of people and augmented intelligence closer than ever.

And we have learned about strategies for not only visualizing data-driven insights but making them conversational -- and even presented through storytelling. So a big thank you to our guest, Elif Tutuk, Head of Research at Qlik. Thank you very much.

Tutuk: Thank you very much.


Gardner: And a big thank you to our audience as well for joining this BriefingsDirect business intelligence trends discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Qlik-sponsored BriefingsDirect interviews.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Qlik.
 
Transcript of a discussion on how the latest research and products bring the power of people and machine intelligence closer together to make analytics consumable across more business processes. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.

You may also be interested in: