Tuesday, November 01, 2016

2016 Campaigners Look to Deep Big Data Analysis and Querying to Gain an Edge in Reaching Voters

Transcript of a discussion on how data analysis services startup BlueLabs in Washington helps presidential campaigns better know and engage with potential voters.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on business digital transformation. Stay with us now to learn how agile companies are fending off disruption in favor of innovation.

Our next case study explores how data-analysis services startup BlueLabs in Washington, D.C. helps presidential campaigns better know and engage with potential voters.

We'll learn how BlueLabs relies on analytics platforms that allow a democratization of querying -- opening the value of vast big data resources to more of those who need to know.

In this example of helping organizations work smarter by leveraging innovative statistical methods and technology, we'll discover how specific types of voters can be identified and reached.

Here to describe how big data is being used creatively by contemporary political organizations for two-way voter engagement, we're joined by Erek Dyskant, Co-Founder and Vice President of Impact at BlueLabs Analytics in Washington. Welcome, Erek.
Erek Dyskant: I'm so happy to be here, thanks for having me.

Gardner: Obviously, this is a busy season for the analytics people who are focused on politics and campaigns. What are some of the trends that are different in 2016 from just four years ago? It's a fast-changing technology set, and it's also a fast-changing methodology. And of course, the trends in how voters think, react, use social media, and engage are also dynamic. So what's different this cycle?

Dyskant: From a voter-engagement perspective, in 2012, we could reach most of our voters online through a relatively small set of social media channels -- Facebook, Twitter, and a little bit on the Instagram side. Moving into 2016, we see a fragmentation of the online and offline media consumption landscape and many more folks moving toward purpose-built social media platforms.

If I'm at the HPE Conference and I want my colleagues back in D.C. to see what I'm seeing, then maybe I'll use Periscope, maybe Facebook Live, but probably Periscope. If I see something that I think one of my friends will think is really funny, I'll send that to them on Snapchat.

Where political campaigns have traditionally broadcast messages out through the news-feed style social-media strategies, now we need to consider how it is that one-to-one social media is acting as a force multiplier for our events and for the ideas of our candidates, filtered through our campaign’s champions.

Gardner: So, perhaps a way to look at that is that you're no longer focused on precincts physically, and you're no longer simply broadcasting through social media. It's much more about influence within communities, and identifying those communities in a new way through these apps, perhaps more than platforms.

Social media

Dyskant: That's exactly right. Campaigns have always organized voters at the door and on the phone. Now, we think of one more way. If you want to be a champion for a candidate, you can be a champion by knocking on doors for us, by making phone calls, or by making phone calls through online platforms.

You can also use one-to-one social media channels to let your friends know why the election matters so much to you and why they should turn out and vote, or vote for the issues that really matter to you.

Gardner: So, we're talking about retail campaigning, but it's a bit more virtual. What’s interesting though is that you can get a lot more data through the interaction than you might if you were physically knocking on someone's door.

Dyskant: The data is different. We're starting to see a shift from demographic targeting. In 2000, we were targeting on precincts. A little bit later, we were targeting on combinations of demographics, on soccer moms, on single women, on single men, on rural, urban, or suburban communities separately.

Moving to 2012, we looked at everything that we knew about a person and built individual-level predictive models, so that we knew how each person's individual set of characteristics made that person more or less likely to be someone with whom our candidate would have an engaging conversation through a volunteer.

Now, what we're starting to see is behavioral characteristics trumping demographic or even consumer data. You can put whiskey drinkers in your model, you can put cat owners in your model, but isn't it a lot more interesting to put in your model the fact that this person has an online profile on our website and this is their clickstream? Isn't it much more interesting to put into a model that this person is likely to consume media via TV, is likely to be a cord-cutter, is likely to be a social media trendsetter, is likely to view multiple channels, or to use both Facebook and media on TV?

That lets us have a really broad reach, a really broad set of interested voters, rather than just creating an echo chamber where we're talking to the same voters across different platforms.
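
To make that concrete, here is a minimal sketch of how behavioral signals like these might be rolled up from a clickstream table into per-person model features. The table and column names (clickstream_events, person_id, page_type, and so on) are hypothetical illustrations, not BlueLabs' actual schema.

```sql
-- Hypothetical clickstream table: one row per person per online interaction.
-- Rolls raw behavior up into per-person features that could be joined to a voter file.
SELECT
    person_id,
    COUNT(*)                                                     AS total_page_views,
    SUM(CASE WHEN page_type = 'issue_energy' THEN 1 ELSE 0 END)  AS energy_issue_views,
    SUM(CASE WHEN device_type = 'mobile' THEN 1 ELSE 0 END)      AS mobile_views,
    MAX(CASE WHEN action = 'video_play' THEN 1 ELSE 0 END)       AS watched_any_video
FROM clickstream_events
WHERE event_date >= DATE '2016-01-01'
GROUP BY person_id;
```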

Gardner: So, over time, the analytics tools have gone from semi-blunt instruments to much more precise, and you're also able to better target what you think would be the right voter for you to get the right message out to.

One of the things you mentioned that struck me is the word "predictive." I suppose I think of campaigning as looking to influence people, and that polling then tries to predict what will happen as a result. Is there somewhat less daylight between these two than I am thinking, that being predictive and campaigning are much more closely associated, and how would that work?

Predictive modeling

Dyskant: When I think of predictive modeling, what I think of is predicting something that the campaign doesn't know. That may be something that will happen in the future or it may be something that already exists today, but that we don't have an observation for it.

In the case of polling, what I really see is understanding what issues matter the most to voters and how it is that we can craft messages that resonate with those issues. When I think of predictive analytics, I think of how it is that we allocate our resources to persuade and activate voters.

Over the course of elections, what we've seen is an exponential trajectory in the amount of data that is considered by predictive models. Even more important than that is an exponential growth in the use cases for models. Today, every time a predictive model is used, it's used in a million and one ways, whereas in 2012 it might have been used in 50, 20, or 100 sessions about each voter contact.

Gardner: It’s a fascinating use case to see how analytics and data can be brought to bear on the democratic process and to help you get messages out, probably in a way that's better received by the voter or the prospective voter, like in a retail or commercial environment. You don’t want to hear things that aren’t relevant to you, and when people do make an effort to provide you with information that's useful or that helps you make a decision, you benefit and you respect and even admire and enjoy it.

Dyskant: What I really want is for the voter experience to be as transparent and easy as possible, that campaigns reach out to me around the same time that I'm seeking information about who I'm going to vote for in November. I know who I'm voting for in 2016, but in some local elections, I may not have made that decision yet. So, I want a steady stream of information to be reaching voters, as they're at those key decision points, with messaging that really is relevant to their lives.

I also want to listen to what voters tell me. If a voter has a conversation with a volunteer at the door, that should inform future communications. If somebody has told me that they're definitely voting for the candidate, then the next conversation should be different from someone who says, "I work in energy. I really want to know more about the Secretary’s energy policies."

Gardner: Just as when a salesperson engages in the sales process, they use customer relationship management (CRM), and that data is captured, analyzed, and shared. That becomes a much better process for both the buyer and the seller. It's the same thing in a campaign, right? The better information you have, the more likely you are to be able to serve that user, that voter.

Dyskant: There definitely are parallels to marketing, and that's how we at BlueLabs decided to found the company and work across industries. We work with Fortune 100 retail organizations that are interested in how, once someone buys one item, we can bring them back into the store to buy the follow-on item, or maybe to buy the follow-on item through that same store's online portal. How is it that we can provide relevant messaging as users engage in complex processes online? All those things are driven from our lessons in politics.

Politics is fundamentally different from retail, though. It's a civic decision, rather than an individual-level decision. I always want to be mindful that I have a duty to voters to provide extremely relevant information to them, so that they can be engaged in the civic decision that they need to make.

Gardner: Suffice it to say that good quality comparison shopping is still good quality comparison decision-making.

Dyskant: Yes, I would agree with you.

Relevant and speedy

Gardner: Now that we've established how really relevant, important, and powerful this type of analysis can be in the context of the 2016 campaign, I'd like to learn more about how you go about getting that analysis and making it relevant and speedy across a large variety of data sets and content sets. But first, let's hear more about BlueLabs. Tell me about your company, how it started, why you started it, maybe a bit about yourself as well.

Dyskant: Of the four of us who started BlueLabs, some of us met in the 2008 elections and some of us met during the 2010 midterms working at the Democratic National Committee (DNC). Throughout that pre-2012 experience, we had the opportunity as practitioners to try a lot of things, sometimes just once or twice, sometimes things that we operationalized within those cycles.

Jumping forward to 2012, we had the opportunity to scale all that research and development, to say that we did this one thing that was a different way of building models, and it worked in this congressional race. We decided to make this three people's full-time jobs and scale that up.

Moving past 2012, we got to build potentially one of the fastest-growing startups, one of the most data-driven organizations, and we knew that we had built a special team. We wanted to continue working together, with each other and with the folks who made all this possible. We also wanted to apply the same types of techniques to other areas of social impact and other areas of commerce. This individual-level approach to identifying conversations is something that we found unique in the marketplace. We wanted to expand on that.
Increasingly, what we're working on is this segmentation-of-media problem. It's this idea that some people watch only TV, and you can't ignore a TV. It has lots of eyeballs. Some people watch only digital and some people consume a mix of media. How is it that you can build media plans that are aware of people's cross-channel media preferences and reach the right audience with their preferred means of communications?

Gardner: That's fascinating. You start with the rigors of the demands of a political campaign, but then you can apply it in so many ways, answering -- and anticipating -- the types of questions that more verticals, more sectors, and charitable organizations would want to be involved with. That's very cool.

Let's go back to the data science. You have this vast pool of data. You have a snappy analytics platform to work with. But one of the things that I am interested in is how you get more people -- whether it's in your organization, a campaign like the Hillary Clinton campaign, or the DNC -- to be able to utilize that data to get to these inferences, these insights that you want.

What is it that you look for and what is it that you've been able to do in that form of getting more people able to query and utilize the data?

Dyskant: Data science happens when individuals have direct access to ask complex questions of a large, gnarly, but well-integrated data set. If I have 30 terabytes of data across online contacts, offline contacts, and maybe a sample of clickstream data, I want to ask things like: of all the people who went to my online platform and clicked the password reset because they couldn't remember their password, and then never followed up with an e-mail, how many of them showed up at a retail location within the next five days? They tried to engage online, and it didn't work out for them. I want to know whether we're losing them or whether they're showing up in person.
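
In SQL, a question like that might look roughly like the sketch below; the tables and columns (password_reset_clicks, store_visits, and so on) are illustrative assumptions rather than the actual schema.

```sql
-- Illustrative only: of the people who clicked "reset password" but never
-- completed the e-mail reset flow, how many showed up in a store within five days?
SELECT COUNT(DISTINCT pr.customer_id) AS reengaged_in_store
FROM password_reset_clicks pr
LEFT JOIN password_reset_completions pc
       ON pc.customer_id = pr.customer_id
      AND pc.completed_at > pr.clicked_at
JOIN store_visits sv
       ON sv.customer_id = pr.customer_id
      AND sv.visit_date BETWEEN CAST(pr.clicked_at AS DATE)
                            AND CAST(pr.clicked_at AS DATE) + 5
WHERE pc.customer_id IS NULL;   -- no completed reset on record
```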

That type of question might make it into a business-intelligence (BI) report a few months later, but people who are thinking about what we do every day would say, "I wonder about this," turn it into a query, and then say, "I think I found something. If we give these customers phone calls, maybe we can reset their passwords over the phone and reengage them."

Human intensive

That's just one tiny, micro example, which is why data science is truly a human-intensive exercise. You get 50-100 people working at an enterprise solving problems like that, and what you ultimately get is a positive feedback loop of self-correcting systems. Every time there's a problem, somebody is thinking about how that problem is represented in the data, how to quantify it, and, if it's significant enough, how the organization can improve in that one specific area.

The interesting piece is that all of that can be done with business logic. You need very granular data that's accessible via query, and you need reasonably fast query times, because you can't ask questions like that if you have to go get coffee every time you run a query.

Layering in predictive modeling allows you to understand the opportunity for impact if you fix that problem. One hypothesis about those users who cannot reset their passwords is that maybe they aren't that engaged in the first place; you fix their password, but it doesn't move the needle.

The other hypothesis is that these are people who are actively trying to engage with your service and are unsuccessful because of this one very specific barrier. If you have a model of user engagement at an individual level, you can say that these are really high-value users who are having this problem -- or maybe they aren't. So you take data science, align it with really smart individual-level business analysis, and what you get is an organization that continues to improve without having to make an executive-level decision for each one of those things.

Gardner: So a great deal of inquiry, experimentation, iterative improvement, and feedback loops can all come together very powerfully. I'm all for the data-scientist full-employment movement, but we need to do more than make people go through a data scientist to use and access the data and develop these feedback insights. What is it about SQL, natural language, or APIs -- what do you like to see that allows more people to directly relate to and engage with these powerful data sets?

Dyskant: One of the things is the product management of data schemas. Whenever we build an analytics database for a large-scale organization, I think a lot about an analyst who is 22, knows VLOOKUP, took some statistics classes in college, and has some personal stories about the industry that they're working in. They know, "My grandmother isn't a native English speaker, and this is how she would use this website."

So it's taking that hypothesis that’s driven from personal stories, and being able to, through a relatively simple query, translate that into a database query, and find out if that hypothesis proves true at scale.

Then, potentially take the results of that query, dump them into a statistical-analysis language, or use in-database analytics to answer that in a more robust way. What that means is that we favor very wide schemas, because I want someone to be able to write a three-line SQL statement, no joins, that answers a business question that I wouldn't have thought to put in a report. So that's the first line: analyst-friendly schemas that are accessed via SQL.
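
As a sketch of what that wide, denormalized layout buys you, a business question that would otherwise require several joins becomes a dead-simple query; again, the table and column names here are made up for illustration.

```sql
-- Wide, denormalized transaction table: descriptive attributes (region, loyalty tier)
-- are pre-appended as columns, so no joins are needed for an ad hoc business question.
SELECT store_region,
       loyalty_tier,
       COUNT(*)         AS orders,
       SUM(order_total) AS revenue
FROM transactions_wide
GROUP BY store_region, loyalty_tier;
```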

The next line is deep key performance indicators (KPIs). Once we step out of the analytics database, we drop into the wider organization, which consumes data at a different level. I always want reporting to report on opportunity for impact -- to report on whether we're reaching our most valuable customers, not on how many customers we're reaching.

"Are we reaching our most valuable customers" is much more easily addressable; you just talk to different people. Whereas, when you ask, "Are we reaching enough customers," I don’t know how find out. I can go over to the sales team and yell at them to work harder, but ultimately, I want our reporting to facilitate smarter working, which means incorporating model scores and predictive analytics into our KPIs.

Getting to the core

Gardner: Let’s step back from the edge, where we engage the analysts, to the core, where we need to provide the ability for them to do what they want and which gets them those great results.

It seems to me that when you're dealing in a campaign cycle that is very spiky, you have a short period of time where there's a need for a tremendous amount of data, but that could quickly go down between cycles of an election, or in a retail environment, be very intensive leading up to a holiday season.

Do you therefore take advantage of cloud models for your analytics that make a fit-for-purpose, pay-as-you-go approach to data and analytics possible? Tell us a little bit about your strategy for the data and the analytics engine.

Dyskant: All of our customers have a cyclical nature to them. I think that almost every business is cyclical, just some more than others. Horizontal scaling is incredibly important to us. It would be very difficult for us to do what we do without using a cloud model such as Amazon Web Services (AWS).

Also, one of the things that works well for us with HPE Vertica is the licensing model, where we can add additional performance with only the cost of hardware, or of hardware provisioned through the cloud. That allows us to scale up during the busy season, so that we can have those 150 analysts asking their own questions about the areas of the program that they're responsible for, and then, during less busy cycles, scale back down the footprint of the operation.

Gardner: Is there anything else about the HPE Vertica OnDemand platform that benefits your particular need for analysis? I'm thinking about the scale and the rows. You must have so many variables when it comes to a retail situation, a commercial situation, where you're trying to really understand that consumer?

Dyskant: I do everything I can to avoid aggregation. I want my analysts to be looking at the data at the interaction-by-interaction level. If it's a website, I want them to be looking at clickstream data. If it's a retail organization, I want them to be looking at point-of-sale data. In order to do that, we build data sets that are very frequently in the billions of rows. They're also very frequently incredibly wide, because we don't just want to know each transaction's dollar amount; we want to know things like what the other variables were and where that store was located.

Getting back to the idea that we want our queries to be dead-simple, that means that we very frequently append additional columns on to our transaction tables. We’re okay that the table is big, because in a columnar model, we can pick out just the columns that we want for that particular query.

Then, moving into some of the in-database machine-learning algorithms allows us to perform higher-order computation within the database and have less data shipping.
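
Vertica added in-database machine-learning functions around this time, so scoring can happen where the data lives instead of shipping billions of rows to an external modeling host. The sketch below gestures at that pattern; the function name, signature, and model shown are illustrative assumptions and will vary by Vertica version, so treat it as a sketch rather than exact syntax.

```sql
-- Score individuals where the data lives instead of exporting billions of rows.
-- Function name and parameters are illustrative; Vertica's in-database ML syntax
-- varies by version, and 'engagement_model' is a hypothetical pre-trained model.
SELECT person_id,
       PREDICT_LOGISTIC_REG(energy_issue_views, mobile_views, watched_any_video
                            USING PARAMETERS model_name = 'engagement_model') AS engagement_score
FROM person_features_wide;
```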

Gardner: We're almost out of time, but I wanted to do some predictive analysis ourselves. Thinking about the next election cycle, midterms, only two years away, what might change between now and then? We hear so much about machine learning, bots, and advanced algorithms. How do you predict, Erek, the way that big data will come to bear on the next election cycle?

Behavioral targeting

Dyskant: I think that a big piece of the next election will be around moving even further away from demographic targeting, toward even more behavioral targeting. How is it that we reach every voter based on what they're telling us about themselves, what matters to them, and how it matters to them? That will increasingly drive our models.

To do that probably involves another 10X scale in the data, because that type of data is generally at the clickstream level, generally at the interaction-by-interaction level, incorporating things like Twitter feeds, which adds an additional level of complexity and computational necessity to the data.

Gardner: It almost sounds like you're shooting for sentiment analysis on an issue-by-issue basis, a very complex undertaking, but it could be very powerful.

Dyskant: I think that it's heading in that direction, yes.

Gardner: I am afraid we'll have to leave it there. We've been exploring how data analysis services startup BlueLabs in Washington, DC helps presidential campaigns better know and engage with potential voters. And we've learned how organizations are working smarter by leveraging innovative statistical methods and technologies, and in this case, looking at two-way voter engagement in entirely new ways -- in this and in future election cycles.
So, please join me in thanking our guest, Erek Dyskant, Co-Founder and Vice President of Impact at BlueLabs in Washington. Thank you, Erek.

Dyskant: Thank you.

Gardner: And a big thank you as well to our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer digital transformation discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored interviews. Thanks again for listening, and please come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how data analysis services startup BlueLabs in Washington helps presidential campaigns better know and engage with potential voters. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Tuesday, October 18, 2016

How Governments Gain Economic Benefits from Inter-Public Cloud Interoperability and Standardization

Transcript of a panel discussion with members of The Open Group on the latest developments in eGovernment and cloud adoption.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect Thought Leadership Panel Discussion coming to you in conjunction with The Open Group Paris Event and Member Meeting October 24 through 27, 2016 in France.

Given that the Paris event has a focus on the latest developments in eGovernment, our panel will now explore how public-sector organizations can gain economic benefits from cloud interoperability and standardization.

As government agencies move to the public cloud computing model, the use of more than one public cloud provider can offer economic benefits through competition and choice. But are the public clouds standardized sufficiently for true interoperability, and can the large government contracts in the offing for cloud providers have an impact on the level of maturity around standardization?

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host and moderator as we examine how to best procure multiple cloud services as eGovernment services at low risk and high reward.

With that, please join me now in welcoming our panel, Dr. Chris Harding, Director for Interoperability at The Open Group. Welcome, Chris.
Harding: Thank you, Dana. It's great to be in this podcast.

Gardner: We're here also with Dave Linthicum, Senior Vice President at Cloud Technology Partners. Welcome, Dave.

Linthicum: Thank you very much, Dana.

Gardner: And lastly, we're here with Andras Szakal, Vice President and Chief Technology Officer at IBM U.S. Federal. Welcome, Andras.

Szakal: Thank you for having me.

Gardner: Andras, let's start with you. I've spoken to some people in the lead-up to this discussion about the level of government-sector adoption of cloud services, especially public cloud. They tell me that it’s lagging the private sector. Is that what you're encountering, that the public sector is lagging the private sector, or is it more complicated than that?

Szakal: It's a bit more complicated than that. Private-sector, born-on-the-cloud adoption is probably much greater than in the public sector, and that's where it differentiates. So the industry at large, from a born-on-the-cloud point of view, is very much ahead of the public-sector government implementation of born-on-the-cloud applications.

What really drove that was innovations like the Internet of Things (IoT), gaming systems, and platforms, whereas the government environment really was more about taking existing government and citizen-to-government shared services, and so on and so forth, and putting them into the cloud environment.

When you're talking about public cloud, you have to be very specific about the public sector and government, because most governments have their own industry instance of their cloud. In the federal government space, they're acutely aware of the FedRAMP-certified public-cloud environments. Those can go from FedRAMP Moderate, where you can have access to the yummy goodness of the entire cloud industry, up to FedRAMP High, which would isolate these clouds into their own environments in order to increase the level of protection and lower the risk to the government.

So, the cloud service provider (CSP) created instances of these commercial clouds fit-for-purpose for the federal government. In that case, if we're talking about enterprise applications shifting to the cloud, we're seeing the public sector government side, at the national level, move very rapidly, compared to some of the commercial enterprises who are more leery about what the implications of that movement may be over a period of time. There isn't anybody that's mandating that they do that by law, whereas that is the case on the government side.

Attracting contracts

Gardner: Dave, it seems that if I were a public cloud provider, I couldn't think of a better customer, a better account in terms of size and longevity, than some major government agencies. What are we seeing from the cloud providers in trying to attract the government contracts and perhaps provide the level of interoperability and standardization that they require?

Linthicum: The big three -- Amazon, Google and Microsoft -- are really making an effort to get into that market. They all have federal sides to their house. People are selling into that space right now, and I think that they're seeing some progress. The FAA and certainly the DoD have been moving in that direction.

However, they do realize that they have to build a net new infrastructure, a net new way of doing procurement to get into that space. In the case where the US is building the world’s biggest private cloud at the CIA, they've had to change their technology around the needs of the government.

They see it as really the "Fortune 1." They see it as the largest opportunity that’s there, and they're willing to make huge investments in the billions of dollars to capture that market when it arrives.

Gardner: It seems to me, Chris, that we might be facing a situation where we have cloud providers offering a set of services to large government organizations, but perhaps a different set to the private sector. From an interoperability and standardization perspective, that doesn’t make much sense to me.

What’s your perspective on how public cloud services and standardization are shaping up? Where did you expect things to be at this point?

Harding: The government has an additional dimension, beyond the private sector, when it comes to procurement: the need to be transparent and to spend the money that's entrusted to them by the public in a wise manner. One of the issues they have with a lack of standardization is that it makes it more difficult for them to show that they're visibly getting the best deals for the taxpayers when they come to procure cloud services.

In fact, The Open Group produced a guide to cloud computing for business a couple of years ago. One of the things that we argued in that was that, when procuring cloud services, the enterprise should model the use that it intends to make of the cloud services and therefore be able to understand the costs that they were likely to incur. This is perhaps more important for government, even more than it is for private enterprises. And you're right, the lack of standardization makes it more difficult for them to do this.

Gardner: Chris, do you think that interoperability is of a higher order of demand in public-sector cloud acquisition than in the private sector, or should there be any differentiation?

Need for interoperability

Harding: Both really have the need for interoperability. The public sector perhaps has a greater need, simply because it’s bigger than a small enterprise and it’s therefore more likely to want to use more cloud services in combination.

Gardner: We've certainly seen a lot of open-source platforms emerge in private cloud as well as hybrid cloud. Is that a driving force yet in the way that the public sector is looking at public cloud services acquisition? Is open source a guide to what we should expect in terms of interoperability and standardization in public-cloud services for eGovernment?

Szakal: Open source, from an application implementation point of view, is one of the questions you're asking, but are you also suggesting that somehow these cloud platforms will be reconsidered or implemented via open source? There's truth to both of those statements.

IBM is the number two cloud provider in the federal government space, if you look at hybrid and the commercial cloud for which we provide three major cloud environments. All of those cloud implementations are based on open source -- OpenStack and Cloud Foundry are key pieces of this -- as well as the entire DevOps lifecycle.

So, open source is important, but if you think of open source as a way to ensure interoperability, kind of what we call in The Open Group environment "Executable Standards," it is a way to ensure interoperability.

That’s more important at the cloud-stack level than it is between cloud providers, because between cloud providers you're really going to be talking about API-driven interoperability, and we have that down pretty well.

So, the economy of APIs and the creation of these composite services are going to be very, very important elements. If they're closed and not open to following the normal RESTful approaches defined by the W3C and other industry consortia, then it's going to be difficult to create these composite clouds.

Gardner: We saw that OpenStack had its origins in a government agency, NASA. In that case, clearly a government organization, at least in the United States, was driving the desire for interoperability and standardization, a common platform approach. Has that been successful, Dave? Why wouldn’t the government continue to try to take that approach of a common, open-source platform for cloud interoperability?

Linthicum: OpenStack has had some fair success, but I wouldn't call it excellent success. One of the issues is that the government left it dangling out there, and while they use some aspects of it, I really expected them to drive more adoption around that open standard, for lots of reasons.

So, they have to hack the operating systems and meet very specific needs around security, governance, compliance, and things like that. They have special use cases, such as the DoD's real-time weapons-control systems, and some IoT stuff that the government would like to move into. So, that's out there as an opportunity.

In other words, the ability to work with some of the distros out there, and there are dozens of them, and get into a special government version of that operating system, which is supported openly by the government integrators and providers, is something they really should take advantage of. It hasn’t happened so far and it’s a bit disappointing.

Insight into Europe

Gardner: Do any of you have any insight into Europe and some of the government agencies there? They haven’t been shy in the past about mandating certain practices when it comes to public contracts for acquisition of IT services. I think cloud should follow the same path. Is there a big difference in what’s going on in Europe and in North America?

Szakal: I just got off the phone a few minutes ago with my counterpart in the UK. The nice thing about the way the UK government is approaching cloud computing is that they're trying to do so by taking the handcuffs off the vendors and making sure that they are standards-based. They're meeting a certain quality of services for them, but they're not mandating through policy and by law the structure of their cloud. So, it allows for us, at least within IBM, to take advantage of this incredible industry ecosystem you have on the commercial side, without having to consider that you might have to lift and shift all of this very expensive infrastructure over to these industry clouds.

The EU is, in similar ways, following a similar practice. Obviously, data sovereignty is really an important element for most governments. So, you see a lot of focus on data sovereignty and data portability, more so than we do around strict requirements in following a particular set of security controls or standards that would lock you in and make it more difficult for you to evolve over a period of time.
Gardner: Chris Harding, to Andras’ point about data interoperability, do you see that as a point on the arrow that perhaps other cloud interoperability standards would follow? Is that something that you're focused on more specifically than more general cloud infrastructure services?

Harding: Cloud is a huge spectrum, from the infrastructure services at the bottom, up to the business services, the application services, and software as a service (SaaS), and data interoperability sits on top of that stack.

I'm not sure that we're ready to get real data interoperability yet, but the work that's being done on trying to establish common frameworks for understanding data, for interpreting data, is very important as a basis for gaining interoperability at that level in the future.

We also need to bear in mind that the nature of data is changing. It’s no longer a case that all data comes from a SQL database. There are all sorts of ways in which data is represented, including human forms, such as text and speech, and interpreting those is becoming more possible and more important.

This is the exciting area, where you see the most interesting work on interoperability.

Gardner: Dave Linthicum, one of the things that some of us who have been proponents of cloud for a number of years now have looked to is the opportunity to get something that couldn’t have been done before, a whole greater than the sum of the parts.

It seems to me that if you have a common cloud fabric, and a sufficient amount of interoperability for data and/or applications and infrastructure services, and that cuts across both the public and the private sector, then many long-standing problems could be solved: the difficulty we've had with health-insurance payer and provider interoperability and communication, the sharing of government services and data with the private sector, and many of the things that have probably been blamed on bureaucracy and technical backwardness. In some ways, those could be solved if there were a common public-cloud approach adopted by the major public cloud providers. It seems to me a very significant benefit could be drawn when the public and private sectors have a commonality that owning your own data centers, as in the past, just couldn't provide.

Am I chewing on too much pie in the sky here, Dave, or is there actually something to be said about the cloud model, not just between government to government agencies, but the public and private sectors?

Getting more savvy

Linthicum: The public-cloud providers out there, the big ones, are getting more savvy about providing interoperability, because they realized that it’s going to be multi-cloud. It’s going to be different private and public cloud instances, different kinds of technologies, that are there, and you have to work and play well with a number of different technologies.

However, to be a little bit more skeptical, over the years, I've found out that they're in it for their own selfish interests, and they should be, because they're corporations. They're going to basically try to play up their technology to get into a market and hold on to the market, and by doing that, they typically operate against interoperability. They want to make it as difficult as possible to integrate with the competitors and leverage their competitors’ services.

So, we have that kind of dynamic going on, and it’s incredibly frustrating, because we can certainly stand up, have the discussion, and reveal the concepts. You just did a really good job in revealing that this has been Nirvana, and we should start moving in this direction. You will typically get lots of head-nodding from the public-cloud providers and the private-cloud providers but actions speak louder than words, and thus far, it’s been very counterproductive.

Interoperability is occurring but it’s in dribs and drabs and nothing holistic.

Gardner: Chris, it seems as if the earlier you try to instill interoperability and standardization, both in technical terms as well as methodological, the better you're able to carry that into the future, where we don't repave cow paths, but instead have highly non-interoperable data centers replaced by the cloud, rather than by some building that you control.

What do you think is going to be part of the discussion at The Open Group Paris Event, October 24, around some of these concepts of eGovernment? Shouldn’t they be talking about trying to make interoperability something that's in place from the start, rather than something that has to be imposed later in the process?

Harding: Certainly this will be an important topic at the forthcoming Paris event. My personal view is that the question of when you should standardize something to gain interoperability is a very difficult balancing act. If you do it too late, then you just get a mess of things that don’t interoperate, but equally, if you try to introduce standards before the market is ready for them, you generally end up with something that doesn’t work, and you get a mess for a different reason.

Part of the value of industry events, such as The Open Group events, is for people in different roles in different organizations to be able to discuss with each other and get a feel for the state of maturity and the directions in which it's possible to create a standard that will stick. We're seeing a standard paradigm, the API paradigm, that was mentioned earlier. We need to start building more specific standards on top of those, and certainly in Paris and at future Open Group events, those are the things we'll be discussing.

Gardner: Andras, you wear a couple of different hats. One is Chief Technology Officer at IBM US Federal, but you're also very much involved with The Open Group; I think you're on the Board of Directors. How do you see the progression of what The Open Group has been able to do in other spheres around standardization, both methodological, such as the TOGAF® enterprise architecture framework, an Open Group standard, and in the implementation and enforcement of standards? Is what The Open Group has done in the past something you expect to be applicable to these cloud issues?

Szakal: IBM has a unique history, being one of the only companies in the technology arena that is over 100 years old and has been able to retain great value to its customers over that long period of time, and we shifted from a fairly closed computing environment to this idea of open interoperability and freedom of choice.

That's our approach for our cloud environment as well. What drives us in this direction is that our customers require it of IBM, and we're a common infrastructure and a glue that binds together many of our enterprise customers -- including the largest financial, banking, and healthcare institutions in the world -- to ensure that they can interoperate with other vendors.

As such, we were one of the founders of The Open Group, which has been at the forefront of helping facilitate this discussion about open interoperability. I'm totally with Chris as to when you would approach that. As I said before, my concern is that you interoperate at the service level in the economy of APIs. That suggests there are some other elements to that, not just the API itself, but the ability to effectively manage credentials, security, and other common services, like being able to manage object stores in the place where you would like to store your information, so that data sovereignty isn't an issue. These are all things that will occur over a period of time.

Early days

It's early, heady days in the cloud world, and we're going to see all of that goodness come to pass here as we go forward. In reality, we talk about cloud as if it's a thing. Its true value isn't so much in the technology, but in creating these new disruptive business capabilities and business models. Openness of the cloud doesn't, by itself, facilitate the creation of those new business models.

That's where we need to focus. Are we able to actually drive these new collaborative models with our cloud capabilities? You're going to be interoperating with many CSPs, not just two, three, or four, especially as you see different sectors grow into the cloud. It won't matter where they operate their cloud services from; it will matter how they actually interoperate at that API level.

Gardner: It certainly seems to me that the interoperability is the killer application of the cloud. It can really foster greater inter-department collaboration and synergy, government to government, state to federal, across the EU, for example as well, and then also to the private sector, where you have healthcare concerns and you've got monetary and banking and finance concerns all very deeply entrenched in both public and private sectors. So, we hope that that’s where the openness leads to.

Chris, before we wrap up, it seems to me that there's a precedent that has been set successfully with The Open Group, when it comes to security. We've been able to do some pretty good work over the past several years with cloud security using the adoption of standards around encryption or tokenization, for example. Doesn’t that sort of give us a path to greater interoperability at other levels of cloud services? Is security a harbinger of things to come?

Harding: Security certainly is a key aspect that needs to be incorporated in the standards where we build on the API paradigm. But some people talk about the move to digital transformation, the digital enterprise. So, cloud and other things like IoT, big-data analysis, and so on are all coming together, and a key underpinning requirement for that is platform integration. That's where the Open Platform 3.0™ Forum of The Open Group is centering its work: on the possibilities for platform interoperability to enable digital platform integration. Security is a key aspect of that, but there are other aspects too.

Gardner: I am afraid we will have to leave it there. We've been discussing the latest developments in eGovernment and cloud adoption with a panel of experts. Our focus on these issues comes in conjunction with The Open Group Paris Event and Member Meeting, October 24-27, 2016 in Paris, France, and there is still time to register.

So please check out The Open Group website at www.opengroup.org for more information on that event, and many others coming in the future.

With that, I'd like to thank our guests, Dr. Chris Harding, Director for Interoperability at The Open Group; David Linthicum, Senior Vice President at Cloud Technology Partners, and Andras Szakal, Vice President and Chief Technology Officer at IBM US Federal.
And a big thank you as well to The Open Group for sponsoring this discussion, and lastly, thank you to our audience for joining us on this BriefingsDirect panel discussion. This is Dana Gardner; Principal Analyst at Interarbor Solutions, your host and moderator. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: The Open Group.

Transcript of a panel discussion with members of The Open Group on the latest developments in eGovernment and cloud adoption. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Friday, October 14, 2016

How ServiceMaster Develops Applications with a Security-Minded Focus as a DevOps Benefit

Transcript of a discussion on how security technology used in software development leads to DevOps efficiencies with many additional business benefits.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on technology innovation -- and how it's making an impact on people's lives.

Our next security innovation and transformation discussion explores how home-maintenance repair and services provider ServiceMaster develops applications with a security-minded focus as a DevOps benefit.

To share how security technology leads to posture maturity and DevOps efficiencies with many business benefits, we're joined by Jennifer Cole, Chief Information Security Officer and Vice President of IT, Information Security, and Governance for ServiceMaster in Memphis, Tennessee. Welcome, Jennifer.

Jennifer Cole: Thank you.
Gardner: We're also here with Ashish Kuthiala, Senior Director of Marketing and Strategy at Hewlett Packard Enterprise DevOps. Welcome, Ashish.

Ashish Kuthiala: Thank you, Dana.

Gardner: Jennifer, tell me, what are some of the top trends that drive your need for security improvements and that also spurred DevOps benefits?

Cole: When we started our DevOps journey, security was a little bit ahead of the curve for application security and we were able to get in on the front end of our DevOps transformation.

The primary reason for our transformation as a company is that we are an 86-year-old company that has seven brands under one umbrella, and we needed to have one brand, one voice, and be able to talk to our customers in a way that they wanted us to talk to them.

That means enabling IT to get capabilities out there quickly, so that we can interact with our customers "digital first." As a result of that, we were able to see an increase in the way that we looked at security education and process. We were normally doing our penetration tests after the fact of a release. We were able to put tools in place to test prior to a release, and also teach our developers along the way that security is everyone's responsibility.

ServiceMaster has been fortunate that we have a C-suite willing to invest in DevOps and an Agile methodology. We also had developers who were willing to learn, and with the right intent to deliver code that would protect our customers. Those things collided, and we have the perfect storm.

So, we're delivering quicker, but we also fail faster, allowing us to go back and fix things quicker. And we're seeing that what we're delivering is a lot more secure.

Gardner: Ashish, it seems obvious, having heard Jennifer describe it, DevOps and security hand-in-hand -- a whole greater than the sum of the parts. Are you seeing this more across various industries?

Stopping defects

Kuthiala: Absolutely. With the adoption of DevOps increasing across enterprises, security is no different from any other quality-assurance (QA) testing that you do. You can't let a defect reach your customer base, and you can't let a security flaw reach your customer base either.

If you look at it from that perspective, and the teams are willing to work together, security is treated no differently than any other QA process. This boils down not just to the vulnerability of the software that you're releasing into the marketplace; there are also so many different regulations and compliance [needs] -- internal, external, your own company policies -- that you have to take a look at. You don't want to go faster and compromise security. So, it's an essential part of DevOps.

Cole: DevOps allows for continuous improvement, too. Security now comes at the front of the SDLC process, while in the old days, security came last. We found problems after they were in production or after something had been compromised. Now, we're at the beginning of the process, and we actually get to train the people who are at the beginning of the process on how and why to deliver things that are safe for our customers.

Gardner: Jennifer, why is security so important? Is this about your brand preservation? Is this about privacy and security of data? Is this about the ability for high performance to maintain its role in the organization? All the above? What did I miss? Why is this so important?

Cole: Depending on the lens that you are looking through, that answer may be different. For me, as a CISO, it's making sure that our data is secure and that our customers have trust in us to take care of their information. The rest of the C-suite, I am sure, feels the same, but they're also very focused on transformation to digital-first, making sure customers can work with us in any way that they want to and that their ServiceMaster experience is healthy.

Our leaders also want to ensure our customers return to do business with us and are happy in the process.  Our company helps customers in some of the most difficult times in their life, or helps them prevent a difficult time in the ownership of their home.

But for me and the rest of our leadership team, it's making sure that we're doing what's right. We're training our teams along the way to do what's right, to just make the overall ServiceMaster experience better and safe. As young people move into different companies, we want to make sure they have that foundation of thinking about security first -- and also the customer.

We tend to put IT people in a back room, and they never see the customer. This methodology allows IT to see what they could have released and correct it if it's wrong, and we get an opportunity to train for the future.

Through my lens, it’s about protecting our data and making sure our customers are getting service that doesn't have vulnerabilities in it and is safe.

Gardner: Now, Ashish, user experience is top of mind for organizations, particularly organizations that are customer focused like ServiceMaster. When we look at security and DevOps coming together, we can put in place the requirements to maintain that data, but it also means we can get at more data and use it more strategically, more tactically, for personalization and customization -- and at the same time, making sure that those customers are protected.

How important is user experience and data gathering now when it comes to QA and making applications as robust as they can be?

Million-dollar question

Kuthiala: It's a million-dollar question. I'll give you an example of a client I work with. I happen to use their app very, very frequently, and I happen to know the team that owns that app. They told me about 12 months ago that they had invested -- let’s just make up this number -- $1 million in improving the user experience. They asked me how I liked it. I said, "Your app is good. I only use this 20 percent of the features in your app. I really don’t use the other 80 percent. It's not so useful to me."

That was an eye-opener to them, because the $1 million or so that they had invested in enriching the user experience -- if they knew exactly what I was doing as a user, what I used, what I did not use, where I had problems -- could have gone toward that 20 percent that I use. They could have made it better than anybody else in the marketplace and also gathered information on what it is that the market wants by monitoring the user experience of people like me.

It's not just the availability and health of the application; it’s the user experience. It's having empathy for the user, as an end-user. HPE of course, makes a lot of these tools, like HPE AppPulse, which is very specifically designed to capture that mobile user experience and bring it back before you have a flood of calls and support people screaming at you as to why the application isn’t working.

Security is also one of those things. All is good until something goes wrong. You don't want to be in a situation when something has actually gone wrong and your brand is being dragged through mud in the press, your revenue starts to decline, and then you look at it. It’s one of those things that you can't look at after the fact.

Gardner: Jennifer, this strikes me as an under-appreciated force multiplier, that the better you maintain data integrity, security, and privacy, the more trust you are going to get to get more data about your customers that you can then apply back to a better experience for them. Is that something that you are banking on at ServiceMaster?
Cole: Absolutely. Trust is important, not only with our customers, but also with our employees and leaders. We want people to feel they're in a healthy environment, where they can give us feedback on that user experience. To Ashish's point, DevOps gives us the ability to deliver what the business wants IT to deliver for our customers.

For the past 25 years, IT has decided what the customer would like to see. In this methodology, you're working with business partners who understand their products and their customers, and they tell you which features need to be delivered. Then you're able to pick the minimum viable product and deliver it first, so that you can capture that 20 percent of functionality.

Also, if you're putting security in front of that, it means security is not coming back to you later with penetration-test results and saying you have all of these things to fix, which takes time away from delivering something new for our customers.

This methodology pays off, but the journey is hard. It’s tough because in most companies you have a legacy environment to support alongside the new application environment you're creating. There's a healthy balance to find there, and it takes time. But we've seen quicker results and better revenue, and our customers are happier; they're enjoying the overall ServiceMaster experience rather than just our individual brand families. We've really embraced the methodology.

Gardner: Do you have any examples you can recall where you've done development projects and been able to track the data around a particular application -- what's going on with the testing, and how that's applied back as a DevOps benefit? Maybe you could walk us through an example of where this has really worked well.

Digital first

Cole: About a year and a half ago, we started with one of our brands, American Home Shield, and looked for the low-hanging fruit -- the minimum viable product -- in that brand for digital first. Let me describe the business a little bit. Customers reach out to us and purchase a policy for their house, and we maintain the appliances and such in their home, but it is a contractor-based company. We send out a contractor who is not a ServiceMaster associate.

We have to make that work and make our customers feel like they've had a seamless experience with American Home Shield. We had an opportunity in that brand for digital first. We went after it and drastically changed the way our customers do business with us. Now it's caught on like wildfire, and we're really trying to focus on one brand and one voice. That's a top-down decision, which does help us move faster.

All seven of our brands are home services. We're in 75,000 homes a day and we needed to identify the customers of all the brands, so that we could customize the way that we do business with them. DevOps allows us to move faster into the market and deliver that.

Gardner: Ashish, there aren't that many security vendors that do DevOps, or DevOps vendors that do security. At HPE, how have you made advances in terms of how these two areas come together?

Kuthiala: The strength of HPE in helping its customers lies in the very fact that we have an end-to-end, diverse portfolio. Jennifer talked about taking security practices and not leaving them toward the end of the cycle, but moving them to the very beginning, which means you have to get developers to start thinking like security experts and working with the security experts.

Given that we have a portfolio that spans the developers and the security teams, our best practices include building our own customer-facing software products that incorporate security practices, so that when developers are writing code, they can immediately see any security threats and whether their code complies with applicable policies. Even before code is checked in, the process runs it through security checks and follows it all the way through the software development lifecycle.

These are security-focused feedback loops. At any point, if there is a problem, the changes are rejected and sent back or feedback is sent back to the developers immediately.

If it makes it through the cycle and a known vulnerability is found before release to production, we have tools such as App Defender that can plug in to protect the code in production until developers can fix it, allowing you to go faster but remain protected.
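As a rough sketch of the kind of pipeline gate Kuthiala describes, the Python below scans a change set, rejects it with immediate feedback to the developer when new issues are found, and flags known issues for runtime protection instead of blocking the release. The run_security_scan() function and its result fields are illustrative assumptions, not the actual App Defender API or any specific scanner's interface.

    # Hypothetical security gate in a delivery pipeline: reject changes with
    # new vulnerabilities; ship known ones only behind runtime protection.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ScanResult:
        new_vulnerabilities: List[str] = field(default_factory=list)
        known_vulnerabilities: List[str] = field(default_factory=list)

    def run_security_scan(change_set: str) -> ScanResult:
        # Stand-in for a static-analysis and policy-compliance scan.
        return ScanResult()

    def security_gate(change_set: str) -> bool:
        result = run_security_scan(change_set)
        if result.new_vulnerabilities:
            # Send feedback straight back to the developer and stop here.
            print("REJECTED:", change_set, "->", result.new_vulnerabilities)
            return False
        for vuln in result.known_vulnerabilities:
            # Known issue found late: release behind runtime protection and
            # track a fix, rather than halting the whole release.
            print("Enabling runtime protection for:", vuln)
        print("ACCEPTED:", change_set)
        return True

    security_gate("feature/appointment-scheduling")

In a real pipeline the same check would run at check-in, at build, and again before release, so feedback reaches developers at every stage rather than only at the end.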

Cole: It blocks it from the customer until you can fix it.

Kuthiala: Jennifer, can you describe a little bit how you use some of these products?

Strategic partnership

Cole: Sure. We’ve had a great strategic partnership with HPE in this particular space. Application security caught fire about two years ago at RSA, which is one of the main security conferences for anyone in our profession.

The topic of application security has not been a focus for CISOs, in my opinion. I was fortunate that a great team member came back and said we have to get on board with this. We had some conversations with HPE and ended up in a great strategic partnership. They've really held our hands and helped us get through the process. In turn, that helped make them better as well as us, and that's what a strategic partnership should be about.

Now, we're watching things as they're developed, so we're teaching the developer in real time. Then, if something happens to get through, we have App Defender, which will actually contain it until we can fix it, before it reaches our customers. If all of those defenses don’t work, we still do the penetration test, along with many other controls that are in place. We also try to go back to the grassroots: sit down with the developers and help them understand why they would want to develop differently next time.

Someone from security is in every one of the development scrum meetings and on all of the product teams. We also participate in Big Room Planning. We're trying to move out of that overall governing role and into a peer-to-peer role, helping each other learn and explaining why we want things done a certain way.

Gardner: It seems to me that, having gone at this at the methodological level, with those collaboration issues solved and security-minded people brought into the scrum, you're in a position to scale this. I imagine more and more of your applications are going to be mobile, with continuous development. You may also start using microservices for development, and ultimately the Internet of Things (IoT), if you start measuring more and more things in homes through your contractors.

Cole: We reach 75,000 homes a day. So, you can imagine that all of those things are going to play a big part in our future.

Gardner: Before we sign off, perhaps you have projections as to where you'd like to see things go. How can DevOps and security work better for you as a tag team?

Cole: For me, the next step for ServiceMaster specifically is making solid plans to migrate off of our legacy systems, so that we can truly focus on maturing DevOps and delivering for our customer in a safer, quicker way, and so we're not always having to balance this legacy environment and this new environment.

If we could accelerate that, I think we will deliver to the customer quicker and also more securely.

Gardner: Ashish, last word, what should people who are on the security side of the house be thinking about DevOps that they might not have appreciated?

Higher quality

Kuthiala: The whole point of adopting DevOps is to deliver your software to your customers faster and with higher quality, and that says it all. DevOps is an opportunity for security teams to get deeply embedded in the mindset of the developers, business planners, testers, and production teams -- essentially the whole software development lifecycle -- which they didn't have the opportunity to do before.

They would usually come in before code went to production and often push back the production cycle by a few weeks, because they had to do the right thing and ensure the release of secure code. Now they're able to collaborate with and educate developers, sit down with them, and tell them exactly what they need to design in order to deliver secure code right from the design stage. It's an opportunity to make this a lot better and more secure for their customers.

Cole: The key is security being a strategic partner with the business and the rest of IT, instead of just being a governing body.

Gardner: I'm afraid we'll have to leave it there. We've been discussing how home-maintenance and repair services provider ServiceMaster develops applications with a security-minded focus and a DevOps benefit. And we've seen how security technology leads to posture maturity and DevOps efficiencies, with many additional business benefits.
So join me in thanking our guests, Jennifer Cole, CISO and Vice President of IT, Information Security, and Governance for ServiceMaster in Memphis, Tennessee, and Ashish Kuthiala, Senior Director of Marketing and Strategy at Hewlett Packard Enterprise DevOps.

And I'd like to also thank our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer security transformation discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and please come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how security technology used in software development leads to DevOps efficiencies with many additional business benefits.  Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.
