Monday, November 07, 2016

Swift and Massive Data Classification Advances Score a Win for Better Securing Sensitive Information

Transcript of a discussion on how cybersecurity attacks are on the rise but new data capabilities bring intelligence to the edge to stifle data loss risk.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on business digital transformation. Stay with us now to learn how agile companies are fending off disruption -- in favor of innovation.

Our next case study explores how -- in an era when cybersecurity attacks are on the rise and enterprises and governments are increasingly vulnerable -- new data intelligence capabilities are being brought to the edge to provide better data loss prevention (DLP).

We'll learn how Digital Guardian in Waltham, Massachusetts analyzes both structured and unstructured data to predict and prevent loss of data and intellectual property (IP) with increased accuracy.

To learn how data recognition technology supports network and endpoint forensic insights for enhanced security and control, we're joined by Marcus Brown, Vice President of Corporate Business Development for Digital Guardian.
Welcome, Marcus.

Marcus Brown: Hi, Dana. Great to be here.

Gardner: Set the stage for us. What are some of the major trends making DLP even more important, even more effective?

Brown: Data protection has very much come to the forefront in the last couple of years. Unfortunately, we wake up every morning and read in the newspapers, see on television, and hear on the radio a lot about data breaches. Pretty much every type of company and organization, including government organizations, is being hit by this phenomenon at the moment.

So, awareness is very high, and apart from the frequency, a couple of key points are changing. First of all, you have a lot of very skilled adversaries coming into this: criminals, nation-state actors, hacktivists, and many others. All these people are well-trained and very well resourced to come after your data. That means that companies have a pretty big challenge in front of them. The threat has never been bigger.

In terms of data protection, there are a couple of key trends at the cyber-security level. People have been aware of the so-called insider threat for a long time. This could be a disgruntled employee or it could be someone who has been recruited for monetary gain to help some organization get to your data. That’s a difficult one, because the insider has all the privilege and the visibility and knows where the data is. So, that’s not a good thing.

Then, you have well-meaning employees who just make mistakes. It happens to all of us. We click something in Outlook, end up with a different email address than the one we intended, and the message goes out. Those well-meaning employees are part of the insider threat as well.

Outside threats

What’s really escalated over the last couple of years are the advanced external attackers or the outside threat, as we call it. These are well-resourced, well-trained people from nation-states or criminal organizations trying to break in from the outside. They do that with malware or phishing campaigns.

About 70 percent of attacks start with a phishing campaign, when someone clicks on something that looked normal. Then, there's just general hacking -- a lot of people getting in without malware at all, using techniques that don't rely on it.

People have become so good at developing malware and targeting malware at particular organizations, at particular types of data, that a lot of tools like antivirus and intrusion prevention just don’t work very well. The success rate is very low. So, there are new technologies that are better at detecting stuff at the perimeter and on the endpoint, but it’s a tough time.

There are internal and external attackers. A lot of people outside are ultimately after the two main types of data that companies have. One is customer data -- credit card numbers, healthcare information, and all that stuff -- which can be sold on the black market for so-and-so many dollars per record. It's a billion-dollar business. People are very motivated to do this.

Most companies don't want to lose their customers' data. That's seen as a pretty bad thing, a bad breach of trust, and people don't like that. Then, obviously, any company that has a product has IP that it spent lots of money developing, whether it's the new model of a car or some piece of electronics. It could be a movie, some new clothing, or whatever. It's something that you have developed, and it's secret IP. You don't want that to get out, nor all of your other internal information, whether it's your financials, your plans, or your pricing. There are a lot of people going after both of those things, and that's really the challenge.

In general, the world has become more mobile and spread out. There is no more perimeter to stop people from getting in. Everyone is everywhere, private life and work life is mixed, and you can access anything from anywhere. It’s a pretty big challenge.

Gardner: Even though there are so many different types of threats, internal, external, and so forth, one of the common things that we can do nowadays is get data to learn more about what we have as part of our inventory of important assets.

While we might not be able to seal off that perimeter, maybe we can limit the damage that takes place by early detection of problems. The earlier that an organization can detect that something is going on that shouldn’t be, the quicker they can come to the rescue. Let’s look at how the instant analysis of data plays a role in limiting negative outcomes.

Can't protect everything

Brown: If you want to protect something, you have to know it's sensitive and that you want to protect it. You can't protect everything. You've got to find out which data is sensitive, and we're able to do that on the fly, recognizing sensitive data and nonsensitive data. That's a key part of the DLP puzzle, the data protection puzzle.

We work for some pretty large organizations, some of the largest companies and government organizations in the world, as well as a lot of medium- and smaller-sized customers. Whatever it is we're trying to protect, personal information or indeed the IP, we need to be in the right place to see what people are doing with that data.

Our solution consists of two main types of agents. Some agents are on endpoint computers, which could be desktops or servers -- Windows, Linux, and Macintosh. The endpoint computer is a good place to be, because that's where people, particularly insiders, come into play and start doing something with data. That's where people work. That's how they come into the network, and it's how they handle a business process.

So the challenge in DLP is to support the business process. Let people do with data what they need to do, but don’t let that data get out. The way to do that is to be in the right place. I already mentioned the endpoint agent, but we also have network agents, sensors, and appliances in the network that can look at data moving around.

The endpoint is really in the middle of the business process. Someone is working, they're working with different applications, getting data out of those applications, and they're doing whatever they need to do in their daily work. That's where we sit, right in the middle of that, and we can see who the user is and what application they're working with. It could be an engineer working with a computer-aided design (CAD) or product lifecycle management (PLM) system developing some new automobile or whatever, and that's a great place to be.

We rely very heavily on the HPE IDOL technology to help us classify data. We use it particularly for structured data, anything like a credit card number or other alphanumeric data. It could also be free text about healthcare, patient information, and that sort of thing.

We use IDOL to help us scan documents. We can recognize patterns with regular expressions -- credit card numbers, Social Security numbers, that type of thing -- and we can also recognize terminology. We rely on the fact that IDOL supports hundreds of languages and many different subject areas. So, using IDOL, we're able to recognize pretty much anything that's written in textual language.
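
To make the pattern side of that concrete, here is a minimal Python sketch of regex-based recognition for credit-card-like and Social-Security-like strings. The patterns, the Luhn check, and the category names are simplified illustrations of the general technique, not Digital Guardian's or IDOL's actual rules.

import re

# Illustrative patterns only -- real DLP rule sets are far more thorough.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def luhn_ok(candidate: str) -> bool:
    # Luhn checksum, commonly used to weed out false credit-card matches.
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return len(digits) >= 13 and total % 10 == 0

def classify(text: str) -> set:
    # Return the set of sensitive-data categories detected in the text.
    tags = set()
    tags.update("credit_card" for m in PATTERNS["credit_card"].finditer(text)
                if luhn_ok(m.group()))
    if PATTERNS["ssn"].search(text):
        tags.add("ssn")
    return tags

print(classify("Card 4111 1111 1111 1111, SSN 078-05-1120"))
# -> {'credit_card', 'ssn'} (set order may vary)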

Our endpoint agent also has some of its own intelligence built in, on top of that, which we call contextual recognition or contextual classification. As I said, we see the customer list coming out of Salesforce.com, or we see the jet-fighter design coming out of the PLM system, and we then tag that as well. We're using IDOL, we're using some of our own technology, and we're using our vantage point on the endpoint, in the middle of the business process, to figure out what the data is.

We call that data-in-use monitoring and, once we see something is sensitive, we put a tag on it, and that tag travels with the data no matter where it goes.

An interesting thing is that if you have someone making a mistake -- an unintentional, well-meaning employee accidentally attaching the wrong document to something before it goes out -- we will obviously warn the user about that.

We can stop that

If you have someone who is very malicious and is trying to obfuscate what they're doing, we can see that as well. For example, if they take a screenshot of some top-secret diagram, embed that in a PowerPoint, and then encrypt the PowerPoint, we're still tagging those documents. Anything that results from IP or top-secret information keeps its tags. When the person then goes to put it on a thumb drive, put it on Dropbox, or whatever, we see that and stop it.
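
One way to picture that behavior is as derived artifacts inheriting the union of their sources' tags, so sensitivity survives screenshots, embedding, and encryption. The sketch below is only a conceptual illustration of that idea, with invented names and an invented policy; it is not Digital Guardian's implementation.

class Artifact:
    # Every derived artifact inherits the union of its sources' tags.
    def __init__(self, name, tags=(), sources=()):
        self.name = name
        self.tags = set(tags)
        for src in sources:
            self.tags |= src.tags

BLOCKED_DESTINATIONS = {"usb", "dropbox"}  # illustrative egress policy

def allow_transfer(artifact, destination):
    # Block tagged (sensitive) artifacts from leaving via risky channels.
    return not (artifact.tags and destination in BLOCKED_DESTINATIONS)

diagram = Artifact("top_secret_diagram.vsd", tags={"ip", "top_secret"})
screenshot = Artifact("screenshot.png", sources=[diagram])
deck = Artifact("pitch.pptx", sources=[screenshot])
encrypted = Artifact("pitch.pptx.enc", sources=[deck])

print(encrypted.tags)                    # {'ip', 'top_secret'} (order may vary)
print(allow_transfer(encrypted, "usb"))  # False -- the copy is stopped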

So those are the two parts of the problem: classify it -- that's what we rely on IDOL a lot for -- and then stop it from going out -- that's what our agent is responsible for.

Gardner: Let’s talk a little bit about the results here, when behaviors, people and the organization are brought to bear together with technology, because it’s people, process and technology. When it becomes known in the organization that you can do this, I should think that that must be a fairly important step. How do we measure effectiveness when you start using a technology like Digital Guardian? Where does that become explained and known in the organization and what impact does that have?

Brown: Our whole approach is a risk-based approach and it’s based on visibility. You’ve got to be able to see the problem and then you can take steps and exercise control to stop the problems.
When you deploy our solution, you immediately gain a lot of visibility. I mentioned the endpoints and I mentioned the network. Basically, you get a snapshot without deploying any rules or configuring in any complex way. You just turn this on and you suddenly get this rich visibility, which is manifested in reports, trends, and all this stuff. What you get, after a very short period of time, is a set of reports that tell you what your risks are, and some of those risks may be that your HR information is being put on Dropbox.

You have engineers putting the source code onto thumb drives. It could all be well-meaning, they want to work on it at home or whatever, or it could be some bad guy.

One of the biggest points of risk in any company is when an employee resigns and decides to move on. A lot of our customers use the monitoring and the reporting we have at that time to actually sit down with the employee and say, "We noticed that you downloaded 2,000 files and put them on a thumb drive. We'd like you to sign this saying that you're going to give us that data back."

That’s a typical use case, and that’s the visibility you get. You turn it on and you suddenly see all these risks, hopefully, not too many, but a certain number of risks and then you decide what you're going to do about it. In some areas you might want to be very draconian and say, "I'm not going to allow this. I'm going to completely block this. There is no reason why you should put the jet fighter design up on Dropbox."

Gardner: That’s where the epoxy in the USB drives comes in.

Warning people

Brown: Pretty much. On the other hand, you don't want to stop people from using USB, because it's about their productivity. So, you might want to warn people: if you're putting some financial data onto a thumb drive, we're going to encrypt it so nothing can happen to it, but do you really want to do this? Is this approach appropriate? People get a feeling that they're being monitored and that the way they're acting maybe isn't according to company policy. So, they'll back out of it.

In a nutshell, you look at the status quo, you put some controls in place, and after those controls are in place, within the space of a week, you suddenly see the risk posture changing, getting better, and the incidence of these dangerous actions dropping dramatically.

Very quickly, you can measure the security return on investment (ROI) in terms of people’s behavior and what’s happening. Our customers use that a lot internally to justify what they're doing.

Generally, you can get rid of a very large amount of the risk, say 90 percent, with an initial pass or two of rules that say we don't want this, we don't want that. Then, you're monitoring the status, and suddenly, new things will happen. People discover new ways of doing things, and then you've got to put some controls in place, but you're pretty quickly up into the 90 percent range, and then you fine-tune to get those last little bits of risk out.

Gardner: Because organizations are becoming increasingly data-driven, they're getting information and insight across their systems and their applications. Now, you're providing them with another data set that they could use. Is there some way that organizations are beginning to assimilate and analyze multiple data sets including what Digital Guardian’s agents are providing them in order to have even better analytics on what’s going on or how to prevent unpleasant activities?

Brown: In this security world, you have the security operations center (SOC), which is kind of the nerve center where everything to do with security comes into play. The main piece of technology in that area is the security information and event management (SIEM) technology. The market leader is HPE’s ArcSight, and that’s really where all of the many tools that security organizations use come together in one console, where all of that information can be looked at in a central place and can also be correlated.

We provide a lot of really interesting information for the SIEM in the SOC. I already mentioned we're on the endpoint and the network, particularly on the endpoint. That's a bit of a blind spot for a lot of security organizations. They're traditionally looking at firewalls, other network devices, and that kind of thing.

We provide rich information about the user, about the data, what’s going on with the data, and what’s going on with the system on the endpoint. That’s key for detecting malware, etc. We have all this rich visibility on the endpoint and also from the network. We actually pre-correlate that. We have our own correlation rules. On the endpoint computer in real time, we're correlating stuff. All of that gets populated into ArcSight.

At the HPE Protect Show in National Harbor in September we showed the latest generation of our integration, which we're very excited about. We have a lot of ArcSight content, which helps people in the SOC leverage our data, and we gave a couple of presentations at the show on that.

Gardner: And is there a way to make this even more protected? I believe encryption could be brought to bear and it plays a role in how the SIEM can react and behave.

Seamless experience

Brown: We actually have a new partnership, related to HPE's acquisition of Voltage, which is a real leader in the e-mail security space. It’s all about applying encryption to messages and managing the keys and making that user experience very seamless and easy to use.

Adding to that, we're bundling up some of the classification functionality that we have in our network sensors. What we have is a combination of Digital Guardian network DLP and the HPE Data Security encryption solution, where an enterprise can define a whole bunch of rules based on templates.

We can say, "I need to comply with HIPAA," "I need to comply with PCI," or whatever standard it is. Digital Guardian on the network will automatically scan all the e-mail going out and automatically classify according to our rules which e-mails are sensitive and which attachments are sensitive. It then goes on to the HPE Data Security Solution where it gets encrypted automatically and then sent out.

It basically allows corporations to apply a standard set of policies -- not relying on the user to say they need to encrypt this, not leaving it to the user's judgment, but actually applying standard policies across the enterprise for all e-mail and making sure sensitive messages get encrypted. We're very excited about it.
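
To make that flow concrete, here is a hedged sketch of how a template-driven outbound-mail policy of that kind might look: classify each message against compliance templates, and hand anything sensitive to an encryption step. The template contents, function names, and transport hooks are hypothetical, not the actual Digital Guardian or HPE Data Security APIs.

import re

# Hypothetical compliance templates; real HIPAA/PCI rule sets are far richer.
TEMPLATES = {
    "PCI": [re.compile(r"\b(?:\d[ -]?){13,16}\b")],   # card-like numbers
    "HIPAA": [re.compile(r"\b(patient|diagnosis|medical record)\b", re.I)],
}

def classify_message(subject, body, attachments_text=()):
    # Return the set of templates an outgoing message matches.
    text = " ".join([subject, body, *attachments_text])
    return {name for name, patterns in TEMPLATES.items()
            if any(p.search(text) for p in patterns)}

def route_outbound(message, encrypt_and_send, send_plain):
    # Apply a standard enterprise policy: sensitive mail is always encrypted.
    matches = classify_message(message["subject"], message["body"],
                               message.get("attachments_text", ()))
    if matches:
        encrypt_and_send(message, reason=sorted(matches))
    else:
        send_plain(message)

# Example usage with stub transports.
route_outbound(
    {"subject": "Results", "body": "Patient diagnosis attached."},
    encrypt_and_send=lambda msg, reason: print("encrypt:", reason),
    send_plain=lambda msg: print("send in the clear"),
)
# -> encrypt: ['HIPAA']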

Gardner: That sounds key -- using encryption to the best of its potential, being smart about it, not just across the waterfront, and then not depending on a voluntary encryption, but doing it based on need and intelligence.

Brown: Exactly.

Gardner: For those organizations that are increasingly trying to be data-driven, intelligent, taking advantage of the technologies and doing analysis in new interesting ways, what advice might you offer in the realm of security? Clearly, we’ve heard at various conferences and other places that security is, in a sense, the killer application of big-data analytics. If you're an organization seeking to be more data-driven, how can you best use that to improve your security posture?

Brown: The key, as far as we’re concerned, is that you have to watch your data, you have to understand your data, you need to collect information, and you need visibility of your data.

The other key point is that the security market has been shifting pretty dramatically from more of a network view much more toward the endpoint. I mentioned earlier that antivirus and some of these standard technologies on the endpoint aren't really cutting it anymore. So, it’s very important that you get visibility down at the endpoint and you need to see what users are doing, you need to understand what your systems are running, and you need to understand where your data is.

So collect that, get that visibility, and then leverage that visibility with analytics and tools so that you can profit from an automated kind of intelligence.
Gardner: I'm afraid we will have to leave it there. We've been exploring how cybersecurity attacks are on the rise, but new capabilities are being brought to the edge to provide for better DLP. And we've learned how Digital Guardian uses HPE IDOL to analyze structured and unstructured data to predict and prevent loss of data and intellectual property with increased accuracy.

So please join me in thanking Marcus Brown, Vice President of Corporate Business Development for Digital Guardian in Waltham, Massachusetts.

Brown: Thank you.

Gardner: And a big thank you as well to our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer digital transformation discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored interviews. Thanks again for listening, and please come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how cybersecurity attacks are on the rise but new data capabilities bring intelligence to the edge to stifle data loss risk. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Tuesday, November 01, 2016

2016 Campaigners Look to Deep Big Data Analysis and Querying to Gain an Edge in Reaching Voters

Transcript of a discussion on how data analysis services startup BlueLabs in Washington helps presidential campaigns better know and engage with potential voters.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on business digital transformation. Stay with us now to learn how agile companies are fending off disruption in favor of innovation.

Our next case study explores how data-analysis services startup BlueLabs in Washington, D.C. helps presidential campaigns better know and engage with potential voters.

We'll learn how BlueLabs relies on analytics platforms that allow a democratization of querying, opening the value of vast big data resources to more of those who need to know.

In this example of helping organizations work smarter by leveraging innovative statistical methods and technology, we'll discover how specific types of voters can be identified and reached.

Here to describe how big data is being used creatively by contemporary political organizations for two-way voter engagement, we're joined by Erek Dyskant, Co-Founder and Vice President of Impact at BlueLabs Analytics in Washington. Welcome, Erek.
Erek Dyskant: I'm so happy to be here, thanks for having me.

Gardner: Obviously, this is a busy season for the analytics people who are focused on politics and campaigns. What are some of the trends that are different in 2016 from just four years ago? It's a fast-changing technology set, and it's also a fast-changing methodology. And of course, the trends in how voters think, react, use social media, and engage are also dynamic. So what's different this cycle?

Dyskant: From a voter-engagement perspective, in 2012, we could reach most of our voters online through a relatively small set of social media channels -- Facebook, Twitter, and a little bit on the Instagram side. Moving into 2016, we see a fragmentation of the online and offline media consumption landscape and many more folks moving toward purpose-built social media platforms.

If I'm at the HPE Conference and I want my colleagues back in D.C. to see what I'm seeing, then maybe I'll use Periscope, maybe Facebook Live, but probably Periscope. If I see something that I think one of my friends will think is really funny, I'll send that to them on Snapchat.

Where political campaigns have traditionally broadcast messages out through the news-feed style social-media strategies, now we need to consider how it is that one-to-one social media is acting as a force multiplier for our events and for the ideas of our candidates, filtered through our campaign’s champions.

Gardner: So, perhaps a way to look at that is that you're no longer focused on precincts physically and you're no longer able to use broadcast through social media. It’s much more of an influence within communities and identifying those communities in a new way through these apps, perhaps more than platforms.

Social media

Dyskant: That's exactly right. Campaigns have always organized voters at the door and on the phone. Now, we think of one more way. If you want to be a champion for a candidate, you can be a champion by knocking on doors for us, by making phone calls, or by making phone calls through online platforms.

You can also use one-to-one social media channels to let your friends know why the election matters so much to you and why they should turn out and vote, or vote for the issues that really matter to you.

Gardner: So, we're talking about retail campaigning, but it's a bit more virtual. What’s interesting though is that you can get a lot more data through the interaction than you might if you were physically knocking on someone's door.

Dyskant: The data is different. We're starting to see a shift from demographic targeting. In 2000, we were targeting on precincts. A little bit later, we were targeting on combinations of demographics, on soccer moms, on single women, on single men, on rural, urban, or suburban communities separately.

Moving to 2012, we looked at everything that we knew about a person and built individual-level predictive models, so that we knew how each person's individual set of characteristics made that person more or less likely to be someone with whom our candidate would have an engaging conversation through a volunteer.

Now, what we're starting to see is behavioral characteristics trumping demographic or even consumer data. You can put whiskey drinkers in your model, you can put cat owners in your model, but isn't it a lot more interesting to put in your model the fact that this person has an online profile on our website and this is their clickstream? Isn't it much more interesting to put into a model that this person is likely to consume media via TV, is likely to be a cord-cutter, is likely to be a social media trendsetter, is likely to view multiple channels, or to use both Facebook and media on TV?

That lets us have a really broad reach or really broad set of interested voters, rather than just creating an echo chamber where we're talking to the same voters across different platforms.

Gardner: So, over time, the analytics tools have gone from semi-blunt instruments to much more precise, and you're also able to better target what you think would be the right voter for you to get the right message out to.

One of the things you mentioned that struck me is the word "predictive." I suppose I think of campaigning as looking to influence people, and that polling then tries to predict what will happen as a result. Is there somewhat less daylight between these two than I am thinking, that being predictive and campaigning are much more closely associated, and how would that work?

Predictive modeling

Dyskant: When I think of predictive modeling, what I think of is predicting something that the campaign doesn't know. That may be something that will happen in the future or it may be something that already exists today, but that we don't have an observation for it.

In the case of polling, what I really see is that it's about understanding what issues matter the most to voters and how we can craft messages that resonate with those issues. When I think of predictive analytics, I think of how we allocate our resources to persuade and activate voters.

Over the course of elections, what we've seen is an exponential trajectory in the amount of data that is considered by predictive models. Even more important than that is an exponential growth in the use cases for models. Today, every time a predictive model is used, it's used in a million and one ways, whereas in 2012 it might have been used in 50, 20, or 100 sessions about each voter contact.

Gardner: It’s a fascinating use case to see how analytics and data can be brought to bear on the democratic process and to help you get messages out, probably in a way that's better received by the voter or the prospective voter, like in a retail or commercial environment. You don’t want to hear things that aren’t relevant to you, and when people do make an effort to provide you with information that's useful or that helps you make a decision, you benefit and you respect and even admire and enjoy it.

Dyskant: What I really want is for the voter experience to be as transparent and easy as possible, that campaigns reach out to me around the same time that I'm seeking information about who I'm going to vote for in November. I know who I'm voting for in 2016, but in some local actions, I may not have made that decision yet. So, I want a steady stream of information to be reaching voters, as they're in those key decision points, with messaging that really is relevant to their lives.

I also want to listen to what voters tell me. If a voter has a conversation with a volunteer at the door, that should inform future communications. If somebody has told me that they're definitely voting for the candidate, then the next conversation should be different from someone who says, "I work in energy. I really want to know more about the Secretary’s energy policies."

Gardner: Just as when a salesperson engages in a sales process, they use customer relationship management (CRM), and that data is captured, analyzed, and shared. That becomes a much better process for both the buyer and the seller. It's the same thing in a campaign, right? The better information you have, the more likely you're going to be able to serve that user, that voter.

Dyskant: There definitely are parallels to marketing, and that's how we at BlueLabs decided to found the company and work across industries. We work with Fortune 100 retail organizations that are interested in how, once someone buys one item, we can bring them back into the store to buy the follow-on item, or maybe to buy the follow-on item through that same store's online portal -- how we can provide relevant messaging as users engage in complex processes online. All of those things are driven from our lessons in politics.

Politics is fundamentally different from retail, though. It's a civic decision, rather than an individual-level decision. I always want to be mindful that I have a duty to voters to provide extremely relevant information to them, so that they can be engaged in the civic decision that they need to make.

Gardner: Suffice it to say that good quality comparison shopping is still good quality comparison decision-making.

Dyskant: Yes, I would agree with you.

Relevant and speedy

Gardner: Now that we've established how really relevant, important, and powerful this type of analysis can be in the context of the 2016 campaign, I'd like to learn more about how you go about getting that analysis and making it relevant and speedy across large variety of data sets and content sets. But first, let’s hear more about BlueLabs. Tell me about your company, how it started, why you started it, maybe a bit about yourself as well.

Dyskant: Of the four of us who started BlueLabs, some of us met in the 2008 elections and some of us met during the 2010 midterms working at the Democratic National Committee (DNC). Throughout that pre-2012 experience, we had the opportunity as practitioners to try a lot of things, sometimes just once or twice, sometimes things that we operationalized within those cycles.

Jumping forward to 2012, we had the opportunity to scale all of that research and development, to say that we did this one thing that was a different way of building models, and it worked in this congressional race. We decided to make this three people's full-time jobs and scale it up.

Moving past 2012, we got to build potentially one of the fastest-growing startups, one of the most data-driven organizations, and we knew that we had built a special team. We wanted to continue working together, with each other and with the folks we had worked with who made all this possible. We also wanted to apply the same types of techniques to other areas of social impact and other areas of commerce. This individual-level approach to identifying conversations is something that we found unique in the marketplace. We wanted to expand on that.
Increasingly, what we're working on is this segmentation-of-media problem. It's this idea that some people watch only TV, and you can't ignore a TV. It has lots of eyeballs. Some people watch only digital and some people consume a mix of media. How is it that you can build media plans that are aware of people's cross-channel media preferences and reach the right audience with their preferred means of communications?

Gardner: That's fascinating. You start with the rigors and demands of a political campaign, but then you can apply it in so many ways, answering -- and anticipating -- the types of questions that more verticals, more sectors, and charitable organizations would want answered. That's very cool.

Let's go back to the data science. You have this vast pool of data. You have a snappy analytics platform to work with. But one of the things that I'm interested in is how you get more people -- whether it's in your organization, a campaign like the Hillary Clinton campaign, or the DNC -- to be able to utilize that data to get to these inferences, these insights that you want.

What is it that you look for and what is it that you've been able to do in that form of getting more people able to query and utilize the data?

Dyskant: Data science happens when individuals have direct access to ask complex questions of a large, gnarly, but well-integrated data set. Say I have 30 terabytes of data across online contacts, offline contacts, and maybe a sample of clickstream data, and I want to ask things like: of all the people who went to my online platform and clicked the password reset because they couldn't remember their password, and then never followed up with the e-mail, how many of them showed up at a retail location within the next five days? They tried to engage online, and it didn't work out for them. I want to know whether we're losing them or whether they're showing up in person.
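
A question like that maps almost directly onto a query against a well-integrated warehouse. The sketch below shows one way it might look, using an in-memory SQLite database so the example runs end to end; the table and column names (web_events, store_visits, and so on) are hypothetical stand-ins, not BlueLabs' actual schema.

import sqlite3

# Hypothetical schema: web_events holds password-reset clicks and follow-ups,
# store_visits holds in-person visits, both keyed by person_id.
QUERY = """
SELECT COUNT(DISTINCT w.person_id) AS reengaged_in_store
FROM web_events w
JOIN store_visits s
  ON s.person_id = w.person_id
 AND julianday(s.visit_date) BETWEEN julianday(w.event_date)
                                 AND julianday(w.event_date) + 5
WHERE w.event_type = 'password_reset_click'
  AND NOT EXISTS (
        SELECT 1 FROM web_events f
        WHERE f.person_id = w.person_id
          AND f.event_type = 'reset_email_followup'
          AND f.event_date >= w.event_date)
"""

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE web_events (person_id INT, event_type TEXT, event_date TEXT);
CREATE TABLE store_visits (person_id INT, visit_date TEXT);
INSERT INTO web_events VALUES
  (1, 'password_reset_click', '2016-10-01'),
  (2, 'password_reset_click', '2016-10-01'),
  (2, 'reset_email_followup', '2016-10-01');
INSERT INTO store_visits VALUES (1, '2016-10-03'), (2, '2016-10-03');
""")
print(conn.execute(QUERY).fetchone()[0])  # 1 -- only person 1 gave up online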

That type of question maybe would make it into a business-intelligence (BI) report a few months from now, but people who are thinking about what we do every day will wonder about it, turn it into a query, and say, "I think I found something. If we give these customers phone calls, maybe we can reset their passwords over the phone and reengage them."

Human intensive

That's just one tiny, micro example, which is why data science is truly a human-intensive exercise. You get 50-100 people working at an enterprise solving problems like that, and what you ultimately get is a positive feedback loop of self-correcting systems. Every time there's a problem, somebody is thinking about how that problem is represented in the data, how to quantify it and, if it's significant enough, how the organization can improve in that one specific area.

All of that can be done with business logic, which is the interesting piece. You need very granular data that's accessible via query, and you need reasonably fast query times, because you can't ask questions like that if you have to go get coffee every time you run a query.

Layering in predictive modeling allows you to understand the opportunity for impact if you fix that problem. One hypothesis about those users who cannot reset their passwords is that maybe those users aren't that engaged in the first place; you fix their password, but it doesn't move the needle.

The other hypothesis is that these are people who are actively trying to engage with your service and are unsuccessful because of this one very specific barrier. If you have a model of user engagement at an individual level, you can say that these are really high-value users who are having this problem, or maybe they aren't. So you take data science, align it with really smart individual-level business analysis, and what you get is an organization that continues to improve without needing an executive-level decision for each one of those things.

Gardner: So a great deal of inquiry, experimentation, iterative improvement, and feedback loops can all come together very powerfully. I'm all for the data-scientist full-employment movement, but we need to do more than have people go through a data scientist to use, access, and develop these feedback insights. What is it about SQL, natural language, or APIs? What is it that you like to see that allows more people to directly relate to and engage with these powerful data sets?

Dyskant: One of the things is the product management of data schemas. So whenever we build an analytics database for a large-scale organization I think a lot about an analyst who is 22, knows VLOOKUP, took some statistics classes in college, and has some personal stories about the industry that they're working in. They know, "My grandmother isn't a native English speaker, and this is how she would use this website."

So it's taking that hypothesis that’s driven from personal stories, and being able to, through a relatively simple query, translate that into a database query, and find out if that hypothesis proves true at scale.

Then, they can potentially take the results of that query and dump them into a statistical-analysis language, or use database analytics, to answer the question in a more robust way. What that means is that we favor very wide schemas, because I want someone to be able to write a three-line SQL statement, no joins, that answers a business question I wouldn't have thought to put in a report. So the first line is analyst-friendly schemas that are accessed via SQL.
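
For instance, against a wide, denormalized table, the kind of question that would otherwise take several joins can be asked in three lines. The sketch below is a hypothetical illustration; the table and column names (voter_contacts_wide and so on) are invented for the example, not an actual BlueLabs schema.

# Three lines of SQL, no joins, against a hypothetical wide table.
query = """
SELECT COUNT(*) AS engaged_cord_cutters
FROM voter_contacts_wide
WHERE is_cord_cutter = 1 AND engagement_score > 0.7 AND state = 'OH'
"""
# The string would be run against the analytics database through any
# standard DB-API connection, e.g. cursor.execute(query).
print(query.strip())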

The next line is deep key performance indicators (KPIs). Once we step out of the analytics database, the consumers are in the wider organization, consuming data at a different level. I always want reporting to report on the opportunity for impact -- on whether we're reaching our most valuable customers, not on how many customers we're reaching.

"Are we reaching our most valuable customers" is much more easily addressable; you just talk to different people. Whereas, when you ask, "Are we reaching enough customers," I don’t know how find out. I can go over to the sales team and yell at them to work harder, but ultimately, I want our reporting to facilitate smarter working, which means incorporating model scores and predictive analytics into our KPIs.

Getting to the core

Gardner: Let’s step back from the edge, where we engage the analysts, to the core, where we need to provide the ability for them to do what they want and which gets them those great results.

It seems to me that when you're dealing in a campaign cycle that is very spiky, you have a short period of time where there's a need for a tremendous amount of data, but that could quickly go down between cycles of an election, or in a retail environment, be very intensive leading up to a holiday season.

Do you therefore take advantage of cloud models that make a fit-for-purpose, pay-as-you-go approach to data and analytics possible? Tell us a little bit about your strategy for the data and the analytics engine.

Dyskant: All of our customers have a cyclical nature to them. I think that almost every business is cyclical, just some more than others. Horizontal scaling is incredibly important to us. It would be very difficult for us to do what we do without using a cloud model such as Amazon Web Services (AWS).

Also, one of the things that works well for us with HPE Vertica is the licensing model, where we can add additional performance for only the cost of the hardware, or of hardware provisioned through the cloud. That allows us to scale up capacity during the busy season. We'll sometimes even scale it back down during slower periods, so that we can have those 150 analysts asking their own questions about the areas of the program that they're responsible for during busy cycles, and then, during less busy cycles, scale down the footprint of the operation.

Gardner: Is there anything else about the HPE Vertica OnDemand platform that benefits your particular need for analysis? I'm thinking about the scale and the rows. You must have so many variables when it comes to a retail situation, a commercial situation, where you're trying to really understand that consumer?

Dyskant: I do everything I can to avoid aggregation. I want my analysts to be looking at the data at the interaction-by-interaction level. If it’s a website, I want them to be looking at clickstream data. If it's a retail organization, I want them to be looking at point-of-sale data. In order to do that, we build data sets that are very frequently in the billions of rows. They're also very frequently incredibly wide, because we don't just want to know every transaction with this dollar amount. We want to know things like what the variables were, and where that store was located.

Getting back to the idea that we want our queries to be dead-simple, that means that we very frequently append additional columns on to our transaction tables. We’re okay that the table is big, because in a columnar model, we can pick out just the columns that we want for that particular query.

Then, moving into some of the in-database machine-learning algorithms allows us to perform higher-order computation within the database and do less data shipping.

Gardner: We're almost out of time, but I wanted to do some predictive analysis ourselves. Thinking about the next election cycle, midterms, only two years away, what might change between now and then? We hear so much about machine learning, bots, and advanced algorithms. How do you predict, Erek, the way that big data will come to bear on the next election cycle?

Behavioral targeting

Dyskant: I think that a big piece of the next election will be around moving even further away from demographic targeting, toward even more behavioral targeting: how we reach every voter based on what they're telling us about themselves, what matters to them, and how it matters to them. That will increasingly drive our models.

Doing that probably involves another 10X scale-up in the data, because that type of data is generally at the clickstream level, at the interaction-by-interaction level, incorporating things like Twitter feeds, which adds an additional level of complexity and computational demand to the data.

Gardner: It almost sounds like you're shooting for sentiment analysis on an issue-by-issue basis, a very complex undertaking, but it could be very powerful.

Dyskant: I think that it's heading in that direction, yes.

Gardner: I am afraid we'll have to leave it there. We've been exploring how data analysis services startup BlueLabs in Washington, DC helps presidential campaigns better know and engage with potential voters. And we've learned how organizations are working smarter by leveraging innovative statistical methods and technologies, and in this case, looking at two-way voter engagement in entirely new ways -- in this and in future election cycles.
So, please join me in thanking our guest, Erek Dyskant, Co-Founder and Vice President of Impact at BlueLabs in Washington. Thank you, Erek.

Dyskant: Thank you.

Gardner: And a big thank you as well to our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer digital transformation discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored interviews. Thanks again for listening, and please come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how data analysis services startup BlueLabs in Washington helps presidential campaigns better know and engage with potential voters. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Tuesday, October 18, 2016

How Governments Gain Economic Benefits from Inter-Public Cloud Interoperability and Standardization

Transcript of a panel discussion with members of The Open Group on the latest developments in eGovernment and cloud adoption.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect Thought Leadership Panel Discussion coming to you in conjunction with The Open Group Paris Event and Member Meeting October 24 through 27, 2016 in France.

Given that the Paris event has a focus on the latest developments in eGovernment, our panel will now explore how public-sector organizations can gain economic benefits from cloud interoperability and standardization.

As government agencies move to the public cloud computing model, the use of more than one public cloud provider can offer economic benefits through competition and choice. But are the public clouds standardized efficiently for true interoperability, and can the large government contracts in the offing for cloud providers have an impact on the level of maturity around standardization?

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host and moderator as we examine how to best procure multiple cloud services as eGovernment services at low risk and high reward.

With that, please join me now in welcoming our panel, Dr. Chris Harding, Director for Interoperability at The Open Group. Welcome, Chris.
Harding: Thank you, Dana. It's great to be in this podcast.

Gardner: We're here also with Dave Linthicum, Senior Vice President at Cloud Technology Partners. Welcome, Dave.

Linthicum: Thank you very much, Dana.

Gardner: And lastly, we're here with Andras Szakal, Vice President and Chief Technology Officer at IBM U.S. Federal. Welcome, Andras.

Szakal: Thank you for having me.

Gardner: Andras, let's start with you. I've spoken to some people in the lead-up to this discussion about the level of government-sector adoption of cloud services, especially public cloud. They tell me that it’s lagging the private sector. Is that what you're encountering, that the public sector is lagging the private sector, or is it more complicated than that?

Szakal: It's a bit more complicated than that. Born-on-the-cloud adoption in the private sector is probably much greater than in the public sector, and you have to differentiate. The industry at large, from a born-on-the-cloud point of view, is very much ahead of the public-sector government implementation of born-on-the-cloud applications.

What really drove that was innovations like the Internet of Things (IoT), gaming systems, and platforms, whereas the government environment was really more about taking existing citizen-to-government and government-to-government shared services and so on and putting them into the cloud environment.

When you're talking about public cloud, you have to be very specific about the public sector and government, because most governments have their own industry instance of their cloud. In the federal government space, they're acutely aware of the FedRAMP-certified public-cloud environments. Those can go from FedRAMP Moderate, where you can have access to the yummy goodness of the entire cloud industry, to FedRAMP High, which isolates these clouds into their own environments in order to increase the level of protection and lower the risk to the government.

So, the cloud service provider (CSP) created instances of these commercial clouds fit-for-purpose for the federal government. In that case, if we're talking about enterprise applications shifting to the cloud, we're seeing the public sector government side, at the national level, move very rapidly, compared to some of the commercial enterprises who are more leery about what the implications of that movement may be over a period of time. There isn't anybody that's mandating that they do that by law, whereas that is the case on the government side.

Attracting contracts

Gardner: Dave, it seems that if I were a public cloud provider, I couldn't think of a better customer, a better account in terms of size and longevity, than some major government agencies. What are we seeing from the cloud providers in trying to attract the government contracts and perhaps provide the level of interoperability and standardization that they require?

Linthicum: The big three -- Amazon, Google and Microsoft -- are really making an effort to get into that market. They all have federal sides to their house. People are selling into that space right now, and I think that they're seeing some progress. The FAA and certainly the DoD have been moving in that direction.

However, they do realize that they have to build a net new infrastructure, a net new way of doing procurement to get into that space. In the case where the US is building the world’s biggest private cloud at the CIA, they've had to change their technology around the needs of the government.

They see it as really the "Fortune 1." They see it as the largest opportunity that’s there, and they're willing to make huge investments in the billions of dollars to capture that market when it arrives.

Gardner: It seems to me, Chris, that we might be facing a situation where we have cloud providers offering a set of services to large government organizations, but perhaps a different set to the private sector. From an interoperability and standardization perspective, that doesn’t make much sense to me.

What’s your perspective on how public cloud services and standardization are shaping up? Where did you expect things to be at this point?

Harding: The government has an additional dimension beyond the private sector when it comes to procurement, in terms of the need to be transparent and to spend the money that's entrusted to them by the public in a wise manner. One of the issues they have with a lack of standardization is that it makes it more difficult for them to show that they're visibly getting the best deals for the taxpayers when they come to procure cloud services.

In fact, The Open Group produced a guide to cloud computing for business a couple of years ago. One of the things that we argued in that was that, when procuring cloud services, the enterprise should model the use that it intends to make of the cloud services and therefore be able to understand the costs that they were likely to incur. This is perhaps more important for government, even more than it is for private enterprises. And you're right, the lack of standardization makes it more difficult for them to do this.

Gardner: Chris, do you think that interoperability is of a higher order of demand in public-sector cloud acquisition than in the private sector, or should there be any differentiation?

Need for interoperability

Harding: Both really have the need for interoperability. The public sector perhaps has a greater need, simply because it’s bigger than a small enterprise and it’s therefore more likely to want to use more cloud services in combination.

Gardner: We've certainly seen a lot of open-source platforms emerge in private cloud as well as hybrid cloud. Is that a driving force yet in the way that the public sector is looking at public cloud services acquisition? Is open source a guide to what we should expect in terms of interoperability and standardization in public-cloud services for eGovernment?

Szakal: Open source, from an application implementation point of view, is one of the questions you're asking, but are you also suggesting that somehow these cloud platforms will be reconsidered or implemented via open source? There's truth to both of those statements.

IBM is the number two cloud provider in the federal government space, if you look at hybrid and the commercial cloud for which we provide three major cloud environments. All of those cloud implementations are based on open source -- OpenStack and Cloud Foundry are key pieces of this -- as well as the entire DevOps lifecycle.

So, open source is important, but if you think of open source as a way to ensure interoperability, kind of what we call in The Open Group environment "Executable Standards," it is a way to ensure interoperability.

That’s more important at the cloud-stack level than it is between cloud providers, because between cloud providers you're really going to be talking about API-driven interoperability, and we have that down pretty well.

So, the economy of APIs and the creation of this composite services are going to be very, very important elements. If they're closed and not open to following the normal RESTful approaches defined by the W3C and other industry consortia, then it’s going to be difficult to create these composite clouds.

Gardner: We saw that OpenStack had its origins in a government agency, NASA. In that case, clearly a government organization, at least in the United States, was driving the desire for interoperability and standardization, a common platform approach. Has that been successful, Dave? Why wouldn’t the government continue to try to take that approach of a common, open-source platform for cloud interoperability?

Linthicum: OpenStack has had some fair success, but I wouldn’t call it excellent success. One of the issues is that the government left it dangling out there, and while using some aspects of it, I really expected them to make some more adoption around that open standard, for lots of reasons.

So, they have to hack the operating systems and meet very specific needs around security, governance, compliance, and things like that. They have special use cases, such as the DoD, weapons control systems in real time, and some IoT stuff that the government would like to move into. So, that’s out there as an opportunity.

In other words, the ability to work with some of the distros out there, and there are dozens of them, and get into a special government version of that operating system, which is supported openly by the government integrators and providers, is something they really should take advantage of. It hasn’t happened so far and it’s a bit disappointing.

Insight into Europe

Gardner: Do any of you have any insight into Europe and some of the government agencies there? They haven’t been shy in the past about mandating certain practices when it comes to public contracts for acquisition of IT services. I think cloud should follow the same path. Is there a big difference in what’s going on in Europe and in North America?

Szakal: I just got off the phone a few minutes ago with my counterpart in the UK. The nice thing about the way the UK government is approaching cloud computing is that they're trying to do so by taking the handcuffs off the vendors and making sure that they are standards-based. They're meeting a certain quality of services for them, but they're not mandating through policy and by law the structure of their cloud. So, it allows for us, at least within IBM, to take advantage of this incredible industry ecosystem you have on the commercial side, without having to consider that you might have to lift and shift all of this very expensive infrastructure over to these industry clouds.

The EU is, in similar ways, following a similar practice. Obviously, data sovereignty is really an important element for most governments. So, you see a lot of focus on data sovereignty and data portability, more so than we do around strict requirements in following a particular set of security controls or standards that would lock you in and make it more difficult for you to evolve over a period of time.
Gardner: Chris Harding, to Andras’ point about data interoperability, do you see that as a point on the arrow that perhaps other cloud interoperability standards would follow? Is that something that you're focused on more specifically than more general cloud infrastructure services?

Harding: Cloud is a huge spectrum, from the infrastructure services at the bottom, up to the business services, the application services, and software as a service (SaaS), and data interoperability sits on top of that stack.

I'm not sure that we're ready to get real data interoperability yet, but the work that's being done on trying to establish common frameworks for understanding data, for interpreting data, is very important as a basis for gaining interoperability at that level in the future.

We also need to bear in mind that the nature of data is changing. It's no longer the case that all data comes from a SQL database. Data is represented in all sorts of ways, including human forms such as text and speech, and interpreting those is becoming more possible and more important.

This is the exciting area, where you see the most interesting work on interoperability.
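
As a small illustration of that point, the Python sketch below normalizes rows from a relational database and free text, such as transcribed speech, into one common record shape so a downstream service can treat them uniformly. The table and field names are hypothetical.

    import sqlite3

    def records_from_sql(db_path):
        """Pull structured rows from a relational store and tag their source."""
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute("SELECT id, body FROM documents").fetchall()
        return [{"source": "sql", "id": row[0], "text": row[1]} for row in rows]

    def record_from_text(doc_id, raw_text):
        """Wrap unstructured text (for example, transcribed speech) in the same shape."""
        return {"source": "text", "id": doc_id, "text": raw_text.strip()}

    # Both kinds of data can now flow through the same interpretation pipeline.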

Gardner: Dave Linthicum, one of the things that some of us who have been proponents of cloud for a number of years have looked to is the opportunity to get something that couldn't have been done before: a whole greater than the sum of its parts.

It seems to me that if you have a common cloud fabric, with sufficient interoperability for data, applications, and infrastructure services, and it cuts across both the public and the private sector, then many long-standing difficulties could be addressed: payer-provider interoperability and communication in health insurance, the sharing of government services, and the sharing of data with the private sector. Many of the things that have probably been blamed on bureaucracy and technical backwardness could, in some ways, be solved if a common public cloud approach were adopted by the major public cloud providers. It seems to me that a very significant benefit could be drawn when the public and private sectors have a commonality that owning your own data centers, as in the past, just couldn't provide.

Am I chewing on too much pie in the sky here, Dave, or is there actually something to be said about the cloud model, not just between government to government agencies, but the public and private sectors?

Getting more savvy

Linthicum: The big public-cloud providers are getting more savvy about providing interoperability, because they realize that it's going to be multi-cloud: different private and public cloud instances and different kinds of technologies, and you have to work and play well with all of them.

However, to be a little bit more skeptical, over the years, I've found out that they're in it for their own selfish interests, and they should be, because they're corporations. They're going to basically try to play up their technology to get into a market and hold on to the market, and by doing that, they typically operate against interoperability. They want to make it as difficult as possible to integrate with the competitors and leverage their competitors’ services.

So, we have that kind of dynamic going on, and it's incredibly frustrating, because we can certainly stand up, have the discussion, and lay out the concepts. You just did a really good job of describing that Nirvana, and we should start moving in that direction. You will typically get lots of head-nodding from the public-cloud providers and the private-cloud providers, but actions speak louder than words, and thus far it's been very counterproductive.

Interoperability is occurring, but it's coming in dribs and drabs, and nothing holistic.

Gardner: Chris, it seems as if the earlier you try to instill interoperability and standardization, in both technical and methodological terms, the better you can carry that into the future, so that we don't just repave cow paths by replacing highly non-interoperable data centers with the same thing in the cloud rather than in some building that you control.

What do you think is going to be part of the discussion at The Open Group Paris Event, October 24, around some of these concepts of eGovernment? Shouldn’t they be talking about trying to make interoperability something that's in place from the start, rather than something that has to be imposed later in the process?

Harding: Certainly this will be an important topic at the forthcoming Paris event. My personal view is that the question of when you should standardize something to gain interoperability is a very difficult balancing act. If you do it too late, then you just get a mess of things that don’t interoperate, but equally, if you try to introduce standards before the market is ready for them, you generally end up with something that doesn’t work, and you get a mess for a different reason.

Part of the value of industry events, such as The Open Group events, is for people in different roles and different organizations to be able to discuss with each other and get a feel for the state of maturity and the directions in which it's possible to create a standard that will stick. We're seeing a standard paradigm, the API paradigm, that was mentioned earlier, and we need to start building more specific standards on top of it. Certainly in Paris, and at future Open Group events, those are the things we'll be discussing.

Gardner: Andras, you wear a couple of different hats. One is Chief Technology Officer at IBM US Federal, but you're also very much involved with The Open Group; I think you're on the Board of Directors. How do you see the progression of what The Open Group has been able to do in other spheres around standardization, both methodological, such as the enterprise architecture framework TOGAF®, an Open Group standard, and in the implementation and enforcement of standards? Is what The Open Group has done in the past something you expect to be applicable to these cloud issues?

Szakal: IBM has a unique history, being one of the only companies in the technology arena that is over 100 years old. It has been able to retain great value for its customers over that long period of time, and we shifted from a fairly closed computing environment to this idea of open interoperability and freedom of choice.

That's our approach for our cloud environment as well. What drives us in this direction is because our customers require it from IBM, and we're a common infrastructure and a glue that binds together many of our enterprise and the largest financial banking and healthcare institutions in the world to ensure that they can interoperate with other vendors.

As such, we were one of the founders of The Open Group, which has been at the forefront of helping facilitate this discussion about open interoperability. I'm totally with Chris as to when you would approach that. As I said before, my focus is on interoperating at the service level in the economy of APIs. That involves more than just the API itself: the ability to effectively manage credentials and security, and other common services, such as being able to manage object stores in the place where you want to store your information, so that data sovereignty isn't an issue. These are all things that will occur over a period of time.
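
A rough sketch of that data-sovereignty point, in Python: a thin, provider-neutral layer picks which object store to use based on where the data is allowed to reside. The provider names and regions are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ObjectStore:
        provider: str
        region: str

    # Hypothetical catalog of object stores offered by different cloud providers.
    STORES = [
        ObjectStore("provider_a", "eu-west"),
        ObjectStore("provider_b", "us-east"),
    ]

    def select_store(required_region):
        """Return an object store that satisfies the data-residency requirement."""
        for store in STORES:
            if store.region == required_region:
                return store
        raise ValueError("No compliant object store in region " + required_region)

    # Example: EU customer data must stay in an EU region.
    eu_store = select_store("eu-west")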

Early days

It's early, heady days in the cloud world, and we're going to see all of that goodness come to pass as we go forward. In reality, we talk about cloud as if it's a thing, but its true value isn't so much in the technology as in creating new disruptive business capabilities and business models. Openness of the cloud alone doesn't guarantee the creation of those new business models.

That's where we need to focus: are we able to actually drive these new collaborative models with our cloud capabilities? You're going to be interoperating with many CSPs, not just two, three, or four, especially as you see different factors grow into the cloud. It won't matter where they operate their cloud services from; it will matter how they actually interoperate at the API level.

Gardner: It certainly seems to me that interoperability is the killer application of the cloud. It can really foster greater interdepartmental collaboration and synergy: government to government, state to federal, and across the EU, for example, and also with the private sector, where healthcare, monetary, banking, and finance concerns are all deeply entrenched in both the public and private sectors. So, we hope that's where the openness leads.

Chris, before we wrap up, it seems to me that there's a precedent that has been set successfully with The Open Group when it comes to security. We've been able to do some pretty good work over the past several years on cloud security through the adoption of standards around encryption and tokenization, for example. Doesn't that give us a path to greater interoperability at other levels of cloud services? Is security a harbinger of things to come?

Harding: Security certainly is a key aspect that needs to be incorporated in the standards we build on the API paradigm. But some people talk about the move to digital transformation, the digital enterprise. Cloud and other things, like IoT and big-data analysis, are all coming together, and a key underpinning requirement for that is platform integration. That's where the Open Platform 3.0™ Forum of The Open Group is focusing: on the possibilities for platform interoperability to enable digital platform integration. Security is a key aspect of that, but there are other aspects too.
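
Tokenization, one of the techniques just mentioned, can be sketched in a few lines of Python: a sensitive value is swapped for a random token, and the mapping is kept in a separate vault so the real value never travels with the data. This is only an illustrative sketch, not any specific standard's implementation.

    import secrets

    class TokenVault:
        """Illustrative token vault; a real one would be a hardened, audited service."""

        def __init__(self):
            self._vault = {}  # token -> original sensitive value

        def tokenize(self, sensitive_value):
            token = secrets.token_urlsafe(16)
            self._vault[token] = sensitive_value
            return token

        def detokenize(self, token):
            return self._vault[token]

    vault = TokenVault()
    token = vault.tokenize("4111-1111-1111-1111")  # e.g., a payment card number
    assert vault.detokenize(token) == "4111-1111-1111-1111"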

Gardner: I am afraid we will have to leave it there. We've been discussing the latest developments in eGovernment and cloud adoption with a panel of experts. Our focus on these issues comes in conjunction with The Open Group Paris Event and Member Meeting, October 24-27, 2016 in Paris, France, and there is still time to register.

So please check out The Open Group website at www.opengroup.org for more information on that event, and many others coming in the future.

With that, I'd like to thank our guests: Dr. Chris Harding, Director for Interoperability at The Open Group; David Linthicum, Senior Vice President at Cloud Technology Partners; and Andras Szakal, Vice President and Chief Technology Officer at IBM US Federal.
And a big thank you as well to The Open Group for sponsoring this discussion, and lastly, thank you to our audience for joining us on this BriefingsDirect panel discussion. This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: The Open Group.

Transcript of a panel discussion with members of The Open Group on the latest developments in eGovernment and cloud adoption. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2016. All rights reserved.
