Friday, February 15, 2013

Big Data Success Depends on Better Risk Management Practices Like FAIR, Say The Open Group Panelists

Transcript of a BriefingsDirect podcast on best managing the risks from expanded use and distribution of big data enterprise assets.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with The Open Group Conference on January 28 in Newport Beach, California.

Gardner
I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host and moderator throughout these business transformation discussions. The conference itself is focusing on "big data -- the transformation we need to embrace today."

We're here now with a panel of experts to explore new trends and solutions in the area of risk management and analysis. We'll learn how large enterprises are delivering risk assessments and risk analysis, and we'll see how big data can be both an area to protect, but also used as a tool for better understanding and mitigating risks.

With that, please join me in welcoming our panel, Jack Freund, PhD, the Information Security Risk Assessment Manager at TIAA-CREF. Welcome, Jack.

Jack Freund: Hello Dana, how are you?

Gardner: I'm great. Glad you could join us.

We are also here with Jack Jones, Principal of CXOWARE. He has more than nine years' experience as a Chief Information Security Officer (CISO), and is the inventor of the Factor Analysis of Information Risk (FAIR) framework. Welcome, Jack.

Jack Jones: Thank you.

And we're also here with Jim Hietala, Vice President, Security for The Open Group. Welcome, Jim.

Jim Hietala: Thanks, Dana.

Gardner: Why is the issue of risk analysis so prominent now? What's different from, say, five years ago?

Jones: The information security industry has struggled with getting the attention of and support from management and businesses for a long time, and it has finally come around to the fact that the executives care about loss exposure -- the likelihood of bad things happening and how bad those things are likely to be.

It's only when we speak in those terms of risk that we make sense to those executives. And once we do that, we begin to gain some credibility and traction in terms of getting things done.

Gardner: So we really need to talk about this in the terms that a business executive would appreciate, not necessarily an IT executive.

Effects on business

Jones: Absolutely. They're tired of hearing about vulnerabilities, hackers, and that sort of thing. It’s only when we can talk in terms of the effect on the business that it makes sense to them.

Gardner: Jack Freund, I should also point out that you have more than 14 years of enterprise IT experience. You're a visiting professor at DeVry University and you chair a risk-management subcommittee for ISACA. Do you agree?

Freund: The problem that we have as a profession, and I think it's a big problem, is that we have allowed ourselves to escape the natural evolution that other IT professionals have already gone through.

Freund
There was a time, years ago, when you could code in the basement, and nobody cared much about what you were doing. But now, largely speaking, developers and systems administrators are very focused on meeting the goals of the organization.

Security has been allowed to miss that boat a little. We have been allowed to hide behind this aura of a protector and an alerter of terrible things that could happen, without really tying ourselves to the problems that organizations are facing and how we can help them succeed in what they're doing.

Gardner: Jim Hietala, how do you see things that are different now than a few years ago when it comes to risk assessment?

Hietala: There are certainly changes on the threat side of the landscape. Five years ago, you didn't really have hacktivism or this notion of an advanced persistent threat (APT). That highly skilled attacker taking aim at governments and large organizations didn't really exist -- or didn't exist to the degree it does today. So that has changed.

Hietala
You also have big changes to the IT platform landscape, all of which bring new risks that organizations need to really think about. The mobility trend, the cloud trend, the big-data trend that we are talking about today, all of those things bring new risk to the organization.

As Jack Jones mentioned, business executives don't want to hear about, "I've got 15 vulnerabilities in the mobility part of my organization." They want to understand what’s the risk of bad things happening because of mobility, what we're doing about it, and what’s happening to risk over time.

So it's a combination of changes in the threats and attackers, as well as changes to the IT landscape, that requires us to take a different look at how we measure and present risk to the business.

Gardner: Because we're at a big-data conference, do you share my perception, Jack Jones, that big data can be a source of risk and vulnerability, but also the analytics and the business intelligence (BI) tools that we're employing with big data can be used to alert you to risks or provide a strong tool for better understanding your true risk setting or environment?

Crown jewels

Jones: You are absolutely right. You think of big data and, by definition, it's where your crown jewels, and everything that leads to those crown jewels from an information perspective, are going to be found. It's like one-stop shopping for the bad guy, if you want to look at it in that context. It definitely needs to be protected. The architecture surrounding it, and its integration across a lot of different platforms, can be leveraged by an attacker and will probably result in a complex landscape to try to secure.

Jones
There are a lot of ways into that data. But if you can leverage that same big data architecture as an approach to information security, with log data and other threat and vulnerability data, you should be able to make some significant gains in terms of how well-informed your analyses and your decisions are.

Gardner: Jack Freund, do you share that? How does big data fit into your understanding of the evolving arena of risk assessment and analysis?

Freund: If we fast-forward five years, and this is even true today, a lot of people on the cutting edge of big data will tell you that the problem isn't so much pulling everything together and figuring out what it can do. They are going to tell you that the problem is what we do once we figure out everything that we have. This is the problem that we have traditionally had on a much smaller scale in information security. When everything is important, nothing is important.

Gardner: To follow up on that, where do you see the gaps in risk analysis in large organizations? In other words, what parts of organizations aren’t being assessed for risk and should be?

Freund: The big problem that exists largely today, in the way that risk assessments are done, is the focus on labels. We want to quickly address the low, medium, and high things and know where they are. But there are inherent problems in the way that we think about those labels, without doing any of the analysis legwork.

I think what's really missing is that true analysis. If the system goes offline, do we lose money? If the system becomes compromised, what are the cost-accounting consequences that allow us to figure out how much money we're going to lose?

That analysis work is largely missing. That's the gap. The prevailing mindset is that if a control is not in place, then there's a risk that must be addressed in some fashion. So we end up with these very long lists of horrible, terrible things that can be done to us in all sorts of different ways, without any relevance to the overall business of the organization.

Every day, our organizations are out there selling products and offering services, which is, in and of itself, its own risky venture. So tying what we do from an information security perspective to that is critical, not just for the success of the organization, but for the success of our profession.

Gardner: So we can safely say that large companies are probably pretty good at cost-benefit analysis or they wouldn't be successful. Now, I guess we need to ask them to take that a step further and do a cost-risk analysis, but in business terms, being mindful that their IT systems might be a much larger part of that than they had once considered. Is that fair, Jack?

Risk implications

Jones: Businesses have been making these decisions, chasing the opportunity, but generally, without any clear understanding of the risk implications, at least from the information security perspective. They will have us in the corner screaming and throwing red flags in there, and talking about vulnerabilities and threats from one thing or another.

But, we come to the table with red, yellow, and green indicators, and on the other side of the table, they’ve got numbers. Well, here is what we expect to earn in revenue from this initiative, and the information security people are saying it’s crazy. How do you normalize the quantitative revenue gain versus red, yellow, and green?

Gardner: Jim Hietala, do you see it in the same red, yellow, green or are there some other frameworks or standard methodologies that The Open Group is looking at to make this a bit more of a science?

Hietala: Probably four years ago, we published what we call the Risk Taxonomy Standard, which is based on FAIR, the risk analysis framework that Jack Jones invented. So we're big believers in bringing that level of precision to doing risk analysis. Having just gone through training for FAIR myself, as part of the standards effort that we're doing around certification, I can say that it really brings a level of precision and a depth of analysis that have frequently been lacking in IT security and risk management.

Gardner: We’ve talked about how organizations need to be mindful that their risks are higher and different than in the past and we’ve talked about how standardization and methodologies are important, helping them better understand this from a business perspective, instead of just a technology perspective.

But I'm curious about the cultural and organizational perspective. Whose job should this fall under? Who wears the white hat in the company and can rally the forces of good and get all the bad things managed? Is this a single person's job, a cultural mission, an organizational mission? How do you make this work in the enterprise in a real-world way?

Freund: The profession of IT risk management is changing. That profession will have to sit between the business and information security, inclusive of all the other IT functions that make that happen.

In order to be successful sitting between these two groups, you have to be able to speak the language of both of those groups. You have to be able to understand profit and loss and capital expenditure on the business side. On the IT risk side, you have to be technical enough to do all those sorts of things.

But I think the sum total of those two things is probably only about 50 percent of the job of IT risk management today. The other 50 percent is communication. Finding ways to translate that language and to understand the needs and concerns of each side of that relationship is really the job of IT risk management.

To answer your question, I think it’s absolutely the job of IT risk management to do that. From my own experiences with the FAIR framework, I can say that using FAIR is the Rosetta Stone for speaking between those two groups.

Necessary tools

It gives you the tools necessary to speak in the insurance and risk terms that the business appreciates. And it gives you the ability to be as technical and as nerdy, if you will, as you need to be in order to talk to IT security and the other IT functions, so that everybody is on the same page and everyone feels like their concerns are represented in the risk-assessment functions that are happening.

Jones: I agree with what Jack said wholeheartedly. I would add, though, that integration or adoption of something like this is a lot easier the higher up in the organization you go.

Traditionally, CFOs' necks are most clearly on the line for risk-related issues within most organizations. At least in my experience, if you get their ear on this and present the information security analyses to them, they jump on board, they drive it through the organization, and it's just brain-dead easy.

If you try to drive it up through the ranks, maybe you get an enthusiastic supporter in the information security organization, especially if it's below the CISO level, and they try a grassroots sort of effort to bring it in, it's a tougher thing. It can still work. I've seen it work very well, but, it's a longer row to hoe.

Gardner: There has been a lot of research, and there have been many studies and surveys, on data breaches. What are some of the best sources, and maybe the not-so-good sources, for actually measuring this? How do you know if you're doing it right? How do you know if you're moving from yellow to green, instead of to red?

Freund: There are a couple of things in that question. The first is there's this inherent assumption in a lot of organizations that we need to move from yellow to green, and that may not be the case. So, becoming very knowledgeable about the risk posture and the risk tolerance of the organization is a key.

That's part of the official mindset of IT security. When you graduate an information security person today, they are minted knowing that there are a lot of bad things out there, and their goal in life is to reduce them. But that may not be the case. The case may very well be that things are okay now, and we have bigger fish to fry over here that we're going to focus on. So that's one thing.

The second thing, and it's a very good question, is how we know that we're getting better. How do we trend that over time? Overall, measuring that value for the organization has to show a reduction of risk, or at least a reduction of risk to within the risk-tolerance levels of the organization.

Calculating and understanding that requires something that I always phrase as we have to become comfortable with uncertainty. When you are talking about risk in general, you're talking about forward-looking statements about things that may or may not happen. So, becoming comfortable with the fact that they may or may not happen means that when you measure them today, you have to be willing to be a little bit squishy in how you’re representing that.

In FAIR and in other academic works, they talk about using ranges to do that. So things like high, medium, and low could be represented in terms of a minimum, maximum, and most likely. And that tends to be very, very effective. People can respond to that fairly well.
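To make that concrete, here is a minimal sketch, in Python, of how a high/medium/low label might instead be expressed as a (minimum, most likely, maximum) range and sampled. The factor names, dollar figures, and the use of a triangular distribution are illustrative assumptions for this sketch, not values or prescriptions from FAIR or from this discussion.

```python
# A minimal sketch of range-based risk estimates; all names and numbers
# below are illustrative assumptions, not an official FAIR calculation.
import random

# Calibrated estimates expressed as (minimum, most likely, maximum)
loss_event_frequency = (0.1, 0.5, 2.0)          # events per year
loss_magnitude = (50_000, 250_000, 1_200_000)   # dollars per event

def sample(estimate):
    """Draw one value from a triangular distribution over (min, most likely, max)."""
    low, mode, high = estimate
    return random.triangular(low, high, mode)

# One simulated year of loss exposure: frequency times magnitude
annual_loss = sample(loss_event_frequency) * sample(loss_magnitude)
print(f"One simulated year of loss exposure: ${annual_loss:,.0f}")
```

The point of the ranges is simply that a decision maker sees a defensible spread of outcomes instead of a single color-coded label.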

Gathering data

Jones: With regard to the data sources, there are a lot of people out there doing these sorts of studies, gathering data. The problem that's hamstringing that effort is the lack of a common set of definitions, nomenclature, and even taxonomy around the problem itself.

You will have one study that has defined threat, vulnerability, or whatever differently from some other study, and so the data can't be normalized. It really harms the utility of it. I see data out there and I think, "That looks like it could be really useful." But I hesitate to use it, because they don't publish their definitions, their approach, and how they went after it.

There's just so much superficial thinking in the profession on this. Once you dig under the covers, too often I run into stuff that just can't be defended. It doesn't make sense, and therefore the data can't be used. It's an unfortunate situation.

I do think we're heading in a positive direction. FAIR can provide a normalizing structure for that sort of thing. The VERIS framework, which, by the way, is also derived in part from FAIR, has also gained real traction in terms of the quality of the research they have done and the data they're generating. We're headed in the right direction, but we've got a long way to go.

Gardner: Jim Hietala, we’re seemingly looking at this on a company-by-company basis. But, is there a vertical industry slice or industry-wide slice where we could look at what's happening to everyone and put some standard understanding, or measurement around what's going on in the overall market, maybe by region, maybe by country?

Hietala: There are some industry-specific initiatives and what's really needed, as Jack Jones mentioned, are common definitions for things like breach, exposure, loss, all those, so that the data sources from one organization can be used in another, and so forth. I think about the financial services industry. I know that there is some information sharing through an organization called the FS-ISAC about what's happening to financial services organizations in terms of attacks, loss, and those sorts of things.

There's an opportunity for that on a vertical-by-vertical basis. But, like Jack said, there is a long way to go on that. Some industries, healthcare for instance, are so far from that, it's ridiculous. In the US here, the HIPAA security rule says you must do a risk assessment. So hospitals will do an annual risk assessment, stick the binder on the shelf, and not think much about information security in between those annual risk assessments. That's a generalization, but various industries are at different places on a continuum of maturity in their risk management approaches.

Gardner: As we get better with having a common understanding of the terms and the measurements and we share more data, let's go back to this notion of how to communicate this effectively to those people that can use it and exercise change management as a result. That could be the CFO, the CEO, what have you, depending on the organization.

Do you have any examples? Can we look to an organization that's done this right, examine their practices, the way they've communicated it, and some of the tools they've used, and say, "Aha, they're headed in the right direction; maybe we could follow a little bit"? Let's start with you, Jack Freund.

Freund: I have worked and consulted for various organizations that have done risk management at different levels. The ones that have embraced FAIR tend to be the ones that overall feel that risk is an integral part of their business strategy. And I can give a couple of examples of scenarios that have played out that I think have been successful in the way they have been communicated.

Coming to terms

The key thing to keep in mind is that, as a security professional, you're trained to feel like you need results. But the results for the IT risk management professional are different. The results are, "I've communicated this effectively, so I am done." And then whatever the results are, are the results that need to be. And that's a really hard thing to come to terms with.

I've been involved in large-scale efforts to assess risk for a cloud venture. We needed to move virtually every confidential record that we have to the cloud in order to be competitive with the rest of our industry. If our competitors are finding ways to utilize the cloud before us, we can lose out. So, we need to find a way to do that, and to be secure and compliant with all the laws and regulations and such.

Through that scenario, one of the things that came out was that key ownership became really, really important. We had the opportunity to look at the various control structures, and we analyzed them using FAIR. What we ended up with was sort of a long-tail risk. Most people will probably do their job right over a long enough period of time. But over that same long period of time, the odds of somebody making a mistake not in your favor are fairly high, yet not so significant that you can't make the move.

But, the problem became that the loss side, the side that typically gets ignored with traditional risk-assessment methodologies, was so significant that the organization needed to make some judgment around that, and they needed to have a sense of what we needed to do in order to minimize that.

That became a big point of discussion for us, and it drove the conversation away from "bad things could happen." We didn't bury the lead. The lead was that this is the most important thing to this organization in this particular scenario.

So, let's talk about things we can do. Are we comfortable with it? Do we need to make any sort of changes? What are some control opportunities? How much do they cost? This is a significantly more productive conversation than just, "Here is a bunch of bad things that happen. I'm going to cross my arms and say no."

Gardner: Jack Jones, examples at work?

Jones: In an organization that I've been working with recently, their board of directors said they wanted a quantitative view of information security risk. They just weren’t happy with the red, yellow, green. So, they came to us, and there were really two things that drove them there. One was that they were looking at cyber insurance. They wanted to know how much cyber insurance they should take out, and how do you figure that out when you've got a red, yellow, green scale?

They were able to do a series of analyses on a population of the scenarios that they thought were relevant in their world, get an aggregate view of their annualized loss exposure, and make a better informed decision about that particular problem.
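As a rough illustration of that kind of aggregate view, the Python sketch below sums simulated annual losses across a handful of scenarios and reads a high percentile as one input to an insurance-limit decision. The scenario names, ranges, percentile choice, and triangular sampling are all illustrative assumptions; this is not the organization's actual analysis or a FAIR reference tool.

```python
# A hedged sketch of aggregating loss exposure across scenarios; every
# scenario name and number here is an illustrative assumption.
import random

# Each scenario: (frequency range, magnitude range), both as (min, most likely, max)
scenarios = {
    "lost laptop with customer data": ((0.5, 2.0, 6.0), (10_000, 80_000, 400_000)),
    "web application breach": ((0.05, 0.2, 1.0), (200_000, 1_000_000, 8_000_000)),
}

def tri(low, mode, high):
    """Sample a triangular distribution defined by (min, most likely, max)."""
    return random.triangular(low, high, mode)

def simulate_total_annual_loss(scenarios, trials=10_000):
    """For each simulated year, sum the losses produced by every scenario."""
    totals = []
    for _ in range(trials):
        year_total = sum(
            tri(*freq) * tri(*magnitude) for freq, magnitude in scenarios.values()
        )
        totals.append(year_total)
    return sorted(totals)

totals = simulate_total_annual_loss(scenarios)
print(f"Median annual loss exposure: ${totals[len(totals) // 2]:,.0f}")
print(f"95th percentile (one input to an insurance-limit decision): "
      f"${totals[int(len(totals) * 0.95)]:,.0f}")
```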

Gardner: I'm curious how prevalent cyber insurance is. Is that going to have a leveling effect in the industry, where people speak a common language, the equivalent of actuarial tables, but for enterprise security and cyber security?

Jones: One would dream and hope, but at this point, what I've seen out there in terms of the basis on which insurance companies are setting their premiums is essentially the same old "risk assessment" stuff that the industry has been doing poorly for years. It's not based on data or any real analysis per se, at least in what I've run into. What they do is set their premiums high to buffer themselves and typically cover as few things as possible. The question of how much value that provides the customer becomes a problem.

Looking to the future

Gardner: We're coming up on our time limit, so let's quickly look to the future. Is there such a thing as risk management as a service? Can we outsource this? Is there a way in which moving more of IT into cloud or hybrid models would mitigate risk, because the cloud provider would standardize? Then many players in that environment, those buying those services, would be under that same umbrella. Let's start with you, Jim Hietala. What's the future of this, and what do the cloud trends bring to the table?

Hietala: I'd start with a maxim that comes out of the financial services industry, which is that you can outsource the function, but you still own the risk. That's an unfortunate reality. You can throw things out in the cloud, but it doesn't absolve you from understanding your risk and then doing things to manage it or transfer it, whether through insurance or whatever the case may be.

That's just a reality. Organizations in the risky world we live in are going to have to get more serious about doing effective risk analysis. From The Open Group standpoint, we see this as an opportunity area.

As I mentioned, we've standardized the taxonomy piece of the Factor Analysis of Information Risk (FAIR) framework. And we really see an opportunity around the profession going forward to help the risk-analysis community by further standardizing FAIR and launching a certification program for a FAIR-certified risk analyst. That's in demand from large organizations that are looking for evidence that people understand how to apply FAIR and use it in doing risk analyses.

Gardner: Jack Freund, looking into your crystal ball, how do you see this discipline evolving?

Freund: I always try to consider things as they exist within other systems. Risk is a system of systems. There are a series of pressures that are applied, and a series of levers that are thrown in order to release that sort of pressure.

Risk will always be owned by the organization that is offering that service. If we decide at some point that we can move to the cloud and all these other things, we need to look to the legal system. There is a series of pressures that it is going to apply, and that will shape who owns that risk and how it plays itself out.

If we look to the Europeans and the way that they're managing risk and compliance, they're still as strict as we in the United States think they may be about things, but there's still a lot of leeway in the way that many laws are written. You're still being asked to do things that are reasonable. You're still being asked to do things that are standard for your industry. But we'd still like the ability to know what that is, and I don't think that's going to go away anytime soon.

Judgment calls

We're still going to have to make judgment calls. We're still going to have to do 100 things with a budget for 10 things. Whenever that happens, you have to make a judgment call: what's the most important thing that I care about? And that's why risk management exists, because there's a certain series of things that we have to deal with, and we don't have the resources to do them all. I don't think that's going to change over time. Regardless of whether the landscape changes, that's the one thing that remains true.

Gardner: It sounds as if we’re continuing down the path of being mostly reactive. Is there anything you can see on the horizon that would perhaps tip the scales, so that the risk management and analysis practitioners can really become proactive and head things off before they become a big problem?

Jones: If we were to take a snapshot at any given point in time of an organization’s loss exposure, how much risk they have right then, that's a lagging indicator of the decisions they’ve made in the past, and their ability to execute against those decisions.

We can do some great root-cause analysis around that and ask how we got there. But we can also turn that coin around and ask how good we are at making well-informed decisions and executing against them, and then ask what that implies from a risk perspective downstream.

If we understand the relationship between our current state and past and future states, and we have those linkages defined, especially if we have an analytic framework underneath it, we can do some marvelous what-if analysis.

What if this variable changed in our landscape? Let's run a few thousand Monte Carlo simulations against that and see what comes up. What does that look like? Well, then let's change this other variable and see which combination of dials, when we turn them, makes us most robust to change in our landscape.
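As a small sketch of that kind of what-if exercise, the Python below reruns a Monte Carlo loss simulation with one factor changed and compares the results. The hypothetical control, the ranges, and the triangular sampling are assumptions made for illustration, not a FAIR reference implementation.

```python
# A hedged sketch of the what-if analysis described above: change one factor,
# rerun the Monte Carlo simulation, and compare loss exposure. All names and
# numbers are illustrative assumptions.
import random
import statistics

def simulate_ale(frequency, magnitude, trials=10_000):
    """Mean simulated annual loss for one scenario, given (min, most likely, max) ranges."""
    losses = []
    for _ in range(trials):
        events = random.triangular(frequency[0], frequency[2], frequency[1])
        per_event = random.triangular(magnitude[0], magnitude[2], magnitude[1])
        losses.append(events * per_event)
    return statistics.mean(losses)

baseline = simulate_ale((0.2, 1.0, 4.0), (100_000, 500_000, 3_000_000))

# What if a new control roughly halves the expected event frequency?
with_control = simulate_ale((0.1, 0.5, 2.0), (100_000, 500_000, 3_000_000))

print(f"Baseline annualized loss exposure: ${baseline:,.0f}")
print(f"With the hypothetical control:     ${with_control:,.0f}")
print(f"Estimated annual benefit:          ${baseline - with_control:,.0f}")
```

Comparing the baseline run against the changed run is what lets you judge which "dial" buys the most risk reduction per dollar.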

But again, we can't begin to get there until we have this foundational set of definitions, frameworks, and such to do that sort of analysis. That's what we're doing with the Factor Analysis of Information Risk (FAIR) framework, and without some sort of framework like that, there's no way you can get there.

Gardner: I am afraid we’ll have to leave it there. We’ve been talking with a panel of experts on how new trends and solutions are emerging in the area of risk management and analysis. And we’ve seen how new tools for communication and using big data to understand risks are also being brought to the table.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference in Newport Beach, California. I'd like to thank our panel: Jack Freund, PhD, Information Security Risk Assessment Manager at TIAA-CREF. Thanks so much Jack.

Freund: Thank you, Dana.

Gardner: We’ve also been speaking with Jack Jones, Principal at CXOWARE.

Jones: Thank you. Thank you, pleasure to be here.

Gardner: And last, Jim Hietala, the Vice President for Security at The Open Group. Thanks.

Hietala: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions; your host and moderator through these thought leadership interviews. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect podcast on best managing the risks from expanded use and distribution of big data enterprise assets. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Tuesday, January 29, 2013

AT&T Cloud Services Built on VMware vCloud Datacenter Meet Evolving Business Demands for Advanced IaaS

Transcript of a BriefingsDirect podcast on how telecom giant AT&T is leveraging its networking and cloud expertise to provide advanced cloud services.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Dana Gardner
Today, we present a sponsored podcast discussion on how global telecommunications giant AT&T has created advanced cloud services for its customers. We'll see how AT&T has developed the ability to provide virtual private clouds and other computing capabilities as integrated services at scale.

Stay with us now to learn more about building the best infrastructure to handle some of the most demanding network and compute services for one of the world's largest service providers. Here to share her story on building top-performing infrastructure is Chris Costello, Assistant Vice President of AT&T Cloud Services. Welcome, Chris. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Chris Costello: Thank you, Dana.

Gardner: Just to help us understand, because it's such a large company and you provide so many services, what cloud services generally is AT&T providing now, and why is this an important initiative for you?

Costello: AT&T has been in the hosting business for over 15 years, and so it was only a natural extension for us to get into the cloud services business to evolve with customers' changing business demands and technology needs.

Chris Costello
We have cloud services in several areas. The first is our AT&T Synaptic Compute as a Service. This is a hybrid cloud that allows VMware clients to extend their private clouds into AT&T's network-based cloud using a virtual private network (VPN). And it melds the security and performance of VPN with the economics and flexibility of a public cloud. So the service is optimized for VMware's more than 350,000 clients.

If you look at customers who have internal clouds today or private data centers, they like the control, the security, and the leverage that they have, but they really want the best of both worlds. There are certain workloads where they want to burst into a service provider’s cloud.

We give them that flexibility, agility, and control, where they can simply point and click, using free downloadable tools from VMware, to instantly turn up workloads into AT&T's cloud.

Another capability that we have in this space is AT&T Platform as a Service. This is targeted primarily to independent software vendors (ISVs), IT leaders, and line-of-business managers. It allows customers to choose from 50 pre-built applications, instantly mobilize those applications, and run them in AT&T's cloud, all without having to write a single line of code.

So we're really starting to get into more of the informal buyers, those line-of-business managers, and IT managers who don't have the budget to build it all themselves, or don't have the budget to buy expensive software licenses for certain application environments.

Examples of some of the applications that we support with our platform as a service (PaaS) are things like salesforce automation, quote and proposal tools, and budget management tools.

Storage space

The third key category of AT&T's Cloud Services is in the storage space. We have our AT&T Synaptic Storage as a Service, and this gives customers control over storage, distribution, and retrieval of their data, on the go, using any web-enabled device. In a little bit, I can get into some detail on use cases of how customers are using our cloud services.

This is a very important initiative for AT&T. We're seeing customer demand of all shapes and sizes. We have a sizable business and effort supporting our small- to medium-sized business (SMB) customers, and we have capabilities that we have tailor-developed just to reach those markets.

As an example, in SMB, it's all about the bundle. It's all about simplicity. It's all about on demand. And it's all about pay per use and having a service provider they can trust.

In the enterprise space, you really start getting into detailed discussions around security. You also start getting into discussions with many customers who already have private networking solutions from AT&T that they trust. When you start talking with clients around the fact that they can run a workload, turn up a server in the cloud, behind their firewall, it really resonates with CIOs that we're speaking with in the enterprise space.

Also in enterprises, it's about having a globally consistent experience. So as these customers are reaching new markets, it's all about not having to stand up an additional data center, compute instance, or what have you, and having a very consistent experience, no matter where they do business, anywhere in the world.

New era for women in tech

Gardner: Let's look into your role, Chris, as an IT executive and also a woman. The fact is that a significant majority of CIOs and IT executives are men, and that's been the case for quite some time. But I'm curious, does cloud computing, and the accompanying shift toward IT becoming more of a services-brokering role, change that? Do you think that, with consensus building among business and partner groups becoming more important in that brokering role, this might bring in a new era for women in tech?

Costello: I think it is a new era for women in tech. Specifically to my experience in working at AT&T in technology, this company has really provided me with an opportunity to grow both personally and professionally.

I currently lead our Cloud Office at AT&T and, prior to that, ran AT&T’s global managed hosting business across our 38 data centers. I was also lucky enough to be chosen as one of the top women in wireline services.

What drives me as a woman in technology is that I enjoy the challenge of creating offers that meet customer needs, whether in the cloud space, driving eCommerce, high-performance computing environments, or disaster recovery (DR) solutions.

I love spending time with customers. That's my favorite thing to do. I also like to interact with the many partners and vendors that I work with to stay current on trends and technologies. The key to success as a woman working in technology is being able to build offers that solve customers' business problems, number one.

Number two is being able to then articulate the value of a lot of the complexity around some of these solutions, and package the value in a way that’s very simple for customers to understand.

Some of the challenge and also opportunity of the future is that, as technology continues to evolve, it’s about reducing complexity for customers and making the service experience seamless. The trend is to deliver more and more finished services, versus complex infrastructure solutions.

Gardner: It’s a very interesting period. Do you have any sense of a future direction in terms of IT roles? Does the actual role, whether it’s a man or woman, shift? The leadership in IT, how is that changing?

Costello: I've been in the technology space for a number of years at AT&T and I've had the opportunity to interact with many women in leadership, whether they be my peer group, managers that work as a part of my team, and/or mentors that I have within AT&T that are senior leaders within the business.

I've worked with several women in leadership. I think that trend is going to continue. I also mentor three women at AT&T, whether they be in technology, sales, or an operations role. So I'm starting to see this trend continue to grow.

Gardner: You have a lot of customers who are already using your network services. It seems a natural extension for them to look to you for cloud, and now you have created these, as I have seen it termed, virtual private clouds.

From what you're describing, that allows folks to take whatever cloud activities they've got and be able to burst those into your cloud, and that gives them that elasticity. I imagine there are probably some good cost-efficiencies as well.

Costello: Absolutely. We've embedded cloud capabilities into the AT&T managed network. It enables us to deliver a mobile cloud as well. That helps customers to transform their businesses. We're delivering cloud services in the same manner as voice and data services, intelligently routed across our highly secure, reliable network.

AT&T's cloud is embedded in our network. It's not sitting on top of or attached to our network, but it's fully integrated to provide customers a seamless, highly secure, low-latency, and high-performing experience.

Gardner: Let’s look into the VMware solution set, and why you chose VMware. Maybe you can explain the process. Was this a data-driven decision? Was this a pure architecture? Were there other technology or business considerations? I'm just trying to better understand the lead-up to using vCloud Datacenter Services as a core to the AT&T Synaptic Compute as a Service. 

Multiple uses

Costello: AT&T uses VMware in several of our hosting application and cloud solutions today. In the case of AT&T Synaptic Compute as a Service, we use that in several ways, both to serve customers in public cloud and hybrid, as well as private cloud solutions.

We've also been using VMware technology for a number of years in AT&T's Synaptic Hosting offer, which is our enterprise-grade utility computing service. We've also been serving customers with server virtualization solutions, available in AT&T data centers around the world, which can also be extended into customer or third-party locations.

Just to drill down on some of the key differentiators of AT&T Synaptic Compute as a Service, it’s two-fold.

One is that we integrate with AT&T private networking solutions. Some of the benefits that customers enjoy as a result are orchestration of resources, where we'll take the compute, storage, and networking resources and provide the exact amount of resources at exactly the right time to customers, on demand.

Our solutions offer enterprise-grade security. The fact that we've integrated our AT&T Synaptic Compute as a Service with our private networking solutions allows customers to extend their cloud into our network using VPN.

Let me touch upon VMware vCloud Datacenter Services for a minute. We think that’s another key differentiator for us, in that we can allow clients to seamlessly move workloads to our cloud using native VMware toolsets. Essentially, we're taking technical complexity and interoperability challenges off the table.

How this manifests itself in terms of client solutions is that an engineering firm can now perform computationally intensive complex mathematical modeling on the fly and on demand using AT&T Synaptic Compute as a Service.

Medical firms can use our solutions for medical imaging to securely store and access x-rays. Companies that are interested in mobile cloud solution can use AT&T’s Mobile Enterprise Application Platform to offer product catalogs in the cloud with mobile access.

Gardner: It certainly appears to me that we're going to be finding a lot more ways in which the private cloud infrastructure in these organizations can synergistically add value and benefit from public cloud services.

Cloud interaction

Even though we want to distill out the complexities, there's something about the interaction between the private cloud in the enterprise and public cloud services, like AT&T's, that depends on some sort of core architecture. How are you looking at making that visible? What are some of the important requirements that you have for making this hybrid cloud capability work?

Costello: One of the requirements for a hybrid cloud solution to be a success, specifically in terms of how AT&T offers the service, is that we have a large base of customers that have private networking solutions with AT&T, and they view their networks as secure and scalable.

Many of the customers we have today have been using these networks for many years. And as customers are looking to cloud solutions to evolve their data centers and their application environments, they're demanding that the solution be secure and scalable. So the fact that we let customers extend their private cloud and instantly access our cloud environment over their private network is key, especially when it comes to enterprise customers.

Secondly, with the vCloud Datacenter program that we are part of with VMware, letting customers copy and paste workloads and see all of their virtual machines, whether in their own private cloud environment or in a hybrid solution provided by AT&T, and manage them through a single interface, is key to reducing technical complexity and speeding time to market.

Gardner: I should also think that these concepts around the software-defined datacenter and software-defined networking play a part in that. Is that something that you are focused on?

Costello: Software-defined datacenter and software-defined networks are essentially what we're talking about here with some uniqueness that AT&T Labs has built within our networking solutions. We essentially take our edge, our edge routers, and the benefits that are associated with AT&T networking solutions around redundancy, quality of service, etc., and extend that into cloud solutions, so customers can extend their cloud into our network using VPN solutions.

Gardner: As you moved toward this really important initiative, what were some of the other requirements you had in terms of functionality for your infrastructure? What were you really looking for?

Costello: In terms of functionality for the infrastructure, if we start with enterprise, the security aspects of the solution had to prove out for the customers that we do business with. When you think about clients in financial services, the federal government, and healthcare, as examples, we really had to prove that the data was secure and private. The certifications and audits and compliance that we were able to provide for our customers were absolutely critical to earning customers’ business.

We're seeing more and more customers, who have had very large IT shops in the past, who are now opening the door and are very open to these discussions, because they're viewing AT&T as a service provider that can really help them to extend the private cloud environment that they have today. So security is absolutely key.

As I mentioned earlier, networking capabilities are very attractive to the enterprise customers that we're talking to. They may think, "I've already invested in this global managed network that I have in multiple points around the world, and I'm simply adding another node on my network. Within minutes I can turn up workloads or store data in the cloud and only pay for the resources that I utilize, not only the compute and/or storage resources, but also the network resources."

Added efficiency

Previously many customers would have to buy a router and try to pull together a solution on their own. It can be costly and time consuming. There's a whole lot of efficiency that comes with having a service provider being able to manage your compute storage and networking capabilities end to end.

Global scale was also very critical to the customers who we've been talking to. The fact that AT&T has localized and distributed resources through a combination of our 38 data centers around the world, as well as central offices, makes it very attractive to do business with AT&T as a service provider.

Also, having that enterprise-grade customer experience is absolutely critical to the customers who do business with AT&T. When they think of our brand, they think of reliability. If there are service degradation or change management issues, they want to know that they've got a resource that is working on their behalf that has technical expertise and is a champion working proactively on their cloud environment.

Gardner: You mentioned that it's a natural extension for those who are using your network services to move towards cloud services. You also mentioned that VMware has somewhere in the order of 350,000 customers with private-cloud installations that can now seamlessly move to your public-cloud offering.

Tell me how that came about and why the VMware platform, as well as their installed base, has become critical for you?

Costello: We've been doing business with VMware for a number of years. We also have a utility-computing platform called AT&T Synaptic Hosting. We learned early on, in working with customers’ managed utility computing environments, that VMware was the virtualization tool of choice for many of our enterprise customers.

As technologies evolved over time and cloud technologies have become more prevalent, it was absolutely paramount for us to pick a virtualization partner that was going to provide the global scale that we needed to serve our enterprise customers, and to be able to handle the large amount of volume that we receive, given the fact that we have been in the hosting business for over 15 years.

As a natural extension of our Synaptic Hosting relationship with VMware for many years, it only made sense that we joined the VMware vCloud Datacenter program. VMware is baked into our Synaptic Compute as a Service capability. And it really lets customers have a simplified hybrid cloud experience. In five simple steps, customers can move workloads from their private environment into AT&T's cloud environment.

Think of yourself as the IT manager coming in to start your workday. All of a sudden, you hit 85 percent utilization in your environment, but you want to very easily access additional resources from AT&T. You can use the same console that you use to perform your daily job for the data center that you run in-house.

In five clicks, you're viewing your in-house private-cloud resources that are VMware based and your AT&T virtual machines (VMs) running in AT&T's cloud, our Synaptic Compute as a Service capability. That all happens in minutes' time.

Fantastic discussions

I've been in the hosting business and application management business for many years and have seen lots of fantastic discussions with customers. The whole thing falls apart when you start talking about the complexities of interoperability and having to write scripts and code and not being able to accept tools that the clients have already made investments in.

The fact that we're part of the vCloud Datacenter program provides a lot of benefits for our clients, when you talk to customers about the benefit of running that cloud in AT&T's network. Some of the additional benefits are no incremental bandwidth needed at the data center and no investment in a managed-router solution.

We have patented AT&T technology that completely isolates traffic from other cloud traffic. The network and cloud elasticity work in tandem. So all of this happens on the fly, instantaneously. Then, all of the end-to-end class of service prioritization and QoS and DDOS protection capabilities that are inherent in our network are now surrounding the compute experience as well.

Gardner: We've certainly seen a lot of interest in this hybrid capability. I wonder if you could help me identify some of the use cases that this is being employed with now. I'm thinking that if I needed to expand my organization into another country or to another region of the world, given your 38 data centers and your global reach, I would be able to take advantage of this and bring services to that region from my private cloud pretty rapidly.

Is that one of the more popular use cases, or are there some others that are on the forefront of this hybrid uptake?

Costello: I speak with a lot of customers who are looking to expand virtually. They have data-center, systems, and application investments, and they have global headquarters locations, but they don't want to have to stand up another data center and/or ship staff out to other locations. So certainly one use case that's very popular with customers is, "I can expand my virtual data-center environment and use AT&T as a service provider to help me do that."

Another use case that's very popular with our customers is disaster recovery. We see a lot of customers looking for a more efficient way to have business continuity, have the ability to fail over in the event of a disaster, and also get in and test their plans more frequently than they're doing today.

For many of the solutions that are in place today, clients are saying they are expensive and/or they're just not meeting their service-level agreements (SLAs) to their business unit. One of the solutions that we recently put in place for a client is that we put them in two of AT&T's geographically diverse data centers. We wrapped it with AT&T's private-networking capability and then we solutioned our AT&T Synaptic Compute as a Service and Storage as a Service.

The customer ended up with a better SLA and a very powerful return on investment (ROI) as well, because they're only paying for the cloud resources when the meter is running. They now have a stable environment so that they can get in and test their plans as often as they'd like to and they're only paying for a very small storage fee in the event that they actually need to invoke in the event of a disaster. So DR plans are very popular.

Another use case that’s very popular among our clients is short-term compute. We work with a lot of customers who have massive mathematical calculations and they do a lot of number crunching.

Data crunching

One customer that comes to mind is one that looks at the probability of natural disasters on large structures, such as bridges, tunnels, nuclear power plants. They came to AT&T, looked at our Synaptic Compute as a Service, and ultimately ran a very large number of VMs in a workload. Because of the large amount of data crunching they had to do, they ran it for two weeks straight on our platform. They finished the report. They were very pleased with the results, and the convenience factor was there.

They didn’t have to stand up an environment temporarily for themselves and now they use us anytime they sign a new client for those bursty type, short-term compute workloads.

Certainly test and development is one of the most highly adopted use cases I'm seeing among CIOs, directors of IT, and other functional managers, in that it's lower risk. Over the years, we've gone from, "Will I use the cloud?" to "What workloads are going to fit for me in the cloud?"

For those that are earlier on in their journey, using AT&Ts Synaptic Compute as a Service for their test and development environments certainly provides the performance, the global reach, and also the economics of pay per use. And if a client has private networking solutions from AT&T, they can fully integrate with our private networking solutions.

Finally, in the compute space, we're seeing a lot of customers start to hang virtual desktop solutions off of their compute environment. In the past, when I would ask clients about virtual desktop infrastructure (VDI), they'd say, "We're looking at it, but we're not sure. It hasn’t made the budget list." All of a sudden, it’s becoming one of the most highly requested use cases from customers, and AT&T has solutions to cover all those needs.
 
Gardner: I'm particularly interested in the spiky applications, where your workload spikes up, but then there is no sense of keeping resources available for it when they're not in use. Do you think that this will extend to some of the big data and analytics crunching that we've heard about or is that hurdle of getting the data to the cloud still a major issue? And does your unique position as a network service provider help pave the way for more of these big-data, spiky types of uses?

Costello: I don’t think anyone is in a better position than AT&T to be able to help customers to manage their massive amounts of data, given the fact that a lot of this data has to reside on very strong networking solutions. The fact that we have 38 data centers around the world, a global reach from a networking perspective, and all the foundational cloud capabilities makes a whole lot of sense.

Speaking about this type of a bursty use case, we host some of the largest brand name retailers in the world. When you think about it, a lot of these retailers are preparing for the holidays, and their servers are going underutilized much of year. So how attractive is it to be able to look at AT&T, as a service provider, to provide them robust SLAs and a platform that they only have to pay for when they need to utilize it, versus sitting and going very underutilized much of the year?

We also host many online gaming customers. When you think about the gamers that are out there, there is a big land rush when the buzz occurs right before the launch of a new game. We work very proactively with those gaming customers to help them size their networking needs well in advance of a launch. Also we'll monitor it in real time to ensure that those gamers have a very positive experience when that launch does occur.

Gardner: I suppose one other area that’s top of mind for lots of folks is how to extend the enterprise out to the mobile tier, to those mobile devices. Again, this seems to be an area where having the network services expertise and reach comes to an advantage.

For an enterprise that wanted to extend more of their apps, perhaps the VDI experience, out to their mobile devices, be they smartphones or tablets, what offerings do you have that might help us grease the skid towards that kind of a value?

Mobility applications

Costello: AT&T has a very successful mobility applications business, and we have a couple of examples of how customers use our cloud services to make their mobile applications more productive.

First and foremost, we have a set of experts and consultants who help customers mobilize their applications. These might be internal, proprietary customer applications, and we can really help them move to an on-demand mobile environment.

Secondly, here are a couple of cloud examples of how customers use our capabilities off the shelf. One is our AT&T Synaptic Storage as a Service capability. We find that many customers are looking for a secure place to collaborate on data sharing. They're looking for a place to access and store their data to enable a worker-on-the-go scenario, or to enable field-service applications or technicians.

Our AT&T Synaptic Storage as a Service capability gives the end user the ability to store, distribute, share, and retrieve that data on the go using any web-enabled device. Another example is AT&T's Platform as a Service capability, a great foundational tool for users to go in, use any one of our pre-built applications, and then instantly mobilize that application.

We have a customer who recently used this, because they had a customer meeting and they didn't have a sophisticated way to get surveys out to their customers. They wanted to create a database on the fly and get instantaneous feedback.

So they went into AT&T's Platform as a Service -- and this is a marketing person, mind you, not a technical user -- and entered the questions they wanted to ask the customers. They sent the quick questionnaire out to the end users, five simple questions, and the clients answered the questions.

Ultimately, that customer had a very sophisticated database with all of that information, which they could use, first, for market sensing on how to improve their products, and second, as a marketing tool to provide promotional information to those customers in the future.

Gardner: Very good. We've been talking about how global telecommunications giant AT&T has been creating and delivering advanced cloud services for its customers, and we have seen how a VMware-centric infrastructure approach helps provide virtual private clouds and other computing capabilities as integrated services at scale.

So thanks to our guest, Chris Costello, Assistant Vice President of AT&T Cloud Services. I really appreciate your input.

Costello: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to our audience for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how telecom giant AT&T is leveraging its networking and cloud expertise to provide advanced cloud services. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.

You may also be interested in:

Monday, January 28, 2013

The Open Group Keynoter Sees Big-Data Analytics Bolstering Quality, Manufacturing, Processes

Transcript of a BriefingsDirect podcast on how Ford Motor Company is harnessing multiple big data sources to improve products and operations.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with The Open Group Conference on Jan. 28 in Newport Beach, California.

Gardner
I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host and moderator throughout these business transformation discussions. The conference will focus on "Big Data -- The Transformation We Need to Embrace Today."

We are here now with one of the main speakers at the conference, Michael Cavaretta, PhD, Technical Leader of Predictive Analytics for Ford Research and Advanced Engineering in Dearborn, Michigan.

We’ll see how Ford has exploited the strengths of big data analytics by directing them internally to improve business results. In doing so, they scour the metrics from the company’s best processes across myriad manufacturing efforts and through detailed outputs from in-use automobiles, all to improve and help transform their business. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Cavaretta has led multiple data-analytic projects at Ford to break down silos inside the company and best define Ford's most fruitful data sets. Ford has successfully aggregated customer feedback and extracted internal data to predict how new features and technologies will best improve its cars.

As a lead-in to his Open Group presentation, Michael and I will now explore how big data is fostering business transformation by allowing deeper insights into more types of data efficiently, and thereby improving processes, quality control, and customer satisfaction.

With that, please join me in welcoming Michael Cavaretta. Welcome to BriefingsDirect, Michael.

Michael Cavaretta: Thank you very much.

Gardner: Your upcoming presentation for The Open Group Conference is going to describe some of these new approaches to big data and how they offer valuable insights into internal operations, and therefore a better product. To start, what's different now in being able to get at this data and do this type of analysis, compared with, say, five years ago?

Cavaretta: The biggest difference has to do with the cheap availability of storage and processing power, where a few years ago people were very much concentrated on filtering down the datasets that were being stored for long-term analysis. There has been a big sea change with the idea that we should just store as much as we can and take advantage of that storage to improve business processes.

Gardner: That sounds right on the money, but how did we get here? How did we get to the point where we could start turning these technology advances -- as you say, better storage, networks, being able to move big datasets, that sort of thing -- into benefits? What's the process behind the benefit?

Sea change in attitude

Cavaretta: The process behind the benefits has to do with a sea change in the attitude of organizations, particularly IT within large enterprises. There's this idea that you don't need to spend so much time figuring out what data you want to store and worrying about the cost associated with it, and can think more about data as an asset. There is value in being able to store it and go back and extract different insights from it. This comes from really cheap storage, access to parallel-processing machines, and great software.

Gardner: It seems to me that for a long time, the mindset was that data is simply the output from applications, with applications being primary and the data being almost an afterthought. It seems like we've sort of flipped that. The data now is perhaps as important, or even more important, than the applications. Does that seem to hold true?

Cavaretta
Cavaretta: Most definitely, and we’ve had a number of interesting engagements where people have thought about the data that's being collected. When we talk to them about big data, storing everything at the lowest level of transactions, and what could be done with that, their eyes light up and they really begin to get it.

Gardner: I suppose earlier, when cost considerations and technical limitations were at work, we would just go for a tip-of-the-iceberg level. Now, as you say, we can get almost all the data. So, is this a matter of getting at more data, different types of data, bringing in unstructured data, all of the above? How much are you really going after here?

Cavaretta: I like to talk to people about the possibilities that big data provides, and I always tell them that I have yet to have a circumstance where somebody has given me too much data. You can pull in all this information and then answer a variety of questions, because you don't have to worry that something has been thrown out. You have everything.

You may have 100 questions, and each one of the questions uses a very small portion of the data. Those questions may use different portions of the data, a very small piece, but they're all different. If you go in thinking, "We’re going to answer the top 20 questions and we’re just going to hold data for that," that leaves so much on the table, and you don't get any value out of it.

Gardner: I suppose too that we can think about small samples or small datasets and aggregate them or join them. We have new software capabilities to do that efficiently, so that we’re able to not just look for big honking, original datasets, but to aggregate, correlate, and look for a lifecycle level of data. Is that fair as well?

Cavaretta: Definitely. We're a big believer in mash-ups and we really believe that there is a lot of value in being able to take even datasets that are not specifically big-data sizes yet, and then not go deep, not get more detailed information, but expand the breadth. So it's being able to augment it with other internal datasets, bridging across different business areas as well as augmenting it with external datasets.

A lot of times you can take something that is maybe a few hundred thousand records or a few million records, and by the time you're joining it and appending different pieces of information onto it, you get to big-data sizes.
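As a minimal sketch of that kind of mash-up, the Python snippet below joins a modest internal dataset with a second internal table and an external one. The file names and column names are hypothetical, used only to illustrate how adding breadth, rather than depth, grows the dataset.

import pandas as pd

# Hypothetical inputs: a modest internal dataset plus two other sources.
# File names and column names are illustrative, not real Ford data.
warranty = pd.read_csv("warranty_claims.csv")      # a few hundred thousand rows
plant = pd.read_csv("plant_quality.csv")           # internal manufacturing data
weather = pd.read_csv("regional_weather.csv")      # external dataset

# Broaden the data by appending columns from other business areas,
# rather than drilling deeper into any one source.
mashup = (warranty
          .merge(plant, on="vehicle_id", how="left")             # bridge business areas
          .merge(weather, on=["region", "month"], how="left"))   # add external context

print(mashup.shape)   # same number of rows, many more columns per record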

Gardner: Just to be clear, you’re unique. The conventional wisdom for big data is to look at what your customers are doing, or just the external data. You’re really looking primarily at internal data, while also availing yourself of what external data might be appropriate. Maybe you could describe a little bit about your organization, what you do, and why this internal focus is so important for you.

Internal consultants

Cavaretta: I'm part of a larger department that is housed over in the research and advanced-engineering area at Ford Motor Company, and we’re about 30 people. We work as internal consultants, kind of like Capgemini or Ernst & Young, but only within Ford Motor Company. We’re responsible for going out and looking for different opportunities from the business perspective to bring advanced technologies. So, we’ve been focused on the area of statistical modeling and machine learning for I’d say about 15 years or so.

And in this time, we've had a number of engagements where we've talked with different business customers, and people have said, "We'd really like to do this." Then, we'd look at the datasets that they have and say, "Wouldn't it be great if we had had this? Now we have to wait six months or a year."

These new technologies are really changing the game from that perspective. We can turn on the complete fire hose and say that we don't have to worry about that anymore. Everything is coming in. We can record it all. We don't have to worry about whether the data supports a given analysis, because it's all there. That's really a big benefit of big-data technologies.

Gardner: If you've been doing this for 15 years, you must be demonstrating a return on investment (ROI) or a value proposition back to Ford. Has that value proposition been changing? Do you expect it to change? What might be your real value proposition two or three years from now?

Cavaretta: The real value proposition definitely is changing as things are being pushed down in the company to lower-level analysts who are really interested in looking at things from a data-driven perspective. From when I first came in to now, the biggest change has been when Alan Mulally came into the company, and really pushed the idea of data-driven decisions.

Before, we were getting a lot of interest from people who were really very focused on the data that they had internally. After that, they were getting a lot of questions from their management and from upper-level directors and vice presidents saying, "We've got all these data assets. We should be getting more out of them." This strategic perspective has really changed a lot of what we've done in the last few years.

Gardner: As I listen to you Michael, it occurs to me that you are applying this data-driven mentality more deeply. As you pointed out earlier, you're also going after all the data, all the information, whether that’s internal or external.

In the case of an automobile company, you're looking at the factory, the dealers, what drivers are doing, what the devices within the automobile are telling you, factoring that back into design relatively quickly, and then repeating this process. Are we getting to the point where this sort of Holy Grail notion of a total feedback loop across the lifecycle of a major product like an automobile is really within our grasp? Are we getting there, or is this still kind of theoretical? Can we pull it all together and make it a science?

Cavaretta: The theory is there. The question has more to do with the actual implementation and the practicality of it. We're still talking about a lot of data, and even with new advanced technologies and techniques, that's a lot of data to store, a lot of data to analyze, and a lot of data to make sure we can mash up appropriately.

And while I think the potential is there and the theory is there, there is also work in being able to get the data from multiple sources. Everything you can get back from the vehicle, fantastic. Now, if you marry that up with internal data, is it survey data, is it manufacturing data, is it quality data? Which things do you want to go after first? We can't do everything all at the same time.

Highest value

Our perspective has been: let's make sure that we identify the highest-value, greatest-ROI areas, then begin to take some of the major datasets that we have, push them, and get more detail. Mash them up appropriately and really prove out the value of the technologies.

Gardner: Clearly, there's a lot more to come in terms of where we can take this, but I suppose it's useful to have a historic perspective and context as well. I was thinking about some of the early quality gurus like Deming and some of the movement towards quality like Six Sigma. Does this fall within that same lineage? Are we talking about a continuum here over that last 50 or 60 years, or is this something different?

Cavaretta: That’s a really interesting question. From the perspective of analyzing data, using data appropriately, I think there is a really good long history, and Ford has been a big follower of Deming and Six Sigma for a number of years now.

The difference, though, is this idea that you don't have to worry so much upfront about getting the data. If you're doing this right, you have the data right there, and this has some great advantages. You don't have to wait until you have enough history to look for some of these patterns. Then again, it also has some disadvantages, namely that you've got so much data that it's easy to find things that could be spurious correlations, or models that don't make any sense.
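A small simulation makes that spurious-correlation risk concrete. In the Python sketch below, thousands of purely random columns are correlated against an unrelated target, and a handful still look meaningful by chance; the row and column counts are arbitrary assumptions chosen only to illustrate the effect.

import numpy as np

# Illustration of the spurious-correlation risk: with enough unrelated
# columns, some will correlate with the target purely by chance.
rng = np.random.default_rng(0)

n_rows, n_cols = 200, 5000
noise_features = rng.normal(size=(n_rows, n_cols))   # columns with no real signal
target = rng.normal(size=n_rows)                      # target unrelated to all of them

# Correlation of each column with the target.
corrs = np.array([np.corrcoef(noise_features[:, j], target)[0, 1]
                  for j in range(n_cols)])

print(f"Strongest absolute correlation found by chance: {np.abs(corrs).max():.2f}")
print(f"Columns with |r| > 0.2: {(np.abs(corrs) > 0.2).sum()}")

With these settings the strongest chance correlation comes out around 0.3 and a couple of dozen columns clear the 0.2 threshold, even though none of them carries any real signal, which is why Cavaretta stresses domain knowledge as the check on what the data appears to say.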

The piece that is required is good domain knowledge, in particular when you are talking about making changes in the manufacturing plant. It's very appropriate to look at things and be able to talk with people who have 20 years of experience to say, "This is what we found in the data. Does this match what your intuition is?" Then, take that extra step.

Gardner: Tell me a little about a day in the life of your organization and your team, to let us know what you do. How do you go about making more data available and then reaching some of these higher-level benefits?

Cavaretta: We're very much focused on interacting with the business. Most of all, we work on pilot projects with our business customers to bring advanced analytics and big-data technologies to bear against these problems. So we work in what we call a push-and-pull model.

We go out and investigate technologies and say these are technologies that Ford should be interested in. Then, we look internally for business customers who would be interested in that. So, we're kind of pushing the technologies.

From the pull perspective, we've had so many successful engagements, and such good contacts and good credibility within the organization, that we've had people come to us and say, "We've got a problem. We know this is in your domain. Give us some help. We'd love to hear your opinions on this."

So we get pull from the business side, and then our job is to match up those two pieces. It's best when we're looking at a particular technology and somebody comes to us, and we say, "Oh, this is a perfect match."

Big data

Those types of opportunities have been increasing in the last few years, and we've been very happy with the number of internal customers that have really been very excited about the areas of big data.

Gardner: Because this is The Open Group Conference and an audience that's familiar with the IT side of things, I'm curious how this relates to software and software development. Of course, there are so many more millions of lines of code in automobiles these days, with software being more important than just about anything. Are you applying a lot of what you're doing to the software side of the house, or are the agile practices, the feedback loops, and the performance management issues a separate domain, or is there crossover here?

Cavaretta: There's some crossover. The biggest area that we've been focused on has been taking information, whether from internal business processes or from the vehicle, and being able to bring it back in to derive value. We have very good contacts in the Ford IT group, and they have been fantastic to work with in bringing interesting tools and technology to bear, and then looking at moving those into production and the best way to do that.

A fantastic development has been this idea that we’re using some of the more agile techniques in this space and Ford IT has been pushing this for a while. It’s been fantastic to see them work with us and be able to bring these techniques into this new domain. So we're pushing the envelope from two different directions.

Gardner: It sounds like those efforts will meet up at some point, given the complementary nature of your activities.

Cavaretta: Definitely.

Gardner: Let's move on to this notion of the "Internet of things," a very interesting concept that a lot of people talk about. It seems relevant to what we've been discussing.

We have sensors in these cars, wireless transfer of data, and more and more opportunity for location information to be brought to bear: where cars are, how they're driven, speed information, all sorts of metrics, maybe making those available through cloud providers that assimilate this data.

So let’s not go too deep, because this is a multi-hour discussion all on its own, but how is this notion of the Internet of things being brought to bear on your gathering of big data and applying it to the analytics in your organization?

Cavaretta: It is a huge area, not only from the internal process perspective -- RFID tags within the manufacturing plants and out on the plant floor -- but also all of the information that's being generated by the vehicle itself.

The Ford Energi generates about 25 gigabytes of data per hour. So you can imagine selling a couple of million vehicles in the near future, with that amount of data being generated. There are huge opportunities within that, and there are also some interesting opportunities having to do with opening up some of these systems for third-party developers. OpenXC is an initiative that we have going on over at Research and Advanced Engineering.
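To put rough numbers on that, here is a back-of-the-envelope sketch. The 25 GB/hour figure comes from the discussion; the fleet size and average daily driving time are illustrative assumptions, not Ford figures.

# Back-of-the-envelope estimate of fleet-scale data volume.
# The 25 GB/hour rate comes from the discussion; fleet size and
# driving hours per day are illustrative assumptions.

gb_per_vehicle_hour = 25
vehicles = 2_000_000          # "a couple of million vehicles"
driving_hours_per_day = 1     # assumed average daily driving time

daily_gb = gb_per_vehicle_hour * vehicles * driving_hours_per_day
daily_pb = daily_gb / 1_000_000   # decimal units: 1 PB = 1,000,000 GB

print(f"Roughly {daily_pb:,.0f} PB generated per day across the fleet")
# => roughly 50 PB per day, which is why only a subset is sent back for analysis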

Huge number of sensors

We have a lot of data coming from the vehicle. There's a huge number of sensors and processors being added to vehicles. There's data being generated there, as well as communication between the vehicle and your cell phone, and communication between vehicles.

There's a group over in Ann Arbor, Michigan, the University of Michigan Transportation Research Institute (UMTRI), that's investigating that, as well as communication between the vehicle and, let's say, a home system. It lets the home know that you're on your way and that it's time to raise the temperature, if it's winter outside, or cool the house in the summertime.

The data that's being generated there is invaluable and could be used for a lot of benefits, both from the corporate perspective and for the very nature of the environment.

Gardner: Just to put a stake in the ground on this, how much data do cars typically generate? Do you have a sense of what's the case now, on average?

Cavaretta: The Energi, according to the latest information that I have, generates about 25 gigabytes per hour. Different vehicles are going to generate different amounts, depending on the number of sensors and processors on the vehicle. But the biggest key has to do with not necessarily where we are right now but where we will be in the near future.

With the amount of information that's being generated from the vehicles, a lot of it is just internal stuff. The question is how much information should be sent back for analysis to find different patterns. That becomes really interesting as you look at external sensors, temperature, humidity. You can know when the windshield wipers go on, and then take that information and mash it up with other external data sources too. It's a very interesting domain.

Gardner: So clearly, it's multiple gigabytes per hour per vehicle and probably going much higher.

Cavaretta: Easily.

Gardner: Let's move forward now for those folks who have been listening and are interested in bringing this to bear on their organizations and their vertical industries, from the perspective of skills, mindset, and culture. Are there standards, certification, or professional organizations that you’re working with in order to find the right people?

It's a big question. Let's look at what skills you target for your group and in what ways you think you can improve on that. Then, we'll get into some of those larger issues about culture and mindset.

Cavaretta: The skills that we have in our department, in particular on our team, are in the area of computer science, statistics, and some good old-fashioned engineering domain knowledge. We’ve really gone about this from a training perspective. Aside from a few key hires, it's really been an internally developed group.

Targeted training

The biggest advantage that we have is that we can go out and be very targeted with the training that we do. There are such good tools out there, especially in the open-source realm, that we can spin things up with relatively low cost and low risk, and do a number of experiments in the area. That's really the way that we push the technologies forward.

Gardner: Why The Open Group? Why is that a good forum for your message, and for your research here?

Cavaretta: The biggest reason is the focus on the enterprise. There are a lot of advantages and a lot of business cases in looking at large enterprises, where there are a lot of systems, and where a company can take a relatively small improvement and have it make a large difference on the bottom line.

Talking with The Open Group really gives me an opportunity to bring people on board with the idea that you should be looking at a difference in mindset. It's not, "Here's a way that data is being generated; try to conceive of some questions we can use, and we'll store just that." It's, "Let's take everything, we'll worry about it later, and then we'll find the value."

Gardner: I'm sure the viewers of your presentation on January 28 will be gathering a lot of great insights. A lot of the people that attend The Open Group conferences are enterprise architects. What do you think those enterprise architects should be taking away from this? Is there something about their mindset that should shift in recognizing the potential that you've been demonstrating?

Cavaretta: It's important for them to be thinking about data as an asset, rather than as a cost. You may even have to spend some money, and it may feel a little unsafe without really solid ROI at the beginning. Then, move toward pulling that information in and storing it in a way that allows not just the high-level data scientists to get access to it and provide value, but anyone who is interested in the data overall. Those are very important pieces.

The last one is how you take a big-data project, something where you're not storing data in the traditional business intelligence (BI) framework that an enterprise develops, and then connect it to the BI systems and provide value through those mash-ups. Those are really important areas that still need some work.

Gardner: Another big constituency within The Open Group community are those business architects. Is there something about mindset and culture, getting back to that topic, that those business-level architects should consider? Do you really need to change the way you think about planning and resource allocation in a business setting, based on the fruits of things that you are doing with big data?

Cavaretta: I really think so. The digital asset that you have can be monetized to change the way the business works, and that could be done by creating new assets that then can be sold to customers, as well as improving the efficiencies of the business.

High quality data

I think this idea that everything is going to be very well-defined, and that a lot of work gets put into making sure the data has high quality before it's stored, needs to change somewhat. As you're pulling the data in and thinking about long-term storage, it's more about access to the information than about the problem of just storing it.

Gardner: Interesting that you brought up that notion that the data becomes a product itself and even a profit center perhaps.

Cavaretta: Exactly. There are many companies, especially large enterprises, that are looking at their data assets and wondering what can they do to monetize this, not only to just pay for the efficiency improvement but as a new revenue stream.

Gardner: We're almost out of time. For those organizations that want to get started on this, are there any 20/20-hindsight or Monday-morning-quarterback insights you can provide? How do you get started? Do you appoint a leader? Do you need a strategic roadmap, getting this culture or mindset shifted, pilot programs? How would you recommend that people begin the process of getting into this?

Cavaretta: We're definitely huge believers in pilot projects and proofs of concept, and we like to develop roadmaps by doing. So get out there. Understand that it's going to be messy. Understand that it may be a little bit more costly and the ROI isn't going to be there at the beginning.

But get your feet wet. Start doing some experiments, and then, as those experiments turn from just experimentation into really providing real business value, that’s the time to start looking at a more formal aspect and more formal IT processes. But you've just got to get going at this point.

Gardner: I would think that the competitive forces are out there. If you are in a competitive industry, and those that you compete against are doing this and you are not, that could spell some trouble.

Cavaretta: Definitely.

Gardner: We've been talking with Michael Cavaretta, PhD, Technical Leader of Predictive Analytics at Ford Research and Advanced Engineering in Dearborn, Michigan. Michael and I have been exploring how big data is fostering business transformation by allowing deeper insights into more types of data, all very efficiently. This is improving processes, strengthening quality control, and adding to customer satisfaction.

Our conversation today comes as a lead-in to Michael's upcoming plenary presentation. He is going to be talking on January 28 in Newport Beach, California, as part of The Open Group Conference.

You will hear more from Michael and other global leaders on big data who will be gathering at this conference to talk about business transformation from big data. So a big thank-you to Michael for joining us in this fascinating discussion. I really enjoyed it, and I look forward to your presentation on the 28th.

Cavaretta: Thank you very much.

Gardner: And I would encourage our listeners and readers to attend the conference or follow more of the threads in social media from the event. Again, it’s going to be happening from January 27 to January 30 in Newport Beach, California.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout this thought leadership interview series. Thanks again for listening, and come back next time.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect podcast on how Ford Motor Company is harnessing multiple big data sources to improve products and operations. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2013. All rights reserved.

You may also be interested in: