Friday, October 28, 2011

Continuous Improvement And Flexibility Are Keys to Successful Data Center Transformation, Say HP Experts

Transcript of a sponsored podcast in conjunction with an HP video series on how companies can transform data centers productively and efficiently.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on two major pillars of proper and successful data center transformation (DCT) projects. We’ll hear from a panel of HP experts on proven methods that have aided productive and cost-efficient projects to reshape and modernize enterprise data centers.

This is the first in a series of podcasts on DCT best practices and is presented in conjunction with a complementary video series. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here today, we’ll learn about the latest trends buttressing the need for DCT and then how to do it well and safely. Specifically, we’ll delve into why it's important to fully understand the current state of an organization’s IT landscape and data center composition in order to then properly chart a strategy for transformation.

Secondly, we'll explore how to avoid pitfalls by balancing long-term goals with short-term flexibility. The key is to know how to constantly evaluate based on metrics and to reassess execution plans as DCT projects unfold. This avoids being too rigidly aligned with long-term plans and roadmaps and potentially losing sight of how actual progress is being made -- or not.

With us now to explain why DCT makes sense and how to go about it with lower risk, we are joined by our panel: Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business; Mark Grindle, Master Business Consultant at HP, and Bruce Randall, Director of Product Marketing for Project and Portfolio Management at HP.

Welcome to you all.

My first question goes to Helen. What are the major trends driving the need for DCT? Also, why is now such a good time to embark on such projects?

Helen Tang: We all know that in this day and age, the business demands innovation, and IT is really the engine for any business. However, there are a lot of external constraints. The economy is not getting any better. Budgets are very, very tight. IT organizations are dealing with sprawl and aging infrastructure, and are very much weighed down by the decades of old assets they've inherited.

So a lot of companies today have been looking to transform, but getting started is not always easy. That's why HP decided to launch The HUB, which is designed to be a resource for IT, featuring a virtual library of videos that showcase the best of HP and, more importantly, ideas for how to address these challenges. We as a team decided to tackle it with a series aligned around some of the ways customers can approach their data centers, transform them, and jump-start their IT agility.

The five steps that we decided on as the keys for the series are: the planning process, which is actually what we're discussing in this podcast; data center consolidation and standardization; virtualization; data center automation; and last but not least, of course, security.

IT superheroes


To make this video series more engaging, we hit on this idea of IT as superheroes, because, especially in this day and age of lean budgets, we've all seen customers whose IT teams are performing superhuman feats.

We thought we'd produce a series that's a bit more light-hearted than is usual for HP. So we added a superhero angle to the series. That's how we hit upon the name "IT Superhero Secrets: Five Steps to Jump Start Your IT Agility." Hopefully, this will be one of the little things that contributes to the great process of data center modernization happening right now, which is a key trend.

With us today are two of these experts that we’re going to feature in Episode 1. And to find these videos, you go to hp.com/go/thehub.

Gardner: Now we’re going to go to Mark Grindle. Mark, you've been doing this for quite some time and have learned a lot along the way. Tell us why having a solid understanding of where you are in the present puts you in a position to better execute on your plans for the future.

Mark Grindle: Thank you, Dana. There certainly are a lot of great reasons to start transformation now.

But as you said, the key to starting any kind of major initiative is to understand where you are today, whether that initiative is transformation, data center consolidation, or any of these great things like virtualization or a technology refresh that will help you improve your environment, improve the service to your customers, and reduce costs, which is what this is all about.

Most companies out there, given the economic pressures and technology changes that have gone on, have done a lot to go after the proverbial low-hanging fruit. But now it's important to understand where you are today, so that you can build the right plan for maximizing value as quickly and as effectively as possible.

When we talk about understanding where you are today, there are a few things that jump to mind. How many servers do I have? How much storage do I have? What are the operating system levels and the versions that I'm at? How many desktops do I have? People really think about that kind of physical inventory and they try to manage it. They try to understand it, sometimes more successfully and other times less successfully.

But there's a lot more to understanding where you are today. That physical inventory is a critical part of what you need to know to go forward, and most people already have a lot of tools out there to capture it. For those of you who don't have tools that can capture that physical inventory, it's important that you get them.

I've found so many times when I go into environments that people think they have a good understanding of what they have physically, and a lot of times they come close, but rarely is that picture accurate. Manual processes just can't keep things as accurate or as current as you really need when you start trying to baseline your environment so that you can track and measure your progress and value.

Thinking about applications


Of course, beyond the physical portions of your inventory, you'd better start thinking about your applications. What are your applications? What languages are they written in? Are they traditional, supportable, commercial-off-the-shelf (COTS) applications? Are they homegrown? That's going to make a big difference in how you move forward.

And of course, what does your financial landscape look like? What's going into operating expense? What's your capital expense? How is it allocated out, and, by the way, is it allocated out consistently?

I've run into a lot of issues where a business unit in the United States has put certain items into an operating expense bucket. In another country or a sub-business unit or another business unit, they're tracking things differently in where they put network cost or where they put people cost or where they put services. So it's not only important to understand where your money is allocated, but what’s in those buckets, so that you can track the progress.
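As a rough illustration of that normalization step, here is a minimal Python sketch, with made-up bucket names and amounts, that maps each business unit's local expense categories onto one standard set so spend can be compared and tracked consistently.

    # Hypothetical example: map each unit's local expense buckets onto one
    # standard set of categories so spend can be compared across units.
    STANDARD_MAP = {
        "network":  {"network", "telecom", "wan/lan"},
        "people":   {"people", "staff", "labor", "contractors"},
        "services": {"services", "outsourcing", "managed services"},
    }

    def normalize(local_category):
        """Return the standard bucket for a unit's local category name."""
        name = local_category.strip().lower()
        for standard, aliases in STANDARD_MAP.items():
            if name in aliases:
                return standard
        return "unmapped"   # flag anything that needs a manual decision

    # Two units reporting the same kinds of spend under different labels.
    us_unit = [("Telecom", 120000), ("Contractors", 80000)]
    eu_unit = [("WAN/LAN", 95000), ("Staff", 70000)]

    totals = {}
    for label, amount in us_unit + eu_unit:
        bucket = normalize(label)
        totals[bucket] = totals.get(bucket, 0) + amount
    print(totals)   # {'network': 215000, 'people': 150000}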

Then you get into things like people. As you start looking at transformation, a big part of it is not just the cost savings that may come from being able to redeploy your people, but also making sure that you have the right skill set.

If you don’t really understand how many people you have today, what roles and what functions they’re performing, it's going to become really challenging to understand what kind of retraining, reeducation, or redeployment you’re going to do in the future as the needs and the requirements and the skills change.

As you transform, leveling out your application landscape, consolidating your databases, virtualizing your servers, and taking advantage of all of those great technologies, that's going to make a big difference in how your team, your IT organization, runs the operations. You really need to understand where they are, so you can properly prepare them for that future state they want to get into.

So understanding where you are, and understanding all of those aspects of it, is the only way to know what you have to do to get to that future state. As was mentioned earlier, you need the metrics and measurements to track your progress. Are you realizing the value, the savings, the benefit to your company that you initially used to justify the transformation?

Gardner: Mark, I had a thought when you were talking. We’re not just going from physical to physical. A lot of DCT projects now are making that leap from largely physical to increasingly virtual. And that is across many different aspects of virtualization, not just server virtualization.

Is there a specific requirement to know your physical landscape better to make that leap successfully? Is there anything about moving toward a more virtualized future that places added emphasis on this need to have a really strong sense of your present state?

Grindle: You're absolutely right on with that. A lot of people have server counts -- I've got a thousand of these, a hundred of those, 50 of those types of things. But understanding the more detailed measurements around those, how much memory is being utilized by each server, how much CPU or processor is being utilized by each server, what do the I/Os look like, the network connectivity, are the kind of inventory items that are going to allow you to virtualize.

Higher virtualization ratios


I talk to people and they say, "I've got a 5:1 or a 10:1 or a 15:1 virtualization ratio," meaning that you take 15 physical servers and consolidate them down to one. But if you really understand what your environment is today, how it runs, and what its performance characteristics are, there are environments out there achieving much higher virtualization ratios -- 30:1, 40:1, 50:1. We've seen a couple that are in the 60:1 and 70:1 range.

Of course, that just says that initially they weren’t really using their assets as well as they could have been. But again, it comes back to understanding your baseline, which allows you to plan out what your end state is going to look like.

If you don’t have that data, if you don’t have that information, naturally you've got to be a little more conservative in your solutions, as you don’t want to negatively impact the business of the customers. If you understand a little bit better, you can achieve greater savings, greater benefits.

Remember, this is all about freeing up money that your business can use elsewhere to help your business grow, to provide better service to those customers, and to make IT more of a partner, rather than just a service purely for the business organization.
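As a back-of-the-envelope illustration of how that kind of utilization baseline feeds a consolidation estimate, here is a minimal Python sketch with invented numbers. Real sizing would also account for I/O, network, peak loads, and failover headroom.

    # Hypothetical capacity sketch: estimate a virtualization ratio from
    # measured utilization instead of raw server counts.
    servers = [
        # (average CPU cores actually used, average memory GB actually used)
        (0.8, 3.0), (1.2, 4.5), (0.5, 2.0), (2.0, 6.0), (0.7, 2.5),
    ]

    # Target virtualization host and the share of it we allow ourselves to use.
    host_cores, host_mem_gb, usable_fraction = 32, 256, 0.75

    avg_cpu = sum(c for c, _ in servers) / len(servers)
    avg_mem = sum(m for _, m in servers) / len(servers)

    by_cpu = (host_cores * usable_fraction) / avg_cpu
    by_mem = (host_mem_gb * usable_fraction) / avg_mem
    ratio = int(min(by_cpu, by_mem))   # the tighter resource sets the ratio

    print(f"Average workload: {avg_cpu:.2f} cores, {avg_mem:.1f} GB")
    print(f"Estimated consolidation ratio: {ratio}:1")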

Gardner: So it sounds as if measuring your current state isn’t just measuring what you have, but measuring some of the components and services you have physically in order to be able to move meaningfully and efficiently to virtualization. It’s really a different way to measure things, isn’t it?

Grindle: Absolutely. And it's not a one-time event. To start out and figure out whether transformation is right for you and what your transformation will look like, you can do that one-time inventory, that one-time collection of performance information. But it's really going to be an ongoing process.

The more data you have, the better you’re going to be able to figure out your end-state solution, and the more benefit you’re going to achieve out of that end state. Plus, as I mentioned earlier, the environment changes, and you’ve got to constantly keep on top of it and track it.

You mentioned that a lot of people are going towards virtualization. That becomes an even bigger problem. At least when you’re standing up a physical server today, people complain about how long it takes in a lot of organizations, but there are a lot of checks and balances. You’ve got to order that physical hardware. You've got to install the hardware. You’ve got to justify it. It's got to be loaded up with software. It’s got to be connected to the network.

A virtualized environment can be stood up in minutes. So if you’re not tracking that on an ongoing basis, that's even worse.
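A minimal sketch of that kind of ongoing tracking, assuming you can already export a list of VMs from whatever inventory tool you use, might simply diff today's inventory against the baseline captured when the plan was drawn up and flag anything that has appeared or disappeared.

    # Hypothetical drift check: compare the current VM inventory against the
    # baseline captured when the transformation plan was drawn up.
    baseline = {"app01", "app02", "db01", "web01"}
    current  = {"app01", "db01", "web01", "web02", "test-temp-01"}

    added   = current - baseline    # stood up since the baseline
    removed = baseline - current    # retired or decommissioned

    if added or removed:
        print("Inventory has drifted from the plan:")
        for vm in sorted(added):
            print(f"  new VM: {vm}")
        for vm in sorted(removed):
            print(f"  missing VM: {vm}")
    else:
        print("Inventory matches the baseline.")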

Gardner: Let's now go to Bruce Randall. Bruce, you've been looking at the need to be flexible in order to be successful, even when you have a long-term roadmap ahead of you. Perhaps you could fill us in on why it's important to evaluate as you go, not be blinded by long-term goals, and keep balancing and reassessing along the way?

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Account for changes

Bruce Randall: That goes along with what Mark was just saying about the infrastructure components, how these things are constantly changing, and there has to be a process to account for all of the changes that occur.

If you're looking at a transformation process, it really is a process. It's not a one-time event; it occurs over a length of time. Just like any other big program or project that you may be managing, you have to plan not only at the beginning of that transformation, but also in the middle, and sometimes even at the end of these big transformation projects.

If you think about the things that may change throughout that transformation, one is people. You have people who come. You have people who leave, for whatever reason. You have people who are reassigned to other roles or who take roles they wanted outside of the transformation project. The company strategy may even change and, in this economy, most likely will within the course of the transformation project.

The money situation will most likely change. Maybe you had a certain amount of budget when you started the transformation. You counted on being able to use all of that budget, and then things change. Maybe it goes up. Maybe it goes down, but most likely, things do change. The infrastructure, as Mark pointed out, is constantly in flux.

So even though you might have gotten a good steady state of what the infrastructure looked like when you started your transformation project, that does change as well. And then there's the application portfolio. As we continue to run the business, we continue to add to or enhance existing applications. The application portfolio changes, and therefore so do the needs within the transformation.

Because of all of these changes occurring around you, there's a need not only to plan for contingencies at the beginning of the process, but also to continue the planning process and update it as things change. What I've found over time, Dana, with various customers doing these transformation projects is that planning isn't something you do just at the beginning, or just in the middle, or at any one point. When it's continuous, the planning process goes a lot better and becomes a lot easier.

In fact, I was speaking with a customer the other day. We went to a baseball game together. It was a customer event, and I was surprised to see this particular customer there, because I knew their yearly planning cycle was going on. I asked them about that, and they talked about the way they had used our tools. The HP tool sets they used had allowed them to literally do planning all the time. So they could attend a baseball game instead of attending the planning fire drill.

So it wasn’t a one-time event, and even if the business wanted a yearly planning view, they were able to produce that very, very easily, because they kept their current state and current plans up to date throughout the process.

Gardner: This reminds me that we've spoken in the past, Bruce, about software development. Successful software development for a lot of folks now involves agile principles. There are these things they call scrum meetings, where people get together and they're constantly reevaluating or adjusting, getting inputs from the team.

Having just a roadmap and sticking to it rigidly turns out not to be business as usual, but can actually be a path to disaster. Any thoughts about learning from how software is developed when it comes to planning for a large project like a DCT?

A lot of similarities

Randall: Absolutely. There are a lot of similarities between the new agile methodologies and what I was just describing in terms of planning at the beginning, in the middle, and at the end, basically constantly. And when I say the word "plan," I know that evokes for some people the thought of a lot of work, a big undertaking. In reality, what I'm talking about is much smaller than that.

If you're doing it frequently, the planning needs to be a lot smaller. It's not a huge, involved process. It's very much like the agile methodology, where you're consistently doing little pieces of work, finishing up sub-segments of the entire thing you need to do, as opposed to describing it all up front, having all your requirements written out at the beginning, and then waiting for it to get done sometime later.

You're actually adapting and changing as things occur. What's important in the agile methodology, as well as in the transformation planning process I've been talking about, is that you still have to give management visibility into what's going on.

Having a planning process, and even a tool set to help you manage that planning process, will also give management the visibility they need into the status of that transformation project. The planning process, like the agile development methodology, also allows collaboration. As you go back to the plan, readdress it, and think about the changes that have occurred, you're collaborating across various groups and silos to make sure that you're still in tune and still doing the things you need to do to make things happen.

One other thing that is often forgotten within the agile development methodology, but is still very important, particularly for transformation, is the ability to track the cost of that transformation at any given point in time. Maybe that's because the budget needs to be increased, or maybe it's because you're getting an executive mandate that the budget will be decreased, but at least knowing what your costs are and how much you've spent is very, very important.

Gardner: When you say that, it reminds me of something 20 years or more ago in manufacturing, the whole quality revolution, thought leaders like Deming and the Japanese Kaizen concept of constantly measuring, constantly evaluating, not letting things slip. Is there some relationship here to what you’re doing in project management to what we saw during this “quality revolution” several decades ago?

Randall: Absolutely. You see some of the tenets of project management there. Number one, you're tracking what's going on. You're measuring what's going on at every point in time, not only the costs and the time frames, but also the people who are involved. Who's doing what? Are they fulfilling the tasks we've asked them to do, and so on. This produces, in the end, just as Deming and others have described, a much higher quality transformation than if you were to haphazardly attempt the transformation without having a project management tool in place, for example.

Gardner: So we've discussed some of these major pillars of good methodological structure and planning for DCT. How do you get started? Are there resources available to get folks better acquainted with these before they even get into a full-fledged DCT, so they can begin putting measurements in place, knowing their current state, and creating a planning process that's flexible and dynamic? What resources are available? I'll open this up to the entire panel.

Randall: One thing I would start with is the multiple resources from HP and others that help customers in their transformation process, both to plan out initially what that transformation is going to look like and then to provide a set of tools to automate and manage that program and the changes that occur to it over time.

That planning is important, as we've talked about, because it occurs at multiple stages throughout the cycle. If you have an automated system in place, it certainly makes it easier to track the plan and changes to that plan over time.

Gardner: And then you've created this video series. You also have a number of workshops. Are those happening fairly regularly at different locations around the globe? How can folks get access to the workshops to start in on this?

A lot of tools


Grindle: We do have a lot of tools, as I was mentioning. One of the ones I want to highlight is the Data Center Transformation Experience Workshop, and the reason I want to highlight it is that it really ties into what we've been talking about today. It's an interactive session involving large panels, with very minimal presentation and very minimal speaking by the HP facilitators.

We walk people through all the aspects of transformation, and this is targeted at a strategic level. We're looking at the CIOs, CTOs, and executive decision makers who want to understand why HP did what it did as far as transformation goes.

We discuss what we've seen out in the industry and what the current trends are, and we pull out of the conversation where their companies are today. At the end of the workshop, and it's a full-day workshop, a lot of materials are delivered that not only document the discussions throughout the day, but also provide a step, or steps, for how to proceed.

So it's a prioritization. Your facilities, for example, might be in great shape, but your data warehouses are not. That's an area you should go after fast, because there's a lot of value in changing it, and it's going to take you a long time. Or there may be a quick hit in your organization and the way you manage your operation, because we cover all the aspects of program management, governance, and management of change; for a lot of people, that's the organizational change piece. On the technology side, we can help them understand not only where they are, but what the initial strategy and plan should be.

You brought up a little bit earlier, Dana, some of the quality people like Deming, etc. We’ve got to remember that transformation is really a journey. There's a lot you can accomplish very rapidly. We always say that the faster you can achieve transformation, the faster you can realize value and the business can get back to leveraging that value, but transformation never ends. There's always more to do. So it's very analogous to the continuous improvement that comes out of some of the quality people that you mentioned earlier.

Gardner: I'm curious about these workshops. Are they happening relatively frequently? Do they happen in different regions of the globe? Where can you go specifically to learn about where the one for you might be next?

Grindle: The workshops are scheduled with companies individually. So a good touch point would be your HP account manager. He or she can work with you to schedule a workshop and understand how it can be done. They're scheduled as needed.

We do hold hundreds of them around the world every year. It's been a great workshop. People find it very successful, because it really helps them understand how to approach this and how to get the right momentum within their company to achieve transformation. There are also a lot of materials on our website.

Gardner: You've been listening to a sponsored BriefingsDirect podcast discussion on two major pillars of proper and successful DCT projects, knowing your true state to start and then also being flexible on the path to long-term milestones and goals.

I’d like to thank our panel, Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business; Mark Grindle, Master Business Consultant at HP, and Bruce Randall, Director of Product Marketing for Project and Portfolio Management at HP. Thank you to you all.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to our audience for listening, and do come back next time.

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Transcript of a sponsored podcast in conjunction with an HP video series on how companies can transform data centers productively and efficiently. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Monday, October 17, 2011

VMworld Case Study: City of Fairfield Uses Virtualization to More Efficiently Deliver Crucial City Services

Transcript of a BriefingsDirect podcast from the VMworld 2011 conference on how one city in California has gained cost and efficiency benefits from virtualization.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the VMworld 2011 Conference. We're here to explore the latest in cloud computing and virtualization infrastructure developments.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host throughout this series of VMware-sponsored BriefingsDirect discussions.

Our next VMware case study interview focuses on the City of Fairfield, California, and how the IT organization there has leveraged virtualization and cloud-delivered applications to provide new levels of service in an increasingly efficient manner.

We’ll see how Fairfield, a mid-sized city of 110,000 in Northern California, has taken the do-more-with-less adage to its fullest, beginning interestingly with core and mission-critical city services applications.

Please join me now in welcoming Eudora Sindicic, Senior IT Analyst Over Operations in Fairfield. Welcome to the show, Eudora.

Eudora Sindicic: Thank you very much.

Gardner: I'm really curious, why did you choose to move forward with virtualization on your core applications, mission-critical level applications, things like police support and fire department support? What made you so confident that those were the right apps to go with?

Sindicic: First of all, disaster recovery and business continuity have always been challenging. Keeping those things in mind, our CAD/RMS systems for the police center and our fire staffing system were high on the list for protection. Those are Tier 1 applications that we want to be able to recover very quickly.

We thought the best way to do that was to virtualize them and set us up for future business continuity and true failover and disaster recovery.

So I put it to my CIO, and he okayed it. We went forward with VMware, because we saw they had the best, most robust, and most mature applications to support us. Seeing that our back end was SQL for those two systems, and that we were just about to embark on a brand-new upgrade of our CAD/RMS system, this was a prime time to jump on the bandwagon and do it.

Also, with our back-end storage being NetApp, and NetApp having such an intimate relationship with VMware, we decided to go with VMware.

Gardner: And how has that worked out?

Snapshotting abilities

Sindicic: It's been wonderful. We've had wonderful disaster recovery capabilities. We have snapshotting abilities. I'm snapshotting the primary database server and application server, which allows us to keep snapshots for up to three weeks on primary storage and six months on secondary storage, which is really nice, and it has served us well.

We already had a fire drill, where one report was accidentally deleted out of a database due to someone doing something -- and I'll leave it at that. Within 10 minutes, I was able to bring up the snapshot of the records management system of that database.

The user was able to go into the test database, retrieve his document, and then he was able to print it. I was able to export that document and then re-import it into the production system. So there was no downtime. It literally took 10 minutes, and everybody was happy.
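The retention scheme Eudora describes, roughly three weeks on primary storage and six months on secondary, can be sketched as a simple age check. This is only an illustration of the policy in Python, not the actual NetApp or VMware configuration, and the dates are made up.

    # Illustrative retention check: keep snapshots up to about three weeks on
    # primary storage and about six months on secondary storage.
    from datetime import date, timedelta

    PRIMARY_WINDOW   = timedelta(weeks=3)
    SECONDARY_WINDOW = timedelta(days=182)

    def placement(snapshot_date, today):
        age = today - snapshot_date
        if age <= PRIMARY_WINDOW:
            return "primary"      # fast restores, like the 10-minute recovery
        if age <= SECONDARY_WINDOW:
            return "secondary"
        return "expired"

    today = date(2011, 10, 28)
    print(placement(date(2011, 10, 20), today))   # primary
    print(placement(date(2011, 7, 1), today))     # secondary
    print(placement(date(2011, 3, 1), today))     # expired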

Gardner: So you were able to accomplish your virtualization and also gain that disaster recovery and business continuity benefit, but you pointed out that time was of the essence. How long did it take you, and was that ahead of schedule or behind schedule? How did that affect you in terms of timing?

Sindicic: Back in early fiscal year 2010, I started doing all the research. I probably did a good nine months of research before even bringing this option to my CIO. Once I brought the option up, I worked with my vendors, VMware and NetApp, to obtain best pricing for the solution that I wanted.

I started implementation in October and completed the process in March. So it took some time. Then we went live with our CAD/RMS system on May 10, and it has been very robust and running beautifully ever since.

Gardner: Tell me about your apparatus, your IT operations, the number of servers, the level of virtualization that you’re using. Then, we’d like to hear about some of the additional apps you may be bringing on or have brought on.

Sindicic: I have our finance system, an Oracle-based system, which consists of an Oracle database server, an Apache application server, and another reporting server that runs on a different platform. Those will all be virtual OSs sitting in one of my two clusters.

For the police systems, I have a separate cluster just for police and fire. Then, for the regular day-to-day business, like finance and other applications that the city uses, I have a campus cluster. That keeps those things separated and also reduces downtime from maintenance, so everything doesn't have to be affected if I'm moving virtual servers among systems, patching, and doing updates.

Other applications

We're also going to be virtualizing several other applications, such as a citizen complaint application called Coplogic. We're going to be putting that into the PD cluster as well.

The version of VMware that we're using is 4.1, with ESXi servers. On the PD cluster, I have two ESXi servers, and on the campus cluster, I have three. I'm using vSphere 4, and it's been really wonderful having that level of control.

Also, within my vSphere vCenter server, I've installed a number of NetApp storage control tools that give me centralized control over snapshotting and replication. So I can control it all from there. Then vSphere gives me that beautiful centralized view of all my VMs and the resources being consumed.

It's been really wonderful to have that level of view into my infrastructure. When things were distributed, I didn't have the view that I needed; I'd have to connect one by one to each of my systems to get that level of detail.

Also, there are some things that we've learned along the way. I went from two VLANs to four VLANs. When you look at your traffic and the type of traffic that's going to traverse the VLANs, you want to segregate that out big time, and you'll see a huge increase in your performance.

The other thing is making sure that you have the correct type of drives in your storage. I knew right off the bat that IOPS was going to be an issue and then, of course, connectivity. We're using Brocade switches to connect to the back-end Fibre Channel drives for the server VMs, and for lower-end storage, we're using iSCSI.

Gardner: I know you're only a few months into this in terms of being in full production, but in addition to getting some of these benefits around view and analytics into the operations, do you have any metrics of success in terms of lowering the total cost of doing this vis-à-vis your previous physical and distributed approach?

Sindicic: We are seeing cost benefits now. I don’t have all the metrics, but we’ve spun up six additional VMs. If you figure out the cost of the Dells, because we are a Dell shop, it would cost anywhere between $5,000 and $11,000 per server. On top of that, you're talking about the cost of the Microsoft Software Assurance for that operating system. That has saved a lot of money right there in some of the projects that we’re currently embarking on, and for the future.

We have several more systems that I know are going to be coming online and we're going to save in cost. We’re going to save in power. Power consumption, I'm projecting, will slowly go down over time as we add to our VM environment.

As it grows and it becomes more robust, and it will, I'm looking forward to a large cost savings over a 5- to 10-year period.
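Using the figures quoted above, the avoided hardware spend alone is easy to bound. The Software Assurance number in this Python sketch is a placeholder, since no per-server figure is given in the interview.

    # Rough cost-avoidance bound from the figures quoted above.
    vms_avoided      = 6        # physical servers not purchased
    server_cost_low  = 5000     # quoted Dell range, low end
    server_cost_high = 11000    # quoted Dell range, high end
    sw_assurance     = 800      # placeholder per-server figure, not quoted

    low  = vms_avoided * (server_cost_low + sw_assurance)
    high = vms_avoided * (server_cost_high + sw_assurance)
    print(f"Avoided spend: ${low:,} to ${high:,}")   # $34,800 to $70,800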

Better insight

Gardner: So we’ve seen that you've been able to maintain your mission-critical performance and requirements for these applications. You were able to get better insight into these operations. You were able to cut your costs. And now you’ve set yourself up for being able to extend that value into other applications.

Was there anything that surprised you that you didn’t expect, when you moved from the physical to the virtualized environment?

Sindicic: I was pleasantly surprised, as I said, with the depth of reporting I could actually see, the graphs and the actual metrics, as we went along. As our CAD system came online into production, I could actually see utilization go up and to what level.

I was pleasantly surprised to be able to see when the backups would occur and how they would affect the system and the users on it. Because of that, we were able to time them for the least-used hours, and to see what those hours actually were. I could tell from the system when it was least used.

It was real time, and it was really wonderful to be able to do that easily, without having to manually create all the different tracking that you'd have to set up within Microsoft monitoring tools or anything like that. I could do it completely independently of the OS.
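As a minimal illustration of how that kind of utilization data can drive the backup window, the Python sketch below, with invented hourly numbers, simply picks the quietest hour of the day.

    # Hypothetical example: pick the quietest hour from average utilization
    # samples so backups land in the least-used window.
    hourly_cpu_percent = {
        0: 12, 1: 9, 2: 7, 3: 6, 4: 8, 5: 14, 6: 25, 7: 40,
        8: 62, 9: 70, 10: 73, 11: 68, 12: 55, 13: 66, 14: 71,
        15: 69, 16: 60, 17: 45, 18: 33, 19: 28, 20: 24, 21: 20,
        22: 17, 23: 14,
    }

    quietest = min(hourly_cpu_percent, key=hourly_cpu_percent.get)
    print(f"Schedule backups around {quietest:02d}:00 "
          f"({hourly_cpu_percent[quietest]}% average utilization)")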

Gardner: So better control and management, and therefore efficiency, being able to decide when things should happen in a more efficient manner. Given the fact that you're a public organization, have compliance or regulatory issues crept in, and has this been beneficial there?

Sindicic: Regulatory and compliance issues are going to creep in. I see that in the future with some of our applications as they roll into the virtual environment. We're going to have some compliance issues, mostly around encryption and data control, which I really don't foresee being a problem with VMware.

VMware also has a lot of hardening information that I'm going to be using, not only to harden the OS, but also to encrypt the VMs. So I'm looking forward to doing that.

Gardner: Of course, you're also in the public service business, and you have to provide for your users, the people who are in turn supporting the community and the public at large. So how has this gone?

Sindicic: Our biggest are our CAD and RMS systems. This is an application that is used on the laptops in all of the squad cars. And so far, so good. Everybody seems to be really happy. The application is very responsive, and there haven't been a lot of issues when it comes to connectivity and response times, all the way down to the unit. So it's been really nice.

Gardner: That's the right effect I suppose, the right response. We're hearing a lot here at VMworld about desktop virtualization as well. I don’t know whether you’ve looked at that, but it seems like you've set yourself up for moving in that direction. Any thoughts about mobile or virtualized desktops as a future direction for you?

On the horizon

Sindicic: I see that most definitely on the horizon. Right now, the only things hindering us are cost and storage. But as storage prices come down, and as more robust storage technologies such as solid state mature and get cheaper, I foresee that definitely coming into our environment.

Even here at the conference I'm taking a bunch of VDI and VMware View sessions, and I'm looking forward to hopefully starting a new project with virtualizing at the desktop level.

This will give us much more granular control over not only what's on the user's desktop, but also patch management, malware protection, and virus protection, doing it at the host level instead of at the PC level, which would be wonderful. It would give us really great control and hopefully decreased cost. We'd probably be using a different product than what we're using right now.

If you're doing virus protection at the host level, you're going to get a lot of bang for your buck, and you won't have any impact on the PC-over-IP. That's probably the way we'll go, with PC-over-IP.

Right now, storage, VLANing, all of that has to happen before we can even embark on something like that. So there's still a lot of research going on on my part, as well as finding ways to mitigate costs, maybe trading in one thing to gain something else. There are things that you can do to help make something like this happen.

Gardner: It certainly sounds like the more you’re able to learn and develop competency and implementation experience, the more you can then take advantage of some of the other efficiencies and it's almost as if there is a sort of a snowball effect here around productivity. Is that a fair characterization?

Sindicic: Most definitely. Number one, in city government, our IT infrastructure continues to grow as people are laid off and departments want to automate more and more processes, which is the right way to go. The IT staff remains the same, but the infrastructure, the data, and the support continue to grow. So I'm trying to implement infrastructure that grows smarter, so we don't have to work harder, but can work smarter and do a lot more with less.

VMware certainly allows that, with centralized control and management, the ability to dynamically update virtual desktops and virtual servers, and the patch management and automation around that. You can take it to whatever level of automation you want, or somewhere in between, so that you can do a bit of checks and balances with your own eyes before the system goes off and does something itself.

Also, the high availability and fault tolerance that VMware provides have been invaluable. If one of my systems goes down, my VMs will automatically be migrated over, which is a wonderful thing. We're looking to implement as much virtualization as the budget will allow.

Gardner: So fewer of those late-night calls? That's important. It's really been impressive to hear what you've been able to do as a small-to-medium-sized organization on a tight budget. So congratulations on that.

Sindicic: Thank you very much.

Gardner: We’ve been talking about leveraging virtualization and cloud-delivered applications to provide higher levels of service in an increasingly efficient manner especially for core applications.

Please join me in thanking our guest, Eudora Sindicic, Senior IT Analyst Over Operations at Fairfield, California, a city of about 110,000 people. Thanks so much, Eudora.

Sindicic: Thank you.

Gardner: Thanks to our audience for joining this special podcast coming to you from the 2011 VMworld Conference.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast from the VMworld 2011 conference on how one city in California has gained cost and efficiency benefits from virtualization. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Wednesday, October 12, 2011

As Cloud and Mobile Trends Drive User Expectations Higher, Networks Must Now Deliver Applications Faster, Safer, Cheaper

Transcript of a sponsored podcast discussion on how networks services must support growing application and media delivery demands.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Learn more. Sponsor: Akamai Technologies.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today we present a sponsored podcast discussion on how the major IT trends of the day -- from mobile to cloud to app stores -- are changing the expectations we all have from our networks.

We hear about the post-PC era, but rarely does anyone talk about the post-LAN or even the post-WAN era. How are the networks of yesterday going to support the applications and media delivery requirements of tomorrow?

It’s increasingly clear that more users will be using more devices to access more types of content and services. They want coordination among those devices for that content. They want it done securely with privacy, and they want their IT departments to support all of their devices for all of their work applications and data too.

From the IT managers' perspective, they want to be able to deliver all kinds of applications using all sorts of models, from smartphones to tablets to zero clients to web streaming to fat-client downloads and website delivery, across multiple public and private networks, with control and with ease.

This is all a very tall order, and networks will need to adjust rapidly or the latency and hassle of access and performance issues will get in the way of users, their new expectations, and their behaviors -- for both work and play.

We're here today with an executive from Akamai Technologies to delve into the rapidly evolving trends and subsequently heightened expectations that we're all developing around our networks. We're going to look at how those networks might actually rise to the task.

Please join me in welcoming Neil Cohen, Vice President of Product Marketing at Akamai Technologies. Welcome to BriefingsDirect, Neil. [Disclosure: Akamai is a sponsor of BriefingsDirect podcasts.]

Neil Cohen: Hi, Dana. Happy to be here.

Gardner: So Neil, given these heightened expectations -- this always-on, hyper connectivity mode -- how are networks going to rise to this? Are they maybe even at the risk of becoming the weak link in how we progress?

Change is needed

Cohen: Nobody wants the network to be the weak link, but changes definitely need to happen. Look at what’s going on in the enterprise and the way applications are being deployed. It’s changing to where they're moving out to the cloud. Applications that used to reside in your own infrastructure are moving out to other infrastructure, and in some cases, you don’t have the ability to place any sort of technology to optimize the WAN out in the cloud.

Mobile device usage is exploding. Things like smartphones and tablets are all becoming intertwined with the way people want to access their applications. Obviously, when you start opening up more applications through access to the internet, you have a new level of security that you have to worry about when things move outside of your firewall that used to be within it.

Gardner: One of the things that's interesting to me is that there are so many different networks involved with an end-to-end services lifecycle now. We think about mobile and cloud, and we don’t have one administrator to go to, one throat to choke, as it were. How do people approach this problem when there are multiple networks, and how do you know where the weak link is, when there is a problem?

Cohen: The first step is to understand just what many networks actually mean, because even that has a lot of different dimensions to it. The fact that things are moving out to public clouds means that users are getting access, usually over the internet. We all know that the internet is very different than your private network. Nobody is going to give you a service-level agreement (SLA) on the internet.

Something like mobile is different, where you have mobile networks that have different attributes, different levels of oversubscription, and different bottlenecks that need to be solved. This really starts driving the need, first, to bring control over the internet itself, as well as over the mobile networks.

And second, it drives the importance of performance analytics from a real end-user perspective. It becomes important to look at all the different choke points at which latency can occur and to be able to bring it all into a holistic view, so that you can troubleshoot and understand where your problems are.
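One simple way to see where that latency accumulates from the end user's side is to time each stage of a request separately. Here is a rough sketch using only the Python standard library and a placeholder hostname; a real end-user monitoring service would of course capture far more than this.

    # Rough end-user timing sketch: break one request into DNS, TCP connect,
    # and first-byte stages to see where latency accumulates.
    import socket, time

    host, port, path = "www.example.com", 80, "/"   # placeholder target

    t0 = time.time()
    addr = socket.gethostbyname(host)                            # DNS lookup
    t1 = time.time()

    sock = socket.create_connection((addr, port), timeout=10)    # TCP connect
    t2 = time.time()

    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode())
    sock.recv(1)                                                 # first byte back
    t3 = time.time()
    sock.close()

    print(f"DNS: {(t1 - t0) * 1000:.0f} ms, "
          f"connect: {(t2 - t1) * 1000:.0f} ms, "
          f"first byte: {(t3 - t2) * 1000:.0f} ms")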

Gardner: This is something we all grapple with. Occasionally, we’ll be using our smartphones or tablets and performance issues will kick in. I don’t have a clue where that weak link is on that spectrum of my device back to some data center somewhere. Is there some way that the network adapts? Is there a technology approach to this? We all want to attack it, but just briefly from a technological perspective, how can this end-to-end solution start to come together?

Cohen: There are a lot of different things that people are looking at to try to solve application delivery outside of the corporate network. Something we’ve been doing at Akamai for a long time is deploying our own optimization protocols into the internet that give you the control, the SLA, the types of quality of service that you normally associate with your private network.

And there are lots of optimization tricks that are being done for mobile devices, where you can optimize the network. You can optimize the web content and you can actually develop different formats and different content for mobile devices than for regular desktop devices. All of those are different ways to try to deal with the performance challenges off the traditional WAN.

Gardner: It's my sense that the IT folks inside enterprises are looking to get out of this business. There's been a tendency to bake more network services into their infrastructure, but I think as that edge of the enterprise moves outward, almost to infinity at this point, with so many different screens per user, they probably want to outsource this as well. Do you sense that that's the case, and are the carriers stepping up to the plate and saying, "We're going to take over more of this network performance issue"?

Cohen: I think they're looking at it and saying, "Look, I have a problem. My network is evolving. It's spanning in lots of different ways, whether it's on my private network or out on the internet or mobile devices," and they need to solve that problem. One way of solving it is to build hardware and do lots of different do-it-yourself approaches to try to solve that.

Unwieldy approach

I agree with you, Dana. That’s a very unwieldy approach. It requires a lot of dollars and arguably doesn’t solve the problem very well, which is why companies look for managed services and ways to outsource those types of problems, when things move off of their WAN.

But at the same time, even though they're outsourcing it, they still want control. It's important for an IT department to actually see what traffic and what applications are being accessed by the users, so that they understand the traffic and they can react to it.

Gardner: At the same time, I'm seeing a rather impressive adoption pattern around virtualized desktop activities, and there's a variety of ways of doing this. We've seen solutions from folks like Citrix and Microsoft, and we're seeing streaming, zero-client, thin-client, and virtual-desktop activities, such as virtual desktop infrastructure in the data center, a pure delivery of the full desktop and the applications as a service.

These are all different characterizations I suppose of a problem on the network. That is to say that there are different network issues, different payloads, and different protocols and technology. So how does that fit into this? When we look at latency, it's not just latency of one kind of delivery or technology or model. It's multiple at the same time.

Cohen: You're correct. There are different, unique challenges with the virtual desktop models, but it also ties into that same hyper-connected theme. In order to really unleash the potential of virtual desktops, you don't only want to be able to access them on your corporate network; you want to get a local experience by taking that virtual desktop anywhere with you, just like you do with a regular machine. You're also seeing products offered out in the market that allow you to extend virtual desktops onto your mobile tablets.

You have the same kinds of issues again. Not only do you have different protocols to optimize for virtual desktops, but you have to deal with the same challenges of delivering them across that entire ecosystem of devices and networks. That's an area we're investing heavily in as it relates to unlocking the potential of VDI. People will have universal access and be able to take their desktops wherever they want to go.

Gardner: And is there some common thread to what we would think of in the past as acceleration services for things like websites, streaming, or downloads? Are we talking about an entirely new kind of infrastructure or is this some sort of a natural progression of what folks like Akamai have been doing for quite some time?

Cohen: It's a very logical extension of the technology we’ve built for more than a decade. If you look a decade ago, we had to solve the problem of delivering streaming video, real-time over the web, which is very sensitive to things like latency, packet loss, and jitter and that’s no different for virtual desktops. In order to give that local experience for virtual users, you have to solve the challenges of real-time communication back and forth between the client and the server.

Gardner: And these are fairly substantial issues. It seems to me that if you can solve these network issues, if you can outsource some of the performance concerns and develop a better set of security and privacy, I suppose backstops, then you can start to invest more in your data center consolidation efforts -- one datacenter for a global infrastructure perhaps.

You can start to leverage more outsource services like software as a service (SaaS) or cloud. You can transform your applications. Instead of being of an older platform or paradigm or model, you can start to go toward newer ones, perhaps start dabbling in things like HTML5.

If I were an architect in the enterprise, it seems to me that many of my long-term cost-performance improvement activities of major strategic initiatives are all hinging on solving this network problem.

So do you get that requirement, that request, from the CIO saying, "Listen, I'm betting my future on this network. What do I need to do? Who do I need to go to to make sure that doesn't become a real problem for me and doesn't make the dollars I spend more risky?"

Business transformation

Cohen: What I'm hearing is more of a business transformation example, where the business comes down and puts pressure on the network to be able to access applications anywhere, to be able to outsource, to be able to offshore, and to be able to modernize their applications. That’s really mandating a lot of the changes in the network itself.

The pressure is really coming from the business, which is, "How do I react more quickly to the changing needs of the business without having IT in a position where they say, 'I can't.' " The internet is the pervasive platform that allows you to get anywhere. What you need is the quality of service guarantees that should come with it.

Gardner: I suppose we’re seeing two things here. We’ve got the pressure from the business side, which is innovate, do better, and be agile. IT is also having to do more with less, which means they have to in many cases transform and re-engineer and re-architect.

So you have a lot of wind in your sails, right? There are a lot of people saying, I want to find somebody who can come to this network problem with some sort of a comprehensive solution, that one throat to choke. What do you tell them?

Cohen: I tell them to come to Akamai. If you can help transform a business and you can do it in a way that is operationally more efficient at a lower cost, you’ve got the winning combination.

Gardner: And this is also I suppose not just an Akamai play, but is really an ecosystem play, because we’re talking about working in coordination with cloud providers, with other technology suppliers and vendors. Tell me a little bit about how the ecosystem works and what it takes to create an end-to-end solution?

Cohen: In order to solve this problem as it relates to access anywhere and pervasive connectivity on any device, you definitely need to strike a bunch of partnerships. Given that Akamai's presence has been in the internet and the ISPs, the types of partnerships that are required involve getting footprints inside the corporate network, to be able to traverse what we call hybrid cloud networks -- corporate users inside the private network who need to reach out to public clouds, for example.

It requires partnerships with the cloud providers as well, so that people who are standing up new applications on infrastructure and platform as service environments have a seamless integrated experience. It also requires partnerships with other types of networks, like the mobile networks, as well as the service providers themselves.

Gardner: And looking at this from a traditional internet value proposition, tell us, for those who might not be that familiar with Akamai, what your legacy and your heritage is, and what some of the products are that you have now, so that we can start thinking about what we might look forward to in the future.

Cohen: Akamai has been in business for more than 12 years now. We help business innovators move forward with their Internet business models. A decade ago, that was really consumer driven. Most people were thinking about things like, "I've got this website. I'm doing some commerce. People want to watch video." That’s really changed in the last decade. Now, you see the internet transforming into enterprise use as well.

Akamai continues to offer the consumer-based services as it relates to improving websites and rich media on the web. But now we have a full suite of services that provide application acceleration over the internet. We allow you to reach users globally while consolidating your infrastructure and getting the same kind of benefits you realize with WAN optimization on your private network, but out over the internet.

Security services

And as those applications move outside of the firewall, we’ve got a suite of security services that address the new types of security threats you deal with when you’re out on the web.

Gardner: One of the other things that I hear in the marketplace is the need for data, more analysis, more understanding of what's really taking place. There's been sort of a black box, maybe several black boxes, inside of IT for the business leaders. They don't always understand what's going on in the data center, and I'm sure they understand even less about what's going on in the network.

Is there an opportunity at this juncture, as we look for network services that bridge across these networks and for value-added services at that larger network level outside the enterprise, to actually bring a better view of what's going on in these networks back to business leaders and IT leaders? Is there an analysis or business intelligence benefit from doing this as well?

Cohen: You’re absolutely right. What’s important is not only that you improve the delivery of an application, but that you have the appropriate insight in terms of how the application is performing and how people are using the application so that you can take action and react accordingly.

Just because something has moved out into the cloud or out on the Internet, it doesn’t mean that you can’t have the same kind of real-time personalized analytics that you expect on your private network. That’s an area we’ve invested in, both in our own technology investment, but also with some partnerships that provide real-time reporting and business intelligence in terms of our critical websites and applications.

Gardner: Is there something about the types of applications themselves that we should expect to change? We've had some paradigm shifts over the past 20 years: mainframe apps, then client-server apps, then n-tier apps and web apps, and now services orientation, where it's more of a services delivery model.

But are mobile and cloud, these megatrends we're seeing, fundamentally redefining applications? Are we seeing a different type of application delivery requirement?

Cohen: A lot of it is very similar, and that's the principle of the web. Websites are based on HTML, and with HTML5 the web is getting richer and more immersive, starting to approach the same kind of experience you get on your desktop.

What I expect to see is more adoption of standard web languages. It means that you need to use good semantic design principles, as it relates to the way you design your applications. But in terms of optimizing content and building for mobile devices and mobile specific sites, a lot of that is going to be using standard web languages that people are familiar with and that are just evolving and getting better.

Gardner: So maybe a way to rephrase that would be, not that the types of applications are changing, but is there a need to design and build these applications differently, in such a way that they are cloud-ready or hybrid-ready or mobile-ready?

Are there any thoughts that you have, as someone who is really focused on the network, of saying, "I wish I could talk to these developers early on, when they're setting up the requirements, so that we could build these apps for their ability to take advantage of this more heterogeneous cloud and/or multiple-networks environment?"

Different spin

Cohen: There's a slightly different spin on that one, Dana, which is, can we go back to the developers and get them to build on a standard set of tools that allow them to deal with the different types of connected devices out in the market? If you build one code base based on HTML, for example, could you take that website that you've built and render it differently in the cloud, allowing it to adapt on the fly for something like an iPhone, an Android, a BlackBerry, a 7-inch tablet, or a 9-inch tablet?

If I were to go back to the developers, I'd ask, "Do you really need to build different websites or separate apps for all these different form factors, or is there a better way to build one common source of code and then adapt it using different techniques in the network, in the cloud, that allow you to reuse that investment over and over again?"

Gardner: So part of the solution to the many screens problem isn’t more application interface designs, but perhaps a more common basis for the application and services, and let the network take care of those issues on a screen to screen basis. Is that closer?

Cohen: That's exactly right. More and more of the intelligence is actually moving out to the cloud. We've already seen this on the video side. In the past, people had to use lots of different formats and bit rates. Now what they're doing is taking that content and saying, "Give me one high-quality source." Then all of the adaptation is done in the network, in the cloud, which simplifies that work for the customer.

I expect exactly the same thing to happen in the enterprise: one common source of code, with a lot of the adaptation done, again, by that intelligent function inside the network.
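As a rough illustration of the "one source, many renditions" pattern Cohen describes, here is a minimal sketch of a delivery-time adaptation step that picks a rendition of a single master asset based on the requesting device. It is a hypothetical example, not Akamai's implementation; the device classes, widths, and bitrates are illustrative assumptions.

```python
# Minimal sketch of "one source, many renditions": a single master asset is
# adapted per device class at delivery time. Hypothetical example -- device
# classes, widths, and bitrates are illustrative assumptions, not real values.

from dataclasses import dataclass


@dataclass
class Rendition:
    max_width: int     # widest image/layout the target screen can usefully show
    video_kbps: int    # bitrate budget for embedded video
    layout: str        # layout variant rendered from the same source markup


# One common source, several delivery profiles chosen in the network/cloud.
PROFILES = {
    "phone":   Rendition(max_width=480,  video_kbps=600,  layout="single-column"),
    "tablet":  Rendition(max_width=1024, video_kbps=1500, layout="two-column"),
    "desktop": Rendition(max_width=1920, video_kbps=4000, layout="full"),
}


def classify_device(user_agent: str) -> str:
    """Very rough User-Agent classification; real services use richer signals."""
    ua = user_agent.lower()
    if "iphone" in ua or ("android" in ua and "mobile" in ua):
        return "phone"
    if "ipad" in ua or "tablet" in ua:
        return "tablet"
    return "desktop"


def choose_rendition(user_agent: str) -> Rendition:
    """Map one request to the delivery profile used to adapt the common source."""
    return PROFILES[classify_device(user_agent)]


if __name__ == "__main__":
    ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X)"
    print(choose_rendition(ua))
```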

Gardner: I'm afraid we are about out of time, Neil. I really appreciate getting a better understanding of what some of the challenges are as we move into this “post-PC” era.

You've been listening to a sponsored podcast discussion on how the major IT trends of the day are changing the expectations we all have from our networks, and how those networks might rise to the occasion in helping us stay on track in terms of where we want things to go.

I want to thank our guest. We've been here with Neil Cohen, Vice President of Product Marketing at Akamai Technologies. Any closing thoughts, Neil, on where people might consider future networks to be headed and what they might look like?

Cohen: This is the hot topic. The WAN is becoming everything. You really need to change your view so you're not just thinking about what happens inside your corporate network; with the movement to cloud and all of the connected devices, all of this is quickly becoming the network.

Gardner: Very good. Thanks again. This is Dana Gardner, Principal Analyst at Interarbor Solutions. I also want to thank our audience for joining, and welcome them to come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Learn more. Sponsor: Akamai Technologies.

Transcript of a sponsored podcast discussion on how networks services must support growing application and media delivery demands. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Monday, October 10, 2011

Complex IT Security Risks Can Only Be Treated With Comprehensive Response, Not Point Products

Transcript of a BriefingsDirect podcast on the surge in security threats to enterprises and the approach companies need to take to thwart them.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Learn more. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the rapidly increasing threat that enterprises face from security breaches. In just the past year, the number of attacks is up, the costs associated with them are higher and more visible, and the risks of not securing systems and processes are therefore much greater. Some people have even called the rate of attacks a pandemic.

The path to reducing these risks, even as the threats escalate, is to confront security at the framework and strategic level, and to harness the point solutions approach into a managed and ongoing security enhancement lifecycle.

As part of the series of recent news announcements from HP, we're here to examine how such a framework process can unfold, from workshops that allow a frank assessment of an organization’s vulnerabilities, to tailored framework-level approaches that can transform a company based on its own specific needs. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here to describe how a fabric of technology, a framework of processes, and a lifecycle of preparedness can all work together to help organizations become more secure -- and stay secure -- is our guest. Please join me in welcoming Rebecca Lawson, Director of Worldwide Security Initiatives at HP. Welcome back, Rebecca.

Rebecca Lawson: Thank you. Nice to talk with you again.

Gardner: Rebecca, why now? Why has the security vulnerability issue come to a head?

Lawson: Open up the newspaper and you see another company getting hit almost every day. As an industry, we've hit a tipping point with so many different security related issues -- for example, cyber crime, hacktivism, nation-state attacks. When you couple that with the diversity of devices that we use, and the wide range of apps and data we access every day, you can see how these dynamics create a very porous environment for an enterprise.

So we are hearing from our customers that they want to step back and think more strategically about how they're going to handle security, not just for the short term, when threats are near and present, but also from a longer term point of view.

Gardner: What do you think are some of the trends that are supporting this vulnerability? I know you have some research that you've done. What are your findings? What's at work here that's making these hacktivists and these other nefarious parties successful?

For more detail on the extent of security breaches, read the
Second Annual Cost of Cyber Crime Study.

Lawson: In HP's recent research, we found that 30 percent of people know they've had a security breach through unauthorized internal access, and over 20 percent have experienced an external breach. So breaches happen both internally and externally, and they happen for different reasons. Sometimes a breach is caused by a disgruntled customer or employee. Sometimes, there is a political motive. Sometimes, it's just an honest error ... Maybe someone grabs some paper off a printer that has proprietary information on it, and then it gets into the wrong hands.

There are so many different points at which security incidents can occur; the real trick is getting your arms around all of them and focusing your attention on those that are most likely to cause reputation damage or financial damage or operational damage.

We also noticed in our research that the number of attacks, particularly on web applications, is just skyrocketing. One of the key areas of focus for HP is helping our customers understand why that’s happening, and what they can do about it.

Gardner: It also seems to me that, in the past, a lot of organizations could put up a walled garden, and say, "We're not going to do a lot of web stuff. We're not going to do mobile. We're going to keep our networks under our control." But nowadays that’s really just not possible.

If you're not doing mobile, not looking seriously at cloud, not making your workers able to access your assets regardless of where they are, you're really at a disadvantage competitively. So it seems to me that this is not an option, and that the old defensive posture just doesn’t work anymore.

Lawson: That is exactly right. In the good old days, we did have a walled garden, and it was easy for IT or the security office to just say “no” to newfangled approaches to accessing the web or building web apps. Of course, today they can still say no, but IT and security offices realize that they can't thwart the technology-related innovation that helps drive growth.

Our customers are keenly aware that their information assets are the most important assets now. That's where the focus is, because that's where the value is. The problem is that all the data and information moves around so freely now. You can send data in the blink of an eye to China and back, through multiple applications, where it's used in different contexts. The context can change so rapidly that you have to really think differently about what it is you're protecting and how you're going to go about protecting it. So it's a different game now.

Gardner: And as we confront this "new game," it also appears that our former organizational approach is wanting. If we've had a variety of different security approaches under the authority of different people -- not really coordinated, not talking to each other, not knowing what the right hand and left hand are doing -- that’s become a problem.

So how do we now elevate this to a strategic level, getting a framework, getting a comprehensive plan? It sounds like that’s what a lot of the news you've been making these days is involved with.

No silver bullet

Lawson: You're exactly right. Our customers are realizing that there is no one silver bullet. You have to think across functional areas, lines of business, and silos.

Job number one is to bring the right people together and to assess the situation. The people are going to be from all over the organization -- IT, security and risk, AppDev, legal, accounting, supply chain -- to really assess the situation. Everyone should not only be aware of where vulnerabilities might be, or where the most costly vulnerabilities might be, but also look ahead and say, "Here is how our enterprise is innovating with technology -- let's make sure we build security in from the get-go."

There are two takeaways from this. One is that HP has a structured, methodical framework approach to helping our customers get people on the same page and get processes well structured from the top down, so that everyone is aware of how different security processes work and how they benefit the organization so that it can innovate.

One of the other elements is that every enterprise has to deal with a lot of short-term fixes. For example, a new vulnerability gets discovered in an application, and you've got to go quickly plug it, because it's relevant to your supply chain or some other critical process. That’s going to continue to go on.

But there's also the long-term thinking about building security in from the get-go; this is where companies can start to turn the corner. I'll go back again to web apps: building security into the very requirements and making sure that, all the way through architecture, design, testing, and production, you are constantly testing for security.

Gardner: So as you move toward more of a strategic approach to security, trying to pull together all these different threads into a fabric, you've identified four basic positions: assessment, optimization, management, and transformation. I'm curious, what is it about what you are coming out with in terms of process and technology that helps companies work toward that? What are the high-level building blocks?

Read more on HP's security framework
Rethinking Your Enterprise Security:
Critical Priorities to Consider

Lawson: The framework that I just mentioned is our way of looking at what you have to do across securing data, managing suppliers, and ensuring the security of physical assets, but our approach to executing on that framework is a four-point approach.

We help our customers first assess the situation, which is really important just to have all eyes on what's currently happening and where your current vulnerabilities may lie. Then, we help them to transform their security practices from where they are today to where they need to be.

Then, we have technologies and services to help them manage that on an ongoing basis, so that you can get more and more of your security controls automated. And then, we help them optimize that, because security just doesn't stand still. So we have tools and services that help our customers keep their eye on the right ball, as all of the new threats evolve or new compliance requirements come down the pike.

Gardner: I've also heard that you're providing better visibility, but at a more comprehensive level, something called the HP Secure Boardroom. Maybe you could help us better understand what that means and why that's important as part of this organizational shift?

Get more information on the executive dashboard:
Introducing the HP Secure Boardroom.

Lawson: The Secure Boardroom combines dashboard technology with a good dose of intellectual property we have developed that helps us generate the APIs into different data sources within an organization.

The result is that a CISO can look at a dashboard and instantly see what's going on all across the organization. What are the threats that are happening? What's the rate of incidents? What's going on across your planning spectrum?

To have the visibility into disparate systems is step one. We've codified this over the several years that we've been working on this into a system that now any enterprise can use to pull together a consistent C-level view, so that you have the right kind of transparency.

Half the battle is just seeing what's going on every day in a consistent manner, so that you are focused on the right issues, while discovering where you might need better visibility or where you might need to change process. The Secure Boardroom helps you to continually be focused on the right processes, the right elements, and the right information to better protect financial, operational, and reputation-related assets.
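To make the idea concrete, here is a minimal sketch of the kind of roll-up a C-level security dashboard performs: events pulled from several disparate sources are collapsed into a few consistent, board-level numbers. The source names, fields, and sample data are hypothetical assumptions for illustration, not the Secure Boardroom's actual design.

```python
# Minimal sketch of rolling security data from disparate sources up into one
# executive view. Source names, fields, and sample data are hypothetical.

from collections import Counter
from typing import Iterable


def summarize(events: Iterable[dict]) -> dict:
    """Collapse raw events from many systems into a few board-level numbers."""
    events = list(events)
    by_severity = Counter(e.get("severity", "unknown") for e in events)
    by_source = Counter(e.get("source", "unknown") for e in events)
    return {
        "total_events": len(events),
        "open_incidents": sum(1 for e in events if e.get("status") == "open"),
        "critical": by_severity.get("critical", 0),
        "noisiest_source": by_source.most_common(1)[0][0] if by_source else None,
    }


if __name__ == "__main__":
    # In practice these would come from SIEM, IPS, app-scanning, and ticketing APIs.
    sample = [
        {"source": "siem", "severity": "critical", "status": "open"},
        {"source": "ips", "severity": "high", "status": "closed"},
        {"source": "app-scan", "severity": "medium", "status": "open"},
    ]
    print(summarize(sample))
```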

Gardner: Rebecca, this reminds me of some of the strength that HP has been developing over the years in systems management. I've been observing and following HP for over 20 years and I can remember doing briefings with HP on OpenView when it was a new product and a new approach to management.

Is there continuity here between the expertise and the depth and breadth that HP has developed in how to manage systems and now bringing that into how to make them secure and to provide automation and policies that can ensure security over time?

Lawson: Yes. And I cannot believe it's been 20 years. That's a great point. Because we've been in the systems management and business service management business for so long, I would elevate it up to the level of the business service management.

We already have a head start with our customers, because they can already see both the forest and the trees with regard to any one particular service. Let's just say it's a service in the supply chain, and that service might comprise network elements, systems, software, applications, and all kinds of data flowing through it. We're able to tie the management of that, through traditional management tools like what we had with OpenView and what we have with business service management, to the view of security.

When you think about vulnerabilities, threats, and attacks, the first thing you have to do is have the right visibility. We have technology in our security organization that helps us see and find the vulnerabilities really quickly.

Let's say there's an incident and our security technology identifies it as suspect -- maybe it's just a certain type of database entry that's suspect because we can associate it with a known bad IP address. We can do that because we have a correlation engine that looks at factors like bad reputations, DNS entries, and log files, pulls all of this together, and maps it to incidents.

So we can say, "This one is really suspect; let's do something about it." That can then initiate an incident record, which goes to change management and on through to remediation. You say, "You know what, we're going to block that guy from now on." Or maybe a mistake happens while you're doing patch management, or some vulnerability opens up during the time frame in which somebody was applying the patch.
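A toy version of the correlation described here might look like the sketch below: log entries are checked against an IP reputation list, and each match is raised as an incident record for follow-up. The data structures, field names, and reputation list are assumptions for illustration, not ArcSight's actual logic.

```python
# Toy correlation sketch: flag log entries whose source IP appears on a
# reputation blocklist and open an incident record for each match.
# Illustrative assumptions only -- not a real SIEM's correlation logic.

from datetime import datetime, timezone

# Example addresses drawn from documentation ranges, not real threat data.
BAD_REPUTATION_IPS = {"203.0.113.7", "198.51.100.23"}


def correlate(log_entries):
    """Yield an incident record for each entry associated with a known bad IP."""
    for entry in log_entries:
        if entry.get("src_ip") in BAD_REPUTATION_IPS:
            yield {
                "opened_at": datetime.now(timezone.utc).isoformat(),
                "reason": "source IP on reputation blocklist",
                "src_ip": entry["src_ip"],
                "raw_entry": entry,
                "next_step": "route to change management for blocking",
            }


if __name__ == "__main__":
    logs = [
        {"src_ip": "192.0.2.10", "action": "db_write"},
        {"src_ip": "203.0.113.7", "action": "db_write"},  # suspect entry
    ]
    for incident in correlate(logs):
        print(incident)
```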

Integration with operations

Because we have our security technology tied to IT operations, there is an integration between them. When the security technology detects something, it can automatically issue an alert that is picked up by our incident management system, which might then invoke our change management system, which might then invoke a prescribed operations change, and we can do that through HP Operations Orchestration.

For example, if a certain event occurs, we can automate the whole process to remediate that occurrence. In the case of patch management, something went wrong -- it might have been a human error, it doesn't matter -- what happens is that we've already anticipated that type of attack or mistake. That's a very long way of saying that we've tied our security technology to our IT operations and, by the way, also to our applications management.

It really is a triad -- security, applications, operations. At HP, we’re making them work together. And because we have such a focus now on data correlation, on Big Data, we're able to bring in all the various sources of data and turn that into actionable information, and then execute it through our automation engine.
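The flow described above -- a detected event opens an incident, the incident opens a change record, and the change triggers a prescribed remediation -- can be sketched as a simple pipeline. This is a hypothetical illustration of the pattern, not HP Operations Orchestration itself; the handler names and the remediation action are assumptions.

```python
# Sketch of an event-to-remediation pipeline: a detected security event opens
# an incident, the incident opens a change record, and the change triggers a
# prescribed, automated remediation step. Names and actions are hypothetical.

def open_incident(event: dict) -> dict:
    """Record the detected event as an incident awaiting action."""
    return {"type": "incident", "event": event, "status": "open"}


def open_change(incident: dict) -> dict:
    """Turn the incident into a change record with a prescribed action."""
    # A real workflow would apply approval policy here; this sketch is automatic.
    return {"type": "change", "incident": incident, "action": "block_source_ip"}


def remediate(change: dict) -> str:
    """Carry out the prescribed action -- here, a placeholder firewall update."""
    ip = change["incident"]["event"]["src_ip"]
    return f"firewall rule added: deny {ip}"


def handle(event: dict) -> str:
    """Run one event through the whole detection-to-remediation pipeline."""
    return remediate(open_change(open_incident(event)))


if __name__ == "__main__":
    print(handle({"src_ip": "203.0.113.7", "kind": "suspect_db_entry"}))
```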

Gardner: So the concept here, as with management, is that finding issues around reliability and performance requires that über overview approach -- having access to all of these data points, being able to manage change and portfolio management as well, and then of course the lifecycle of the applications comes into play.

But it strikes me, when I listen to you, that this isn't really a security technology story; it's really a story about a comprehensive ability to manage your IT operations. Therefore, this is not just a bolt-on, something that one or two companies add as a new product to the market. So what differentiates HP? It strikes me that there aren't many companies that can pull this all together.

Lawson: That's very true. As I mentioned, there is no one silver bullet. It's a matter of how you pull all the little pieces together and make sense of them.

Every organization has to innovate. We know that technology accelerates innovation. We can't say no to technology, because that's the engine of what makes an enterprise grow and be competitive. Everything new that's created needs to have security built in from the start, so that there's no delay down the road, and this is particularly germane in the applications area, as we were mentioning earlier.

Gardner: Rebecca, I've also heard you mention something called the "fabric of technology," and I know you've got a lot of announcements from ArcSight, Fortify and TippingPoint brands within HP. People can look to the news reports and get more information in detail on those particular announcements. But how does the technology news and that concept of a fabric come into play here?

Lawson: Well, let me use an example. Let's say one of your business services is a composite service, and you may be using some outside cloud services and some internal services in your SAP system. Because all of the business processes tend to be built on composite, technology-based services, you have to have the right fabric of security provisions guarding that process, so that nothing bad happens at any of the various points where it could.

For example, we have a technology that lets you scan software and look for vulnerabilities through both dynamic and static testing. We have ways of finding vulnerabilities in third-party applications. We do that through our research organization, which is called DVLabs. DV stands for Digital Vaccine. We pull data in from them every day on new vulnerabilities, and we make that available to the other technologies so we can blend it into the picture.
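As a rough idea of what static application scanning looks for, the sketch below flags a couple of classic weakness patterns in source text. Real scanners of the kind discussed here do deep dataflow analysis; these regexes, rule messages, and the sample input are simplified, hypothetical assumptions.

```python
# Very small static-analysis sketch: scan source text for a couple of classic
# weakness patterns. Real static scanners do deep dataflow analysis; these
# regexes, messages, and the sample input are simplified assumptions.

import re

RULES = [
    (re.compile(r"execute\(\s*[\"'].*%s", re.IGNORECASE),
     "possible SQL injection: query built with string formatting"),
    (re.compile(r"password\s*=\s*[\"'][^\"']+[\"']", re.IGNORECASE),
     "hard-coded credential"),
]


def scan(source: str):
    """Return (line_number, message) findings for each rule that matches."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings


if __name__ == "__main__":
    sample = (
        'cursor.execute("SELECT * FROM users WHERE name = \'%s\'" % name)\n'
        'password = "hunter2"\n'
    )
    for lineno, msg in scan(sample):
        print(f"line {lineno}: {msg}")
```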

Focused technology

The right kind of security fabric has to be composed of different technologies that are very focused on certain areas -- for example, our intrusion protection technology, which does packet inspection and can identify bad IP addresses. It can identify that there are certain vulnerabilities associated with a transaction, and it can stop a lot of traffic right at the gate before it gets in.

The reason we can do that so well is that we've already woven in information from our applications group and information from our researchers out in the market. So we've been able to pull these together and get more value out of them working as one.

Another example is that all of this information can then feed into our security intelligence and risk management platform, which is underpinned by our ArcSight, Fortify, and TippingPoint technologies. We can do rigorous analysis and correlation of what would otherwise be very disparate data points.

So not only can we stop things right at the gate with our filters on our IPS, but we can do the analysis that says there's a pattern that's not looking good. Luckily we have built and bought technology that all works together in concert, and that lets you focus on the most critical aspects of keeping your enterprise running.

Gardner: We've talked about assessment. We've talked about change of processes and strategic and framework level activities. We've talked about the boardroom view and how this follows some of the concepts of doing good IT systems management, but we are also of course in the cloud era.

I'm curious how organizations that may not want to do more of this themselves over time, but instead look to others whose core competency is security, start doing that. Is there a path toward security as a service, or some sort of managed-service hybrid model, that we're now going to be moving to as well?

Lawson: Absolutely. A lot of people think that when the words cloud and security are next to each other, bad things happen, but in fact, that’s not always the case.

Once an enterprise has the right plan and strategy in place, it can start to prioritize which parts of its security are best handled in-house, with its own expertise, and which parts of the security picture it can or should hand off to another party. In fact, one of our announcements this week is that we have a service for endpoint threat management.

If you're not centrally managing your endpoint devices, a lot of incidents can happen and slip through the cracks -- everything from an employee just losing a phone to an employee downloading an application that may have vulnerabilities.

So managing your endpoint devices in general, as well as the security associated with those endpoints, makes a lot of sense. And it's a discrete area where you might consider handing the job to a managed services provider, who has more expertise as well as better economic incentives.

Application testing

Another great example of using a cloud service for security is application testing. We are finding that a lot of the web apps out in the market aren't necessarily developed by application developers who understand that there's a whole lifecycle approach involved.

In fact, I've been hearing interesting statistics about the number of web apps that are written by people formerly known as webmasters. These folks may be great at designing apps, but if you're not following a full application lifecycle management practice, which invokes security as one of the base principles of designing an app, then you're going to have problems.

What we found is that this explosion of web apps has not been followed closely enough by testing. Our customers are starting to realize this and now they're asking for HP to help, because in fact there are a lot of app vulnerabilities that can be very easily avoided. Maybe not all of them, but a lot of them, and we can help customers do that.

So testing as a service, delivered as a cloud service or as a hosted or managed service, is a good idea, because you can do it immediately. You don't incur the time and money to spin up a testing center of excellence -- you can use the one that HP makes available through our SaaS model.

Gardner: As part of your recent announcements, moving more toward a managed services provider role is something that you're working on yourselves at HP, and you're also enabling your ecosystem partners. Perhaps we can wrap up with a little more detail about what you're going to be offering as services, in addition to what you offer as professional services and products.

Lawson: One of the great things about many of the technologies that we've purchased and built in the last few years is that we're able to use them in our managed services offerings.

I'll give you an example. Our ArcSight product for Security Information and Event Management is now offered as a service. That's a service that really gets better the more expertise you have and the more focused you are on that type of event correlation and analysis. A lot of companies just don't want to invest in developing that expertise, so they can use it as a service.

We have other offerings -- across testing, network security, and endpoint security -- that are all offered as a service. So we have a broad spectrum of delivery-model choices for our customers. We think that's the way to go, because we know that most enterprises want a strategic partner in security. They want a trusted partner, but of course they're probably not going to get all of their security from one vendor, because they're already invested.

We like to come in and look first at establishing the right strategy, putting together the right roadmap, and making sure it's focused on helping our customer innovate for the future, as well as putting in some stopgap measures so that you can thwart the cyber threats that are a near and present danger. And then we give them the choice to say what's best for their company, given their industry, the compliance requirements, time to market, and their financial posture.

There are certain areas where you're going to want to do things yourself, certain areas where you are going to want to outsource to a managed service. And there are certain technologies already at play that are probably just great in a point solution context, but they need to be integrated.

Integrative approach

Most of our customers already have lots of good things going on, but they just don't all come together. That's really the bottom line here. It has to be an integrative approach. It has to be a comprehensive approach. And the reason the bad guys are so successful at causing havoc is that they know all of this is disconnected. They know that security technologies tend to be fragmented, and they're going to take advantage of that.

Gardner: You've had a lot of news come out, and we've talked about an awful lot today. Is there a resource you could point people to where they can get more detail, maybe all in one spot -- a security wellspring, perhaps? What would you suggest?

Lawson: I'd definitely suggest going to hp.com/go/enterprisesecurity. In particular, there is a report that you can download and read today called the "HP DVLabs' Cyber Security Risks Report." It's a report that we generate twice a year, and it has some really startling information in it. It's based not on theoretical stuff but on things that we see: we aggregate data from different parts of the industry, as well as data from our customers, showing the rate of attacks and where the vulnerabilities are typically located. It's a real eye-opener.

So I would just suggest that you search for the DVLabs’ Cyber Security Risks Report and read it, and then pass it on to other people in your company, so that they can become aware of what the situation really is. It’s a little startling, when you start to look at some of the facts about the costs associated with application breaches or the nature of complex persistent attacks. So awareness is the right place to start.

Gardner: Very good. We've been listening to a sponsored podcast discussion on how to confront security at the framework and strategic level and how to harness the point solutions approach into a managed and ongoing security enhancement lifecycle benefit.

We have been joined in our discussion today by Rebecca Lawson, Director of Worldwide Security Initiatives at HP. Thanks so much, Rebecca.

Lawson: Thank you so much, Dana. It’s great to talk to you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Learn more. Sponsor: HP.

Transcript of a BriefingsDirect podcast on the surge in security threats to enterprises and the approach companies need to take to thwart them. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.
