Tuesday, November 18, 2008

Identity and Access Management Key to Security Best Practices in Changing Business Landscape

Transcript of a BriefingsDirect podcast on the role of identity and IT access management in the dynamic enterprise.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion on the role of identity and access management (IAM), and its impact on security and risk reduction.

We live in an age when any of us, on a typical day, has access to hundreds of applications, and perhaps we have improper access to some of those applications or data inside of our companies. We may not even know it. What's worse, our IT department might not know it.

Managing who gets access to which resources for how long -- and under what circumstances -- has become a huge and thorny problem. The stakes are too high. Improper and overextended access to sensitive data and powerful applications can cause significant risk and even damage or loss.

Hewlett-Packard (HP) and Oracle have been teaming up to improve the solutions around IAM. Through products and services, a series of best practices and preventative measures has been established. To learn more about managing risk around IAM, we will be talking with executives from both HP and Oracle.

Here with us today, we are joined by Dan Rueckert. He is the worldwide practice director for security and risk management for HP’s Consulting and Integration (C&I) group. Welcome, Dan.

Dan Rueckert: Thanks, Dana, glad to be here.

Gardner: We are also joined by Archie Reed, distinguished technologist in HP’s security office in the Enterprise Storage and Server Group. Welcome, Archie.

Archie Reed: Hi, Dana.

Gardner: And we’re also joined by Mark Tice, vice president of identity management at Oracle. Thanks for joining, Mark.

Mark Tice: Hi, Dana, thank you very much.

Gardner: Now, let’s look at this historically -- and I guess I’ll take this to Dan Rueckert. How have things changed around IAM and general risk and security around access to assets and resources in the past couple of years? Is this another instance of data explosion, or are there other implications for organizations to consider?

Rueckert: Thanks, Dana. When we look at IAM, what we're really seeing is that the speed of business is increasing, and with it the rate of change organizations must sustain to support their business. You see it every day in the mergers and acquisitions going on right now, and, as a result of that, in consolidation.

All these different factors are in play. We are also seeing new regulations, and compliance with those regulations, on an ongoing basis. Once these regulations apply, the ability to give people access to the tools, applications, and data they need at the right time is key.

It’s the speed, and it continues as we see traditional IT systems and applications converging with operational technology, as we know it, from real-time or near real-time systems.

Gardner: Archie Reed, how do you see this impacting the business climate? How important is this for companies in terms of their exposure?

Reed: This is a critical area that folks have to look at. There's a difference that we’re seeing when we go out and talk to customers, and they’re saying that security is a big concern. It’s a big issue for them. It’s not simple and it’s often not cost-effective, or the return on investment (ROI) is difficult to define.

Although customers say security is a big concern, there is a disconnect between that concern and it being a high priority for a lot of companies. Whether security sits high on the priority list depends on the specific company, and it's often placed low because of that ROI challenge.

The reality in the market is that many things impact that security posture, internally, every time a new system is installed, any product or service defined, or even when a new employee joins. Externally, we're impacted by new regulations, new partnerships, new business ventures, whatever form they may take. All those things can impact our ability, or our security posture.

Security is much like business. That is, it’s impacted by many, many factors, and the problem today is trying to manage that situation. When we get down to tools and requirements around such things as identity management, we are dealing with people who have access to systems. The criticality there is that there have been so many public breaches that we have become aware of recently that security again is a high concern.

People are not necessarily taking it into their priority list as being critical, but tools such as identity management and general system management can help you to mitigate the risks. If we start to talk about risk analysis, and ROI being one and the same discussions, then we may be able to help companies move forward and get to the right position.

Gardner: Clearly, this is not something that product alone can tackle, nor services alone either. So, it certainly makes sense that Oracle and HP are teaming up with a solutions approach to this. What is the overall solution approach? Is it 60 percent behavior, 40 percent product? Dan, give us a sense of how this gets solved when it comes to products and/or services.

Rueckert: Dana, it's definitely people, process, and technology coming together. In some cases, it's situational, depending on whether customers have legacy systems or more modern systems. That starts to dictate how much process, how much consulting, and how much technology they need.

When we talk about the HP-Oracle relationship, it's about having that strong foundation in IAM, but also the ability to open up to the other areas it ties into -- in this case, enterprise architecture, the middleware pieces, the databases, and the other applications they have.

You start to put that thread of IAM together with the infrastructure, and that opens this up as a whole, which is key. And enablement, depending on the size, complexity, and degree of localization or globalization, tends to play into those attributes of people, process, and technology.

Gardner: And this also relates to the Secure Advantage Program, as well as the HP Adaptive Infrastructure. Can you paint a picture for us as to how those relate? I guess we can go to Archie Reed on this.

Reed: The first thing would be to understand what Secure Advantage is. Fundamentally, it's an evolution of HP's security strategy. One thing folks may not know is that HP has been in the security business for over 30 years, across most industries and geographies.

Secure Advantage is effectively the embodiment of all of HP's security prowess and expertise -- services, products, solutions, and partners -- that we can offer organizations to help them deal with the security and business issues we've been alluding to throughout this discussion.

The challenge that HP sees is that most folks worldwide may have developed a relationship with HP, perhaps through our server or desktop businesses, or our software and printing businesses. Many are unaware of how wide and how deep HP's security expertise runs across the entire business spectrum.

HP has been developing this Secure Advantage Program over the last few years to essentially allow people to take a broader look at our security portfolio. I'll give you a specific example. I said we have been in the business for over 30 years now, and one thing that many folks aren't aware of is that HP has been engaged at the core of all the ATM networks around the world.

In fact, we’re directly involved in over 70 percent of ATM transactions. So, when you walk up to a bank, you put in your debit card or your credit card and ask for $100 or 100 Euros, whatever it may be, anywhere around the world. Behind the scenes, HP technology, policies, and processes ensure that the data is encrypted, and that all of the banks and ATM network participants can talk to each other without necessarily knowing everything about who they are working with.

It’s secured through a set of processes. I am not going into the details obviously, but this is something that is an incredibly complex situation with a huge set of regulations on a worldwide basis about what can and can't be done, and what should be done. HP is right at the core of that, with encryption technology, with processes, with services and products that span the gamut. That is a really good example of where Secure Advantage comes into play.

We are engaged in the standards development behind the scenes. We have many patents and many processes that help these banks put together what they need to make it all work. That's the sort of expertise we bring, when we go talk to companies in situations where they need to implement tools such as identity management and access management tools. Does that make sense?

Gardner: Sure, it does. Mark Tice, tell us from Oracle's perspective, why is it important to have a complete solution approach to this? It seems like so many applications, so many different cracks, if you will, in the foundation. What’s the philosophy from Oracle in terms of getting a comprehensive control over identity and access management?

Tice: Well, this is where we get great alignment with the folks at HP.

One of the things we really work hard to do is make sure that, before breaking ground on one of these projects, customers put in place a complete framework, or architecture, for their security and identity management, so that they have a complete design that addresses all of their needs. We then encourage them to take things on one piece at a time. We design for the big bang, but actually recommend implementing on a piece-by-piece basis.

Gardner: Let's get into a little more detail about how companies actually come to grips with this. You can't start solving the problem until you have a sense of what the problem is. How significant is this? How out of control are the access and identity solutions and safeguards in companies? Dan Rueckert, do you want to take a stab at that?

Rueckert: It depends, when we start to think about each industry and the regulations, compliance issues, and standards of business that apply. As Archie said, the financial services area is very sophisticated in a lot of what it does. Once again, it's the speed of business and the changes from mergers and acquisitions that have started to occur.

When we get into more traditional businesses, maybe heavy-process ones in certain aspects, you might see lesser controls. But now, as we start to control access into certain areas of a process facility that tie together with the IT systems, it starts to bring those together also. So, you have that different view.

Gardner: Let's look closely at the actual solutions. How do companies get started with this? Let's go to you, Archie. What are some of the first steps that you should take in order to gauge the problem and then start putting in the proper solution?

Reed: When we start thinking about security, one of the first things people look at generally is some sort of risk analysis. As an example, HP has an analysis toolkit that we offer as a service to help folks decide what is critical to them. It takes all sorts of inputs: the regulations impacting your business, and the internal drivers ensuring that your business is not only secure, but also moving in the direction you want it to move.

Within this toolkit, called the Information Security Service Management (ISSM) reference model, is a set of tools where we can interview all of the participants, all of the stakeholders in that policy or process, and then look at the other inputs that are predefined, such as the regulations.

If you are in healthcare, you are looking at the Health Insurance Portability and Accountability Act (HIPAA). If you are dealing with credit cards, then you are looking at things such as the Payment Card Industry (PCI) standard, about how you have to handle the data, and whether you have to encrypt.

Having these things predefined is not only more prescriptive for companies, which helps them a lot, but also more accessible in terms of how quickly they can decide what's important. It allows them to move on and decide in which order they're going to implement their security strategy. They may already have pieces in place, and that's another part of the ISSM reference model, which asks, "Where do you grade yourself on this, and where do you want to be?"

There is also, within this, a gap analysis between what is and what should be, or what is wanted. That allows the company to decide how they're going to implement these sorts of things, and it becomes a great way to determine how to cost things out, which is also an important factor for organizations.

Generally, beyond that, folks are looking at a triumvirate of focal points -- governance, risk management, and compliance (GRC) -- which essentially says, "Here are the drivers. What's the analysis we're going to do, and what approaches are we going to take to deal with that?" They essentially align, or deal with the contentions between, business and security requirements.

Those sorts of things allow a company to get up to speed quickly and analyze where they’re at. You may have a security review every year, but a lot of companies need to do it more often in more isolated ways. Having the right tools come out of these sorts of things allows them to do ongoing assessments of where they’re at, as well.

Hopefully that's the bulk of the question, and we can go into a little bit more detail with Dan about how services help you do that.

Gardner: How about some examples? Do you have either companies we can talk about directly, or use-case descriptions from engagements where you have gone in? What are some of the paybacks? What are some of the savings or risk-avoidance benefits?

Rueckert: Let me start. When you truly get at the basics and you have the right access at the right time, you start to look at whether you have someone waiting to have something done from a system perspective.

It takes time, it wastes time, and it leaves somebody not doing what they were hired to do in their general responsibilities. So, there are labor efficiencies to be gained by having the right access, and then you get into the number of incidents or requests to a help desk from someone who says, "I am having a problem, help me."

You start to look at these labor efficiencies from a pure IT perspective. If you don't have the things you need to do your job, you hit the bottom line in the line of business and in that value chain. So it can cascade out tremendously.

The other is access for your partners in conducting business. If they don't have what they need from an external standpoint, it can hold up payments or shipments that you might need. All sorts of people rely on this: I need to validate, I need to know who you are, so that I can conduct my business as I need to.

Reed: Another way to look at this is, when you consider how companies today are not only trying to be more efficient, provide cost savings, analyze, and do more with less -- whichever way you want to phrase it -- there is also an approach that says, “Let's consolidate our datacenters. Let's bring everything together and minimize the amount of stuff on the network. Let's do whatever we can to try and resolve the sort of cost issues.”

Again, when you start to think about who can do what, who has access to what and how much can they do, regardless of how you do those consolidation efforts, you need to consider security.

So, I would also raise the HP Adaptive Infrastructure as an example of how we help customers deal with the challenge of reconciling the two. Adaptive Infrastructure is essentially a portfolio that helps customers evolve their data centers from high-cost silos, where everybody runs their own servers and has their own hardware in place, to low-cost pooled assets.

That allows an IT department to move to that service provider model that a lot are trying to get to, while meeting needs. We help customers evolve to the next-generation data center, 24/7, lights-out computing, blades in place, virtualization. You get that lower cost. You get the high quality of service, but you also cannot ignore the security as being a critical component to that.

I’ll give an example of some customers we’re helping with virtualization right now. Even in the virtualization space, where everybody is trying to get more from the same hardware, you cannot ignore things such as access control. When you bring up who has access to that core system, when you bring up who has access to the operating system within the virtual environment, all of those things need to be considered and maintained with the right business and access controls in place.

The only way to do that is by having the right IAM processes and tools that allow an organization to define who gets access to these things, because important processing is happening on the one box. You are no longer just securing the box physically. You're securing the various applications that are stacked on top of all of that.

Gardner: Of course, if you get it right, it can be of great value as you move into other types of activities. Whether it's application modernization, virtualization, or building out those next-generation data centers, having your IAM act together, so to speak, certainly provides a strong foundation for doing these other activities better, with less cost and risk.

Tice: Dana, I’d like to jump in on that one. What we see when we first go into companies, when they don’t have this in place, is that most of their identity management work is done in silos. It's done in a department, or an app-by-app basis. The fact of the matter is that each department or each group has to make up their own security policies, implement them, and manage them. From a company perspective, it means that your security is only as good as your weakest department.

So, you've hit it dead on. Having the right policies in place, and then tools to manage and implement those, is critical. It means that you can act, instead of having to stop, think, and then act -- time, and time, and time again.

Gardner: Moving into the future road map, it seems that access management is important not only for today's infrastructure. As we continue to automate, ramp up rules and policies, and start using event-based inference and business intelligence, it is also a foundation for a more robust and increasingly automated approach to IT, as well as to the provisioning of services and applications. This is particularly true as we move into what we now call cloud computing, where we are going to get applications and services from a variety of different sources.

So who wants to take the approach to the future, and have us build on that opportunity?

Rueckert: I’ll comment on some of the things happening right now. One thing we haven't talked about is the mobility of employees.

We talked more traditionally about datacenters and maybe desktops, but now we have hand-held devices that are mobile in nature and contain a lot of power, and we need to make sure we validate that they can have access.

You can take simple examples of BlackBerry devices and other entities that now tie back into applications and key data that they need in the field, and can use wireless networks. It’s a tremendous benefit overall, as far as where we are going, and it’s why this is so important as we start to work towards the future.

Reed: I’d back that up by saying that, when we start to consider IAM, one thing we really haven't touched on, but have alluded to in the conversation so far, is all of the process and other work that happens on the identity management side of the house. The provisioning, the decisions, and the policy management happen over the longer term, while access management is more a matter of defined policy enforced in real time. There is a lot more to this overall aspect, and it relates to one of HP's core areas of expertise: management tools in general.

So, when we define the policies, when we decide what the procedures are for following that, we need good tools that allow you effectively to implement and write out what they are, and automate those policies and procedures, so that they are enforceable.

More importantly, over the longer term, changes occur. For example, in 2008 alone, there is an estimate of an extra 9,000 to 10,000 regulations that small to medium-sized businesses must follow -- and that's not including the changes big businesses face in the regulations they're already subject to.

Now, consider the impact that has on being able to rewrite, change, and manage the policies across all of your business units, and consider what Mark was talking about in terms of businesses that have siloed security approaches. Unless you have a comprehensive view over all of your systems, services, and business policies, there is no way to guarantee to the outside world that you are compliant.

Once we've got all this defined, we need to monitor, and report at least internally, sometimes externally, that we are being compliant. This is another area where management tools, and IAM in particular, allow you to say and prove that you have done what the regulations require.

Regulations are generally thought of as being driven by government bodies. If you deal internationally, that can mean a lot of different things in lot of different regions. But, regulations can also be internally driven.

They can be internal policies that you, as an organization, have decided need to be enforced, because you believe that if you want better customer service, you do things this way. Ultimately, it all comes down to making sure that the process is defined, is easily automated or followed, and, finally, is reported on in an adequate way -- whether it has been circumvented or incorrectly used, or, more generally, that the right thing was done.

Ultimately, it comes back to this discussion we had earlier, which is that GRC and things like IAM play a critical role in that. That's why we have chosen to go with the strategy that we have as HP, as part of Secure Advantage.

We're working with folks like Oracle, who have some of the best tools out there to support not only mid-sized businesses, but also large organizations with huge, siloed security problems across different businesses and different geographies. It's a huge issue that companies need to resolve with tools, because there's no way to do it manually.

Gardner: All right. Looking toward the next rev, if you will, of these tools -- Mark Tice at Oracle, maybe you could outline the plan for the future of HP and Oracle working together, and where the access management capabilities will come from? I certainly don't expect any pre-announcements on products, just a sense of where the technology is headed.

Tice: Sure. It runs down a couple of different threads. In your last question you touched on the cloud computing issue, and one of the things you will hear us talking about more and more in the future, is the emergence of identity management as a service.

That is, make it really easy for applications to leverage identity management services for access control, permissions, and such. One, so that you can support a cloud environment seamlessly and easily. And two, so that you don't have to replicate a lot of security and identity management code in applications. You can have applications do what they do best, which is support application logic, and leave a lot of the security infrastructure to tools like ours.

The second piece is in the area of quickly adapting to change. We see identity management right now as having a 1.0 and a 2.0 piece. The 1.0 piece covers the very basics, like user provisioning, access control, single sign-on, and federation -- that is, the ability to give entities outside your firewall, coming from trusted sources, seamless access.

Those are the very basics that you put in place. The 2.0 space is really where we see things like strong authentication -- that is, making sure that people are who they say they are -- tied into real-time risk detection. So, if we are detecting fraud, we make sure that we challenge people to a fairly extreme degree, if we perceive there to be risk.

Also, in the area of role management, we see a lot of access being derived from business function, as opposed to complex IT rules. As people move around in the organization, they do different things. And, as Dan pointed out, as companies merge and such, access is controlled automatically, based on where people sit in the organization and what they are working on, as opposed to IT rules. Those are a couple of the trends we see on the technology side.
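[The role-driven model described here -- deriving access from business function rather than per-system IT rules -- can be sketched in a few lines. The following is a minimal illustration only; the role names, entitlements, and helper functions are hypothetical, not part of any HP or Oracle product.]

```python
# Hypothetical sketch of role-based access control (RBAC):
# entitlements follow a person's business function, rather than
# being granted system-by-system. All names here are invented.

ROLE_ENTITLEMENTS = {
    "accounts_payable_clerk": {"erp:invoices:read", "erp:payments:create"},
    "warehouse_manager":      {"wms:shipments:read", "wms:shipments:approve"},
    "it_auditor":             {"erp:invoices:read", "wms:shipments:read"},
}

def entitlements_for(roles):
    """Union of entitlements across all of a person's roles."""
    granted = set()
    for role in roles:
        granted |= ROLE_ENTITLEMENTS.get(role, set())
    return granted

def has_access(roles, permission):
    """True if any of the person's roles grants the permission."""
    return permission in entitlements_for(roles)

# When someone changes jobs, only their role list changes; access
# in every downstream system follows automatically.
alice_roles = ["accounts_payable_clerk"]
assert has_access(alice_roles, "erp:payments:create")

alice_roles = ["warehouse_manager"]  # transfer to the warehouse
assert not has_access(alice_roles, "erp:payments:create")
assert has_access(alice_roles, "wms:shipments:approve")
```

The point of the sketch is the indirection: administrators manage the role-to-entitlement mapping centrally, and a move or merger becomes a role reassignment rather than dozens of per-application changes.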

Reed: I just want to expand on those comments, as well as something that Dan mentioned earlier, which was the mobility aspect. If we’re truly looking at what's coming up, what companies need to deal with, and why this ability to be able to deal with change quickly and effectively is important, we have to look at the new employees that are coming into the market. We have to look at the new business situations or paradigms that organizations are dealing with.

The new employees coming out of the universities these days have grown up with Facebook and MySpace -- and all such things.

They’re also used to using their own kit. They're used to plopping down wherever they are, being able to work on what they want, using whatever equipment they want, and consider themselves masters of their own identity.

When they walk into a company, they would like nothing more than to be able to bring hardware that they can use at home and move around with, and still be able to access the resources they need to do the work they have been asked to do.

We'd love for those to be HP bits of hardware, but the reality is that, in a broader sense, you need to be able to deal with that situation. If you think about companies and the way things have been moving -- dealing with more partners, and with more outsourcing too -- all of these are situations where they are no longer in control of the identity of who is using their kit. They are responsible for it, but they may not be in control of it.

This is happening worldwide. The contractor market has been around for a long time, but it is evolving in this respect. Contractors expect to run their own equipment, but use your organizational resources to do their job. There are outsourced organizations that expect access to your blueprints to produce things for your company.

But you have all these regulatory issues that you have got to deal with, which require encryption, monitoring, and access controls to be in place. And again, these regulations are changing over and over. If we think more about the business sense than the technology sense, you've got to have available to the business users the tools that allow them to do those things in a secure manner, and allow them to adjust to the processes, as Mark was saying, in a rapid fashion, without compromising the security of the organization as a whole.

Gardner: So, in the future, we'll have a number of scenarios where the endpoint hardware might be any of a number of options, and we'll need to extend access and management to each individual based on their role, their business-process context, and so forth. Sounds like a very interesting time.

Reed: Absolutely. We've heard about the borders of the company no longer being anywhere -- the castle metaphor being broken down. The network is no longer secure in and of itself. There is no perimeter.

I fully expect that within the next five to ten years we will be carrying around all of our data and all of our essential knowledge on memory sticks or in the cloud, and that will sometimes be all we need to get to work. There will be devices everywhere that we should be able to use -- from a mobile phone or mobile device right through to a huge, honking desktop that just happens to be there.

Gardner: And IAM is really the key to unlocking that sort of a flexible future.

Reed: Yes. Fundamentally, IAM is about managing those relationships between who is coming into the network, who is getting access to things, why are they getting access, how, and when are they allowed to do that.

Gardner: And, when done right, there are many different benefits, not only risk reduction, but as we had been discussing, now we look into the future with a lot more flexibility in terms of how IT can be distributed and used.

Great. We have been talking about identity and access management, its impact on security and risk, and some of the new opportunities for using it in different scenarios, including cloud computing and the distribution of a variety of devices -- sometimes not even the organization's or the enterprise's own devices.

Helping us weed through some of these topics, we have been joined by Dan Rueckert, worldwide practice director for security and risk management at HP C&I. Thank you, Dan.

Rueckert: Thank you, Dana.

Gardner: I have also been joined by Archie Reed, distinguished technologist in HP's security office, also in C&I. Thank you, Archie.

Reed: Thank you.

Gardner: And, Mark Tice, vice president of identity management at Oracle. Thank you, Mark.

Tice: Thanks, Dana, Archie, and Dan. Thanks for inviting me to attend.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Come back next time for more insights on IT strategies. Bye for now.



Transcript of a BriefingsDirect podcast on the role of identity and access management in the changing enterprise. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Sunday, November 16, 2008

BriefingsDirect Analysts Review New SOA Governance Book, Propose Scope for U.S. Tech Czar

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 33, on the role of governance in SOA adoption and the outlook for IT initiatives in the Obama administration, recorded November 7, 2008.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition Podcast, Volume 33. This periodic discussion and dissection of IT infrastructure related news and events, with a panel of IT analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS visual orchestration system. I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions.

Our topics this week, the week of November 3, 2008, are service-oriented architecture (SOA) governance -- how to do it right, its scope, its future, and its impact. We'll be talking with Todd Biske, author of the new Packt Publishing book SOA Governance. Todd is also an enterprise architect at Monsanto. We'll also be looking back at this historic election week. The presidential election results are now in, and we're going to view the impact through an IT lens.

Our panel will focus on the IT policies that an Obama administration should pursue, as well as ruminate about what a cabinet-level IT director appointee might do and might accomplish. To help us dig into SOA governance and think about what a new national IT policy might be, we're joined by this week's panel. Please welcome Jim Kobielus, senior analyst at Forrester Research. Howdy, Jim.

Jim Kobielus: Hi Dana, hi everybody. Good morning and afternoon, wherever you are.

Gardner: And also Tony Baer, senior analyst at Ovum.

Tony Baer: Hey, Dana, good to be with you here again.

Gardner: Let's also welcome our guest. This is not his first appearance. He's been on several times before -- Todd Biske. Welcome back, Todd.

Todd Biske: Hi Dana. Thanks for having me back.

Gardner: Let's just dig right into your book, Todd. Tell us why you decided to write a book on SOA governance. This is not something that people bring up around the dinner table at night.

Biske: It certainly isn't. It's funny that I actually got to speak at the young authors' program at my kid's school, and thought that they probably don't care when they're in kindergarten or fourth grade about SOA governance, but it was a good time.

The reason that I decided to write a book on this is actually two-fold. First, in my work, both as a consultant and now as a corporate practitioner, I'm trying to see SOA adoption be successful. The one key thing I always kept coming back to, which would influence the success of the effort the most, was governance. So, I definitely felt that this was a key part of adopting SOA, and that if you don't do it right, your chances of success are greatly diminished.

The second part of it was when the publisher actually contacted me about it. I went out and looked, and I was shocked to find that there weren't any books on SOA governance. For as long as the SOA trend has been going on now, you would have thought someone would have already written a book on it. I said, "Well, here's an opportunity, and given that it's not really a technology book -- it's more of a technology process book -- it actually might have some shelf life behind it." So I decided, why not give it a try.

Gardner: I've heard this several times in many different places: that SOA governance should not be linear in relation to SOA, but should come at the beginning, the middle -- really simultaneous with any SOA infrastructure activities. Is that a central message of your book?

Key governance message

Biske: Yes, it is. The way I wrote the book was to actually use a management-fable style. There's a fictional story that goes throughout the book. It starts from step one, when there is some grassroots effort of someone interested in applying Web services technology, or REST, or whatever it is, and trying to broaden the scope of that, and how it expands into an enterprise initiative.

The key message in this is that the reason companies should be adopting SOA is that something has to change. There is something about the way IT is working with the rest of the business that isn't operating as efficiently and as productively as it could. And, if there is a change that has to go on, how do you manage that change and how do you make sure it happens? It's not just buying a tool, or applying some new technology. There has to be a more systematic process for how we manage that change, and to me that's all about governance.

Gardner: Now, risk avoidance is top of mind for a lot of IT folks as they embark on SOA activities. I suppose the risk on one side is that, if you don't do it enough, it doesn't take off, doesn't get traction, there is no adoption, and so your efforts and your investments are not well paid back.

The other risk is that you go too far too quickly and you have too much success with SOA. Perhaps it spins out of control, and complexity, lack of monitoring and enforcement become issues. The important thing here with risk is to find that balance. Governance, I suppose, is sort of a knob, if you will, on how to get and maintain that balance.

Biske: I would agree with that approach to it. The very first step that helps to manage that risk is defining the target state you want to get to. What's the desired behavior for your organization? I think the two scenarios you described both come about by not having an end state in mind.

If I just blindly say, "We're going to adopt SOA," and I tell all the masses, "Go adopt SOA," and everybody starts building services, I still haven't answered the question, "Why am I doing this, and what do I hope to achieve out of it?"

If I don't make that clear, I could easily wind up with a whole bunch of services and building a whole bunch of solutions. I'll have far more moving parts, which are far more difficult to maintain. As a result, I actually go in the opposite direction from where I needed to go. If you don't clearly articulate, "This is the desired behavior. This is why we're adopting SOA," and then let all of the policy decisions start to push that forward, you really are taking a big risk. It's an unknown risk. You're not managing it appropriately if you don't have an end state in mind.

Gardner: Before we go to our other panelists, maybe you could tell us about your fictional insurance company, which you call Advasco, I believe. Tell us the story inside this book.

Biske: Sure. It's a large financial conglomerate, starting out in the insurance industry, but they also expand through acquisition into other financial product areas.

Gardner: It's probably not as large as it was when you started writing the book, right?

Biske: Probably not, although I don't think I made any mention of mortgage-backed securities anywhere in the book. So, it's probably one of the institutions that have survived.

Biske: It starts out with an emphasis from the business leaders that they need to improve their position with their customers. They're continually getting dinged. They've got different sales staff coming at them with no idea about the different financial products that they hold. Sales people are competing with each other.

So, there's this initiative to say, "We need to improve our customer image," and that begins the path toward SOA by saying, "Let's focus on the customer, customer-related services, and build that up." But, it's only within their insurance line, not quite enterprise wide.

I use that example that when it tries to broaden beyond that, other people in the story come along and say, "Well, that's not my initiative. I am not going to participate in that," and it covers some of the political battle that you can get into in an organization, when everybody has a different set of priorities.

Over the course of the book, they begin to see the benefits of adopting this -- how it impacts their development efforts, and how that actually winds up delivering business value as a result. Along the way, they make a series of missteps that cover the aspects of traditional project governance, such as building services the right way. Then, branching out into, "How do we expand this beyond the initial set of customer services. We can't just build on services blindly."

So, there's a discussion around how to determine the right services to build. It gets into that pre-project governance area, which goes beyond IT and to the business side of the company.

The last piece of it talks about the runtime aspects. They go from internal services that are just used within the company, to exposing services outside the company. They have a situation where their systems start to fail and, because they didn't have effective runtime governance, they go through a large exercise to try to figure out the source of the problem and correct it. As a result of that, they uncover the need to have policies and governance around how the external parties that use their services are able to access them, and how to manage that piece of it.
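The runtime-governance gap in that story -- external consumers hitting a service with no stated policy on who may call it, or how often -- can be made concrete with a small sketch. This is a hypothetical Python illustration; the service name, consumer names, and quota figures are all invented for the example, not taken from the book:

```python
from dataclasses import dataclass, field

@dataclass
class ServicePolicy:
    """A runtime policy for one externally exposed service (illustrative only)."""
    service: str
    allowed_consumers: set = field(default_factory=set)
    calls_per_minute: int = 60  # per-consumer quota

    def authorize(self, consumer: str, calls_this_minute: int):
        # Unknown consumers are rejected outright.
        if consumer not in self.allowed_consumers:
            return (False, "consumer not registered")
        # Registered consumers are throttled at their quota.
        if calls_this_minute >= self.calls_per_minute:
            return (False, "quota exceeded")
        return (True, "ok")

# Hypothetical external-facing service with two registered partners.
policy = ServicePolicy("CustomerLookup", {"PartnerA", "PartnerB"}, 100)
print(policy.authorize("PartnerA", 10))    # registered, under quota -> (True, 'ok')
print(policy.authorize("PartnerC", 10))    # never registered -> rejected
print(policy.authorize("PartnerB", 100))   # at quota -> throttled
```

In practice, this kind of policy would live in an SOA management or registry product rather than in application code; the sketch only makes visible the decision points that the fictional company discovers it never defined.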

We cover the whole project lifecycle, as well as aspects outside of the project lifecycle, more of the portfolio planning, project decisioning, and getting into the more traditional areas of IT governance.

Different types of governance

Gardner: You've mentioned a couple of different types of governance. There's IT governance, runtime governance, and SOA governance. Is it right to look at it this way, that there are different types of governance that need to be federated? Or, should we think about it more like we need to get one umbrella governance, perhaps call it SOA governance, but have it take on more and more aspects of these other flavors?

Biske: There's kind of a federated or hierarchical approach to it, and there are two different ways of looking at federated governance. I want to come back to that. If you look at traditional IT governance, it is more about what projects we execute, how do we fund them, and structuring them appropriately, and that has a relationship to SOA governance. It doesn't go into the deep levels of decisions that are made within those projects.

If you were to try to set up a relationship, I would put IT governance, and even corporate governance, over the SOA governance aspects, at least, the technical side of it. The other piece of that is, when we talk about runtime governance, IT governance probably is focused on the runtime aspects of it. That's really a key part of this, making sure that our systems stay operational and that the operational behavior of the organization is the way we want it to be. So there is a relationship between them.

With the notion of federated governance, in addition to the hierarchical nature, we also have to look at the structure of the organization. If it's a very large organization with multiple lines of business -- and this is something that Jeanne Ross covered in her IT governance book -- you may have one line of business that is interested in growing very rapidly and another line of business that is in a cost-containment mode. We have to factor those two governance models into the decisions you make in how you leverage IT.

If you try to choose some standard technologies that you're going to use across the entire enterprise, you're going to run into problems, where you have the competing priorities of the one line of business, which is trying to move as quickly as possible and really energize that growth, being forced to use standard technologies where the processes may not have matured yet. That could slow them down. At the same time, the group that needs cost containment is probably all for those. So we have to balance that federation as well.

Gardner: It's a fascinating subject, and I do think it's part and parcel of SOA. It even goes beyond that, and we can get into that a little. I'd like to remind our listeners that your book is now available on amazon.com. Is that right?

Biske: That's right, Amazon.

Gardner: So, if I were to go to Amazon, I just do a search on SOA Governance, or "Todd Biske," or both and I might just easily find it. Is that right?

Biske: Yes, that is correct.

Gardner: Well, let's go to my panel. Tony Baer, do you agree that SOA governance is really so important, from soup to nuts, start to finish across the lifecycle, for SOA to be successful?

Baer: In the grand scheme of things, the answer would be yes, but you also have to look at what the scope of your SOA effort is going to be. Just this morning, I was reading a piece from one of our panelists, Dave Linthicum. He was saying, based on Gartner figures, that, from an enterprise-wide standpoint, interest in beginning or continuing SOA projects was going to drop pretty markedly this year. So, you need to look at it in terms of, "Are we looking at enterprise-wide transformation, or something more tactical?"

My sense is that, given the current economic environment, you're going to see a lot more in the way of tactical projects. From that standpoint, this hooks into an issue that we were discussing in an internal meeting yesterday as to what level you take governance. I want to take a closer look at this. I don't have any fully formed conclusions on this yet, but I think that most organizations are still looking at SOA in the coming year, but looking at it in a much more restricted scope, as opposed to an enterprise-wide transformation.

We need to look at some jump-starts in a sensible, sort of "lite," like, L-I-T-E governance. That's governance that basically federates, or is compatible with, the software-delivery lifecycle. And, when we get to runtime, it's compatible with whatever governance we have at runtime. That's an area that's very complicated, because you start dealing with different organizations that own different pieces of it.

The software-developing organization owns the architectural implementation of SOA. You have the business that owns the service, and you have the IT operations group that owns the data center runtime.

So, it's not a simple answer. Also, given the level of likely interest in SOA in the coming years, I think we're going to have to be a lot more tactical, and we are going to have to be a lot more light-footed to start off with.

Differing views of SOA's future


Gardner: I'd like to point out that the interpretation that SOA is going to be ratcheted back is not the only one out there. I was just on a webinar a few days back with Sandy Rogers from IDC. Some of her research shows that, in fact, SOA is ramping up and moving into that enterprise-wide phase. There might be economic impacts on certain vertical industries, but there is more than one way to look at SOA in terms of its adoption.

With that said, Jim Kobielus, what's your position on SOA governance, and do you think there is a need for an SOA Governance Lite at this time?

Kobielus: "SOA Governance Lite." I was rolling that phrase around in my head, as it came out of Tony's mouth. Yeah, what exactly would SOA Governance Lite constitute? Tony, I want to hear from you first. Do you have a definition?

Baer: Well, you're looking at potential for reuse, but you are not using it as a major criterion, because, at this point, you're not at any level of certainty as to whether you will be achieving reuse. This touches on an area that we have also discussed in this venue many, many times. The stated objective of SOA is often reuse, but it's really to achieve business agility. Therefore, whether we shoot for reuse initially or not, it will not necessarily be the ultimate measure of success for an SOA initiative. SOA Governance Lite would not emphasize the reuse angle very heavily to start off with. You may get to that at Stage 2 in your maturity cycle.

Kobielus: That's a good working definition of SOA Governance Lite, and I agree with that. Well, I agree with that from the point of view of just looking at the times that we're in right now, some pretty nasty times. The economy looks like it's going to go deeper down the tubes before it gets any better.

At Forrester, we like to pitch most of our research in terms of tying it to what we call our customers' success imperatives. That's a very optimistic way of looking at things, like, "You should invest in business intelligence (BI), data warehousing, and so forth, because it will help you succeed, be innovative and agile, and transform the organization." You can look at SOA as a success-oriented architecture.

The flip side right now is that you can look at it as a survivor-oriented architecture. You have a survival imperative in tough times. Do you know if your company is going to be around in a year's time? The issue right now in terms of SOA is, "You want to hold on and you want to batten down the hatches. You want to be as efficient as possible. You want to consolidate what you can consolidate in terms of hardware, software, licenses, competency centers, and so forth. And, you're probably going to hold the line on investment, further applications, and so forth."

For SOA, in this survival oriented climate that we're in right now, the issue is not so much reusing what you already have, but holding on to it, so that you are well positioned for the next growth spurt for your business and for the economy, assuming that you will survive long enough. Essentially, SOA Governance Lite uses governance as a throttle, throttling down investments right now to only those that are critical to survive, so that you can throttle up those investments in the future.

Gardner: What do you think, Todd Biske? Do we need a "lite" version of SOA governance? Is it also a way to scale up, as well as scale down, so that it's insurance regardless of the business environment?

Biske: I'm not a believer in the term "lite" governance. I'm of the opinion that you have governance, whether you admit it or not. An alternative view of governance is that it is a decision-rights structure. Someone is always making decisions on projects.

The notion of Governance Lite is that we're saying, "Okay, keep those decisions local to the project as much as possible. Don't bubble them up to the big government up there and have all the decisions made in a more centralized fashion." But, no matter what, you always have governance on projects. Whether it's done more at the grassroots level on projects, or by some centralized organization through a more rigid process, it still comes back to having an understanding of what's the desired behavior that we are trying to achieve.

Where you run into problems is when you don't have agreement on what that desired behavior is. If you have that clearly stated, you can have an approach where the project teams are fully enabled to make those decisions on their own, because they put the emphasis on educating them on, "This is what we are trying to achieve, both from a project perspective, as well as from an enterprise perspective, and we expect you to meet both of those goals. And if you run into a problem where you are unsure on priorities, bubble that decision up, but we have given you all the power, all the information you need. So, you're empowered to make those decisions locally, and keep things executing quickly."

Gardner: Todd, I want to just pick up quickly on one thing you mentioned, which is that you're doing governance whether you recognize it or not. Are there certain telltale signs that an organization is at the point where its governance is happening in stealth mode, and that it needs to start getting more methodical and concrete about how it addresses it? Are there any telltale signs, from either your fictional company or ones you've dealt with, that are harbingers of governance that needs to happen, and in a better way?

Biske: Telltale signs are when you are having meeting after meeting with people disagreeing and saying, "Well, my management told me this is my priority," and somebody else is saying, "My management is telling me this priority."

That can be at the project level, where you have the project manager telling the developers, "I don't care what the enterprise architects have told you, we've got to get this solution delivered by this date. Whatever you have to do to make that happen, go do it." Versus two more-senior managers in the organization debating who is going to fund this service or have their team manage the service once it's written.

I have both of those scenarios in the book, where there are meetings and we have people debating this. And, we have to have mediation that says, "Hey, this is our priority. This is the direction that's been given from the CIO or center of excellence. This is the priority behind it." And there are cases where you will have competing priorities, and you have to have a structure on how to resolve those situations, and who are the right people to get involved to say, "This priority takes precedence in this case."

Kobielus: What Todd said is exactly correct. If you're going to define SOA Governance Lite, it really has to be in more of a federated, decentralized, negotiated environment, where CTOs, CIOs, and lower-level IT people get together and collectively build coalitions around best practices.

Maybe one competency center takes the lead in a particular area of SOA, and another competency center from another business unit takes a lead in another area. And, collectively among themselves, laterally, they put together best practices that drive everybody, as opposed to the hierarchical, top-down, command-and-control SOA governance that we should regard as SOA governance "heavy," as the alternative.

Gardner: Todd, when you mentioned these meetings as harbingers of potential problems, it reminded me of Agile development, Scrum, and the role of a ScrumMaster. Are there any parallels between, on the development level, what people hope to accomplish through Agile and the use of Scrum, and what SOA governance can offer at a higher abstraction, at the services level, in helping businesses to accomplish their business goals?

Biske: Yeah, there are some parallels. Scrum is the ideal methodology for this, in that it emphasizes the need for the team to come together often, but in a small group, to keep everybody on the same page with what the targeted goals are. It then empowers them to go off and do the work and not spend all their time in meetings. The same holds true here. If you don't have that common vision and common understanding across all parties involved, people start to drift away and have their own opinions on the right thing to do. That's where you run into problems.

Gardner: Is there anyone else who wants to offer any comment, before we move on to the next subject?

Baer: I'd definitely agree with that. This is coming from someone who initially was very much a skeptic about Agile and all those very localized methodologies. Ultimately, if you take a look at what SOA is architecturally, it is loosely coupled, and it's supposed to foster business agility. That's very compatible with the ideals of Agile software development, which essentially looks at software development as very loosely coupled, but compatible, activities. So, I would agree there 1,000 percent with Todd.

Biske: Another parallel we can draw to this is the current economic crisis. The risk you have in becoming too federated, and getting too many decisions made locally, is that you lose sight of the bigger picture. You can look at all of these financial institutions that got into mortgage-backed securities and argue that their main focus was not the stability of the banking system; it was their bottom line and their stock price.

They lost sight of, "We have to keep the financial system stable." There was a risk in pushing too much down to the individual groups without keeping that higher vision and that balance between them. You can get yourself in a lot of trouble. The same thing holds true in Agile development. There are people who may be more critical of it, saying, "What if we go too far and let everybody do their own thing? We may struggle as an enterprise in bringing that all back together."

You have to have the right balance of some centralized viewpoint -- this is the direction we need to go -- but still empower the local teams that can execute efficiently.

Baer: Todd, I have a question for you there. There's a great example there with the current crisis. We need to have acceptable risk management and risk mitigation standards on an enterprise-wide level, while still providing empowerment to local teams to accomplish that goal in whichever way they see as compatible with the larger objective. How detailed and comprehensive should the vision, goal, or mission be defined from above, versus what's defined from below?

Biske: The key aspect is that you have to have something that's measurable at both levels. In one chapter in the book, I have an example where the CIO talks, but keeps it at this vague "we want to adopt SOA" type of vision. That's a rallying cry that people can jump behind, but it lacks the ability to specify where we want to go. I do think it needs to trickle down to a high-level measurement, saying, "We want to reduce the average time it takes to get a solution out by 10 percent," or, "We want to reduce the time it takes us to identify the cause of a production problem by 25 percent."

That's a measurable goal at a high level that we can continue to monitor. If we're not achieving it, we can start asking, "Why are we not getting there?" But that needs to drill further down into much more fine-grained policy that applies at those local levels. We can then come back and say, "You know what, this is our goal. We don't have a goal to improve the accuracy of our initial budget or initial schedule estimate on these projects."

You can use that when you're in the situation of project manager saying, "I've got to meet this date," versus the technical team saying, "But, if we don't do it this way, we may be inhibiting our agility down the road." So, having those measurable stated goals, if we're not achieving them, we can go back and adjust things. That's the key to it.
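Biske's notion of a trickle-down, measurable goal -- say, cutting average solution-delivery time by 10 percent, as in his example -- can be expressed as a simple check against observed project data. A hedged sketch in Python; the function name and the sample figures are invented for illustration:

```python
def goal_met(baseline_days, observed_days, target_reduction=0.10):
    """True if average delivery time improved by at least the target fraction."""
    baseline_avg = sum(baseline_days) / len(baseline_days)
    observed_avg = sum(observed_days) / len(observed_days)
    reduction = (baseline_avg - observed_avg) / baseline_avg
    return reduction >= target_reduction

# Hypothetical data: before governance, projects averaged 100 days; after, 85.
print(goal_met([90, 100, 110], [80, 85, 90]))   # 15% reduction -> True
print(goal_met([90, 100, 110], [95, 100, 99]))  # ~2% reduction -> False
```

The point of the sketch is only that the goal is computable: if the number comes back false, you have a concrete trigger for the "why are we not getting there?" conversation, rather than a vague sense that adoption isn't working.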

Gardner: Todd, we've talked a little bit about scaling governance down to a more tactical level. Recently, there has been a lot of discussion about cloud computing and sourcing services from different providers, through on-premises, private grid, utility, or cloud types of provisioning and infrastructure. It seems that there's a need not only for Governance Lite types of adjustment and flexibility, but perhaps governance maximum, where you might be starting to get services through hybrid environments. We've also heard recently from people who say that SOA capabilities and competencies are a precursor to being able to do cloud properly.

What's your position? If you do SOA Governance Lite, does that actually put you in an advantageous position to take advantage of cloud across a variety of internal or external sources?

Biske: I think I fall into the latter category. You have to have SOA in place to be able to make the right decisions around cloud computing. It's too bad that Joe McKendrick couldn't be on the line for this one, because he and I had a blog exchange, probably about three years ago. He made the statement that the adoption of SOA was going to increase the amount of outsourcing that went on, and this was before the cloud-computing term really got hot.

My counter to that was, I don't know that it's going to actually create any more or less outsourcing. What it should do, if we do it right, is lead to more successful use of cloud computing, or outsourcing of particular services within there.

If I know that I've got a particular service and I've got measurable goals on what I hope to achieve through those services, I can make the right decision on whether the best way to handle this service is to source it internally or to go to an outside source, and what the cost implications of that are.

Where we get ourselves into trouble is in hoping that going to cloud computing or to software as a service (SaaS) is going to make things better, when we don't have any way of measuring where we are today and what the factors are that are causing us to think negatively about it, as well as measuring it when we switch to a different sourcing model, to make sure that we're seeing the improvement we wanted to get out of that.

Having the right policies in place is what we have to achieve and is key, whether you host those services internally or externally.

Gardner: Now, this book is designed for practitioners. It's hands on. It's to help people actually get going and use governance properly. Is that right?

Biske: Yes.

Gardner: The name of the book is SOA Governance, it's by Todd Biske, and the publisher is Packt Publishing. Thanks for sharing your insights. I look forward to reading it.

Biske: Thank you, Dana.

National IT director?

Gardner: Well, let's move along to another governance issue. It's the government, and how governance would help its own IT apparatus. Billions of dollars are spent, perhaps not all of them most productively, on IT across many, many different government agencies. There's lots of redundancy, lots of overlap, not much reuse, siloed individual budgets, and individual hierarchies of authority and responsibility across these government agencies.

Now, we have a new administration, very much with a message of hope and transformation. It has also stated along the way that it plans to have a higher profile for IT, perhaps with a more holistic or horizontal take across the multiple dimensions of the government. We're faced with the question of what a cabinet-level IT director would do -- and what they should be focused on in terms of priorities.

Let's go first to Tony Baer. Tony, let's say you get a call in two weeks, and it's Barack Obama on the phone. He says, "I'm going to pay you your regular rates, but I want you to help me figure out what I'm going to do with this IT director guy." What advice would you give him?

Baer: I would tell him to go out and speak to Todd Biske first. Obviously, you need somebody who is going to -- and for want of something good, I am going to give you a cliché here -- just think outside the box. Basically, the government has long been a series of lots of boxes or silos, where you have these various fiefdoms. Previous attempts to unify architectures at the agency levels have not always been terribly successful.

As far back as the '80s, the Defense Department's continuous acquisition and lifecycle support (CALS) initiative was just so vague. It was almost impossible to answer the question, "What is a CAL?" This gets back to what Todd points out in his book. You need to have a clearly stated, measurable objective. So, the chief requirement for anybody who is a CIO, or who is going to step into some sort of CIO-type role at the cabinet level, above the agency level, is that they look to get more out of less.

That's essential, because there are going to be so many competing needs for so many limited resources. We have to look for someone who can formulate strategic goals -- and I'm going to have to use the term reuse -- to reuse what is there now, and federate what is there now, and federate with as light a touch as possible.

Gardner: It seems that the priorities that we're hearing out of the Democratic Party have to do with dealing with the economy, the financial crisis, energy, and also climate change. A lot of these really strike me as issues that have a great amount of technology as part of their solution. Jim Kobielus, when technology is better deployed and used, and perhaps modernized around SOA principles, how much of an impact can it have on these government problems?

Kobielus: If you look back at Obama's positions from about a year ago, All Things Tech, it was a fairly comprehensive, and deep set of positions on a broad range of tech topics. SOA, of course, figures into any of this positioning. I doubt that Obama, Biden, or anybody high-level in this coming administration, knows or cares what SOA is, but really it comes down to the fact that they're driving at many of the same overall objectives that also drive SOA initiatives.

One initiative is to break down silos in terms of information sharing between the government and the citizenry, but also silos internally within the government, between the various agencies, to help them better exchange information, share expertise, and so forth. In fact, if we look at their position statement called "Bring government into the 21st century," it really seems that it's part of the overall modernization push for IT in the government. They're talking really about a federated SOA governance infrastructure or a set of best practices.

Take the national CTO that Obama has been calling for over at least the past year or so -- it wasn't a huge issue on the campaign trail. This national CTO, it seems to me from the sketchy description, would essentially broker discussions between agency-level CTOs to get them to share best practices, and provide each other with a forum within which they can maximize reuse of key government IT infrastructure for multi-agency or nationwide initiatives.

Getting to your question, tech modernization in the government is absolutely essential. Reuse and breaking down silos between agencies is critically important. Brokering best practices across the agencies' specific, siloed IT and CTO organizations is critically important. It sounds to me as if Obama will be an SOA President, although he doesn't realize it yet, if he puts in place the approach that he laid out about a year ago, even though the IT infrastructure in the government is probably right now the least of his concerns.

Gardner: Well, he certainly seems to get the Internet. He's really mastered that better than any politician at that level before. So, I expect we'll see a lot of emphasis on how the government reaches out to its constituents, and also interacts among its various elements and building blocks, using the Internet in a loosely coupled, SOA sort of mentality.

Let's go again to Todd Biske. Todd, do you think that SOA is the right balm for this itch, the government's integration mess?

Biske: SOA definitely has a role in it. You could probably pick just about any technology and say that there is a potential for it to make things better. I definitely agree with the use of technology. I just brought up the Obama app on my iPhone, and I actually have all of his statements on the technology issues right here at my disposal, which is a great use of the technology.

But, he definitely has a challenge, and I am thinking from a governance perspective. He has taken step one, in the paragraph that Jim just mentioned, of bringing government into the 21st century. He has articulated that this is the way that he wants our systems to interact and share information with the constituents.

The next step is the policies that are going to get us there, and obviously he's time-boxed by the terms of his presidency. He's got a big challenge ahead of him, or at least the CTO that gets appointed has a huge challenge. Somehow, you have to break it down into what goals are going to be achievable in that timeframe.

As an example, I was at a recent SOA consortium meeting. I don't remember which branch of the government was actually presenting at the time, but they talked about the effort that they went through to get everybody on the same page for the goals of an SOA-related initiative, and they spent about 18 months in meetings trying to do that.

In terms of the fiefdoms that exist out there, there are some big challenges, and this may be a situation where we do need to have a bigger stick and a little bit heavier governance to get some of these things moving at a quicker pace. Certainly, the agencies all are trying to adopt SOA. It's just that the scope of their problem is something that's hard to fathom. So we'll just take it a step at a time.

Baer: I think his initial priorities will be not so much internal as external. I was just reading here that he just appointed a member to his transition team, someone who came from Interactive Corporation, which is of course very heavily invested in various online commerce sites and social sites.

But, I think his initial priorities are going to be more on areas such as net neutrality, and on the extension of broadband. The internal transformation to promote more federated and more transparent information sharing is going to become more of a Phase 2. He can't do everything at once when he takes office.

Biske: You know, I'm going to jump in now. One way to look at a president's style is whether they govern the same way they campaigned. One of the knocks against George W. Bush is that, once he took office, he continued to govern sort of like he campaigned. I heard similar criticism against Bill Clinton early on as well.

If the campaign that just concluded is any prelude, then Obama is going to rely heavily on the Internet, on the Web, on new media, on social networks, on spam, robocalls, and so forth, to reach out to, enfranchise, inform, alert, and possibly irritate and annoy the citizenry, as a way of breaking down the silo between the government and the citizens. I don't know if that's a good thing or a bad thing.

Gardner: It certainly shows that Obama seems to view technology as the solution, rather than technology as the problem. Let's get back to this CTO of the United States, and whether they'd have an internal focus, which is on how to get the government to behave better in terms of its IT use and productivity, or an external focus, which is how we could make America more competitive in terms of our broadband, standards, use scenarios, freeing up airwaves, and ensuring there's net neutrality, those sorts of things.

It seems to me that they are not incompatible. They should probably go hand in hand. But what kind of person should this be? If you were to look at the resume and try to come up with the right mix, is this person a politician? Is this someone who is a very good administrator, or who understands tech? All of the above? What would you look for in such a person? Should we go to private industry, to the heads of the larger vendors, for example, and try to recruit them? Any thoughts?

Baer: Two words: Al Gore, because first of all, obviously he knows tech. Secondly, he invented the Internet -- ha, ha! But, he knows tech and he's passionately concerned with it. Certainly, he's a politician. You have to be a politician in this world. He can't be the administrator. He's going to be a policy maker or broker.

Gardner: Al Gore, also on the board of directors of Apple Computer, is at the top of your list?

Baer: If we were to have a national CTO, which I am not entirely sure we should, under a Democratic president, I think that Gore would probably be on Obama's short list.

Gardner: How about you, Todd Biske? Do you have any, if not names, at least job descriptions that you think they need?

Biske: Well, I don't have any names, but I do think Al Gore is an intriguing one, and I like the reasoning behind that. I got some exposure to this at the last SOA Consortium meeting. Between the world of IT in the federal government and the world of IT in the typical corporation, which is more of my background, there are just huge differences.

You need to have somebody who has some experience dealing with technology in the federal government. As far as bringing in somebody who's a complete outsider to that world, I don't know how effective they would be, unless somebody gave them a really big stick. The political background is critical. Knowing that a lot of these changes, and some of the other things that we want to see happen, come back to governance, the better you are at politics, the more you can bridge the gap between the competing priorities. That's an important aspect of it as well.

Gardner: It's another feather in Al Gore's cap that he was deeply involved with the reinventing of government initiatives under the Clinton administration.

Baer: I couldn't agree with Todd more, in terms of the fact that you're going to need somebody with political savvy. In most ways, it's not like corporate environments, which have different forms of accountability. The fact is that, at the end of the day, you're dealing with government employees who are civil servants and are there primarily for the benefits. They are not there to earn huge amounts of money and take on the greater levels of risk of the private sector.

I'm thinking of a project that a colleague of mine is involved with right now with one of the big agencies in New York state government, a requirements management project. This is something that has been very heavily pushed by somebody, if not the CIO, somebody very close to his level. The business analysts are stonewalling it like crazy, and even though this has been directed from above, the permanent bureaucracy has just been very resistant to it. There's lots of inertia.

Not that he has voiced any interest in it, but you're not going to have somebody like Eric Schmidt from Google parachuting in. Someone like an Al Gore, or maybe someone a little less well known but equally experienced in the public arena, is going to be a much more suitable choice.

Gardner: I guess we can be assured that it won't be Carly Fiorina. All right, I would like to thank our panelists. We are out of time. We really enjoyed the discussion about SOA governance, and I think we will be coming back to this issue of national policy around IT quite a bit over the next couple of years on BriefingsDirect Analyst Insights. I want to thank our panelists, Jim Kobielus, senior analyst at Forrester Research. I appreciate your input.

Kobielus: Oh, no problem. I enjoyed it.

Gardner: Tony Baer, senior analyst at Ovum. Thanks again, Tony.

Baer: A great post-election session.

Gardner: I also want to thank our guest Todd Biske, an enterprise architect at Monsanto and the author of the new book, "SOA Governance." Thanks, and I hope you come back again, Todd.

Biske: Thanks, Dana. I really enjoyed the conversation.

Gardner: I also want to thank our charter sponsor for the BriefingsDirect Analyst Insights Edition podcast series, Active Endpoints, maker of ActiveVOS, the visual orchestration system.

This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Transcript of BriefingsDirect podcast on the role of governance in SOA adoption and the outlook for IT initiatives in the Obama administration, Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Friday, November 14, 2008

Interview: rPath’s Billy Marshall on How Enterprises Can Follow a Practical Path to Virtualized Applications

Transcript of BriefingsDirect podcast on virtualized applications development and deployment strategies as on-ramp to cloud computing.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on proper on-ramps to cloud computing, and how enterprises can best prepare to bring applications into a virtual development and deployment environment.

While much has been said about cloud computing in 2008, the use of virtualization is ramping up rapidly. Moreover, enterprises are moving from infrastructure virtualization to application-level virtualization.

We're going to look at how definition and enforcement of policies helps ensure conformance and consistency for virtual applications across their lifecycle. Managing virtualized applications holistically is an essential ingredient in making cloud-computing approaches as productive as possible while avoiding risk and onerous complexity.

To provide the full story on the virtualized applications lifecycle, its methods and benefits, I'm joined by Billy Marshall, the founder of rPath, as well as its chief strategy officer. Welcome to the show, Billy.

Billy Marshall: Thanks, Dana, great to be here.

Gardner: There is a great deal going on with technology trends: the ramp-up of virtualization, cloud computing, services-oriented architecture (SOA), the use of new tools, lightweight development environments, and so forth. We're also faced, unfortunately, with a tough economic climate, as a global recession appears to be developing.

What’s been interesting for me is that this whole technological trend-shift and this economic imperative really form a catalyst to a transformative IT phase that we are entering. That is to say, the opportunity to do more with less is really right on the top of the list for IT decision-makers and architects.

Tell me, if you would, how some of these technology benefits and the need to heighten productivity fit and come together.

Marshall: Dana, we've seen this before, and specifically I have seen it before. I inherited the North America sales role at Red Hat in April of 2001, and of course shortly thereafter, in September of 2001, we had the terrible 9/11 situation that changed a lot of the thinking.

The dot-com bubble burst, and it turned out to be a catalyst for driving Linux into a lot of enterprises that previously weren't thinking about it. They began to question their assumptions about how much they were willing to pay for certain types of technology, and in this case it happened to be Unix technology. In most cases they were buying from Sun, and that became the subject of a great deal of scrutiny. Much of it was replaced in the period from 2001 to 2003, and into 2004, with Linux technology.

We're once again facing a similar situation now, where people, enterprises specifically, are taking a very tough look at their data center expenditures and the expansions that they're planning for the data center. I don't think there's any doubt in people's minds that they are getting good value out of doing things with IT, and a lot of these businesses are driven by information technology.

At the same time, this credit crunch is going to have folks looking very hard at large-scale outlays of capital for data centers. I believe that will be a catalyst for folks to consider a variable-cost approach to using infrastructure as a service, or perhaps platform as a service (PaaS). All these things roll up under the notion of cloud, as it relates to being able to get it when you need it, get it at variable cost, and get it on demand.

Gardner: Obviously, there's a tremendous amount of economic value to be had in cloud computing, but some significant risks as well. Virtualization increases the utilization of servers and provides the dynamic ability to fire up instances of platforms, runtimes, and actual applications with a stack beneath them. It really allows companies to deploy more applications with a lower capital expenditure upfront, and also to cut their operating costs. Then, administrators and architects can manage many more applications, if it's automated and governed properly. So let's get into this notion of doing it right.

When we have more and more applications and services, there is, on one side, a complexity problem. There is also this huge utilization benefit. What's the first step in getting this right in terms of a lifecycle and a governance mentality?

Marshall: Let's talk first about why utilization was a problem without virtualization. Let's talk about old architecture for a minute, and then we can talk about, what might be the benefits of a new architecture if done correctly.

Historically, in the enterprise, you would get somewhere between 15 and 18 percent utilization for server applications. So, there are lots of cycles available on a machine, and you may have two machines running side-by-side, running two very different workloads whose cycles are very different. Yet, people wouldn't run multiple applications on the same server in most cases, because of the lack of isolation when you are sharing processes in the operating system on the server. Very often, these things would conflict with one another.

During maintenance, what was required for one would conflict with the other. It's just a very challenging architecture for trying to run multiple things on the same physical, logical host. Virtualization provides isolation for applications, with each running in its own logical server, its own virtual server.

So, you could put multiples of them on the same physical host and you get much higher utilization. You'll see folks getting on the order of 50, 70, or 80 percent utilization without any of the worries about the conflicts that used to arise when you tried to run multiple applications sharing processes on the same physical host with an operating system.
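The consolidation arithmetic behind those numbers can be sketched in a few lines. The 15 percent figure is the one cited above; the 75 percent target is a hypothetical planning number, not a quote from the podcast:

```python
def consolidation_ratio(per_app_utilization, target_utilization):
    """How many isolated workloads can share one physical host.

    per_app_utilization: CPU fraction one application typically uses
    target_utilization: fraction we are willing to drive the host to
    """
    return int(target_utilization / per_app_utilization)

# Applications idling at 15 percent utilization, consolidated onto hosts
# driven to 75 percent, work out to five VMs per physical server.
print(consolidation_ratio(0.15, 0.75))
```

The same function makes the sensitivity obvious: the lower the per-application utilization, the bigger the consolidation win.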

That's the architecture we're evolving toward, but if you think about it, Dana, what virtualization gives you from a business perspective, other than utilization, is an opportunity to decouple the definition of the application from the system that it runs on.

Historically, you would install an application onto the physical host with the operating system on it. Then, you would work with it and massage it to get it right for that application. Now, you can do all that work independent of the physical host, and then, at run-time, you can decide where you have capacity that best meets needs of the profile of this application.

Most folks have simply gone down the road of creating a virtual machine (VM) with their typical, physical-host approach, and then doing a snapshot, saying, "Okay, now I worry about where to deploy this."

In many cases, they get locked into the hypervisor or the type of virtualization they may have done for that application. If they were to back up one or two steps, they might say, “Boy, this really does give me an opportunity to define this application in a way that, if I wanted to run it on Amazon's EC2, I probably could, but I could also run it in my own data center.”

Now, I can begin sourcing infrastructure a little more dynamically, based upon the load that I see. Maybe I can spend less on the capital associated with my own data center, because, with my application defined as this independent unit, separate from the physical infrastructure, I'll be able to buy infrastructure on demand from Amazon, Rackspace, GoGrid, and the other folks who are now offering up these virtualized clouds of servers.

Gardner: I see. So, we need to rethink the application, so that we can run that application on a variety of these new sourcing options that have arisen, be they on premises, off premises, or perhaps with a hybrid.

Marshall: I think it will be a hybrid, Dana. I think for very small companies, who don't even have the capital option of putting up a data center, they will go straight to an on-demand cloud-type approach. But, for the enterprise that is going to be invested in the data center anyway at some level, they simply get an opportunity to right-size that infrastructure, based upon the profile of applications that really need to be run internally, whether for security, latency, data-sensitivity, or whatever reason.

But, they'll have the option to make portable the things that are portable, as it relates to their security and performance profile and the nature of the workload. We saw this very same thing with Linux adoption post-9/11. The things that could be moved off of Solaris easily were moved off. Some things were hard to move, and they didn't move them. It didn't make sense to move them, because it would cost too much.

I think we're going to see the same sort of hybrid approach take hold. Enterprise folks will say, “Look, why do I need to own the servers associated with doing the monthly analysis of the log files for access to this database, for a compliance reason, when, for the rest of the month, that server just sits idle? Why do I want to do that for that type of workload? Why do I want to own the capacity and have it be captive for that type of workload?”

That would be a perfect example of a workload where you say, "I'm going to crunch those logs once a month up on Amazon or Rackspace or some place like that, I'm going to pay for a day and a half of capacity, and then I'm going to turn it off."
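As a back-of-the-envelope illustration of that once-a-month workload, with entirely hypothetical rates (neither figure comes from the podcast or from any provider's price list):

```python
# Hypothetical rates for illustration only; neither figure is from the
# podcast or any provider's published pricing.
OWNED_SERVER_MONTHLY = 400.0  # amortized capital, power, and admin per month
ON_DEMAND_HOURLY = 0.40       # per instance-hour, billed only while running

def monthly_on_demand_cost(hours_used, hourly_rate=ON_DEMAND_HOURLY):
    """Cost of renting capacity only for the hours actually used."""
    return hours_used * hourly_rate

# A day and a half of log crunching per month, as in the example above:
burst_cost = monthly_on_demand_cost(36)
print(burst_cost)                         # 14.4
print(burst_cost < OWNED_SERVER_MONTHLY)  # True
```

The point isn't the exact numbers; it's that a workload idle for 28 days a month only has to pay for the 36 hours it actually runs.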

Gardner: So, there's going to be a decision process inside each organization, probably quite specific to each organization, about which applications should be hosted in which ways. That might include internal and external sourcing options. But, to be able to do that, you have to approach these applications thoughtfully, and you also have to create your new applications with this multi-dimensional hosting possibility set, if you will, in mind. What steps need to be taken at the application level for both the existing and the newer apps?

Marshall: For the existing applications, you don't want to face a situation, in terms of looking at the cloud you might use, where you have to rewrite your code. This is a challenge that folks are facing with things such as Google's App Engine or even Salesforce's Force.com. With that approach, it's really a platform, as opposed to an on-demand infrastructure. By a platform, I mean there is a set of development tools and a set of application-language expectations that you use in order to take advantage of that platform.

For legacy applications, there's not going to be much opportunity. For those folks, I really don't believe they'll consider, "Gee, I'll get so much benefit out of Salesforce, I'll get so much benefit out of Google, that I'm going to rewrite this code in order to run it on those platforms."

They may actually consider them for new applications that would get some level of benefit by being close to other services that perhaps Google, or, for that matter, Salesforce.com might offer. But, for their existing applications, which are mostly what we are talking about here, they won't have an opportunity to consider those. Instead, they'll look at things such as Amazon's Elastic Compute Cloud, and things that would be offered by a GoGrid or Rackspace, folks in that sort of space.

The considerations for them are going to be, number one, that right now the easiest way to run these things in those environments requires an x86 architecture. There is no PA-RISC, SPARC, or IBM Power architecture there. They don't exist there. So, A, it's got to be x86.

And B, the most prevalent cases of applications running in these spaces are run on Linux. The biggest communities of use and biggest communities of support are going to be around Linux. There have been some new enhancements around Microsoft on Amazon. Some of these folks, such as GoGrid, Rackspace, and others, have offered Windows hosting. But here's the challenge with those approaches.

For example, if I were to use Microsoft on Amazon, what I'm doing is booting a Microsoft Amazon Machine Image (AMI), an operating system AMI on Amazon. Then I'm installing my application up there in some fashion. I'm configuring it to make it work for me, and then I'm saving it up there.

The challenge with that is that all that work you just went through to get that application tested, embedded, and running up there on Amazon in the Microsoft configuration that Amazon is supporting is only useful on Amazon.

So, a real consideration for all these folks who are looking at potentially using the cloud is, "How is it that I can define my application as a working unit, and then be able to choose between Amazon or my internal architecture that perhaps has a VMware basis, or a Rackspace, GoGrid, or BlueLock offering?" You're not going to be able to do that if you define your cloud application as running on Windows on Amazon, because that Amazon AMI is not portable to any of these other places.

Gardner: Portability is a huge part of what people are looking for.

Marshall: Yes. A big consideration is: Are you comfortable with Linux technology or other related open-source infrastructure, which has a licensing approach that's going to enable it to truly be portable for you? And, by the way, you don't really want to spend the money for a perpetual license to Windows, for example, even if you could take your Windows up to Amazon.

Taking your own copy of Windows up there isn't possible now. It may be possible in the future, and I think Microsoft will eventually have a business, whereby they license, in an on-demand fashion, the operating system as a hosting unit to be bound to an application, instead of an infrastructure, but they don't do that now.

So, another big consideration for these enterprises now is do I have workloads that I'm comfortable running on Linux right now, so that I can take a step forward and bind Linux to the workload in order to take it to where I want it to go.

Gardner: Tell us a little bit about what rPath brings to the equation?

Marshall: rPath brings a capability around defining applications as virtual machines (VMs), going through a process whereby you release those VMs to run on whichever cloud of your choosing, whether a hypervisor virtualized cloud of machines, such as what's provided by Amazon, or what you can build internally using Citrix XenSource or something like VMware's virtual infrastructure.

It then provides an infrastructure for managing those VMs through their lifecycle for things such as updates for backup and for configuration of certain services on the machines in a way that's optimized to run a virtualized cloud of systems. We specialize in optimizing applications to run as VMs on a cloud or virtualized infrastructure.

Gardner: It seems to me that that management is essential in order not to just spin out of control and become too complex with too many instances, and with difficulty in managing the virtual environments, even more so than the physical one.

Marshall: It's the lack of friction in being able to quickly deploy a virtualized environment, versus the amount of friction you have in deploying a physical environment. When I say "friction," I mean it literally. With a physical environment, somebody has to go grab a server, slam it in a rack, hook up power and networking, and allocate it to your account somehow. There is just a lot of friction in procuring, acquiring, and making that capacity available.

In the virtualized world, if someone has already deployed the capital, the physical capital, they can give you access to the virtual capital, the VM, very, very quickly. It's a very quick step to give you that access, but that's a double-edged sword. The reason I say it's a double-edged sword is because if it's really easy to get, people might take more. They might need more already, and they've been constrained by the friction in the process. But, taking more also means you've got to manage more.

You run the risk, if you're not careful. If you make it easy, low friction and low cost, for people to get machines, they will acquire the machine capacity, they will deploy the machine capacity, they will use the machine capacity, but then they will be faced with managing a much larger set of machine capacity than maybe they were comfortable with.

If you don't think about how to make these VMs more manageable than the physical machines to begin with, that lack of friction can be the beginning of a very slippery slope toward unmanageability and risk associated with security issues that you can't get your arms around, just because of how broadly these things are deployed.

It can lead to a lot of excess spending, because you are deploying machines that you thought would be temporary, but you never take them back down because, perhaps, it was too difficult to get them configured correctly the first time. So, there are lots of challenges that this lack of friction brings into play that the physical world sort of kept a damper on, because there was only so much capacity you could get.
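One minimal guard against that kind of sprawl is to record an intended teardown date when a VM is provisioned and flag anything that has outlived it. The inventory format below is an assumption made up for illustration, not anything from rPath or the podcast:

```python
from datetime import date

# Minimal sprawl control: record an intended teardown date when a VM is
# provisioned, then flag anything that has outlived it. The inventory
# format here is an assumption made up for illustration.
inventory = [
    {"name": "log-cruncher-01", "teardown": date(2008, 11, 1)},
    {"name": "web-frontend-02", "teardown": date(2009, 3, 1)},
]

def overdue_vms(inventory, today):
    """Names of VMs whose planned teardown date has passed."""
    return [vm["name"] for vm in inventory if vm["teardown"] < today]

print(overdue_vms(inventory, date(2008, 11, 14)))  # ['log-cruncher-01']
```

Even a check this simple turns "temporary" machines that nobody remembered to decommission into a daily report instead of a silent cost.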

Gardner: It seems that a set of policies and some level of automation need to be brought to the table here, something that will cross between applications and operations management, and something that they can both understand. The old system of just handing things off, without really any kind of a lifecycle approach, simply won't hold up.

Marshall: There are a couple of considerations here. With these things being available as services outside of the IT organization, the IT organization has to be very careful that they find a way to embrace this with their lines of business. If they don't, if they say no to the line-of-business guys, the line-of-business guys are just going to go swipe a credit card on Amazon and say, "I'll show you what no looks like. I will go get my own capacity, I don't need you anymore."

We actually saw some of this with software as a service (SaaS), and it was a very tense negotiation for some time. With SaaS it typically began with the head of sales, who went into the CEO's office, and said, "You know what? I've had it with the CIO, who is telling me I can't have the sales-force automation that I need, because we don't have the capacity or it's going to take years, when I know, I can go turn it on with Salesforce.com right now."

And do you know what the CEO said? The CEO said, “Yes, go turn it on.” And he told the CIO, "Sit down. You're going have to figure out a way to integrate what's going on with Salesforce.com with what we're doing internally, because I am not going to have my sales force constrained."

You're going to see the same thing with the line-of-business guys as it relates to these services being provided. Some smart guy inside Goldman Sachs is going to say, "Look, if I could run 200 Monte Carlo simulation servers over the next two days, we'd have an opportunity to trade in the commodities market. And, I'm being told that I can't have the capacity from IT. Well, that capacity on Amazon is only going to cost me $1,000. I'm taking it, I'm trading, and we're going to make some money for the firm."

What's the CEO going to say? The CEO isn't going to say no. So, the folks in the IT organization have to embrace this and say, "I'll tell you what. If you are going to do this, let me help you do it in a way that takes risk out for the organization. Let me give you an approach that allows you to have this friction-free access to the infrastructure, while also preserving some of the risk-mitigation practices and some of the control practices that we have. Let me help you define how you are going to use it."

There really is an opportunity for the CIO to say, "Yes, we're going to give you a way to do this, but we are going to do it in a way that it's optimized to take advantage of some of the things we have learned about governance and best practices in terms of deploying applications to an operational IT facility."

Gardner: So, with policy and management, in essence, the control point for the relationship between the applications, perhaps even the relationship between the line-of-business people and the IT folks, needs to be considered with the applications themselves. It seems to me that you need to build them for this new type of management, policy, and governance capability?

Marshall: The IT organization is going to need to take a look at what they've historically done with this air-gap between applications and operations. I describe it as an air-gap, because typically you had this approach, where an application was unit-test complete. Then, it went through a testing matrix -- a gauntlet, if you will -- to go from Dev/Test/QA to production.

There was a set of policies that were largely ingrained in the minds of the release engineers, the build masters, and the folks who were responsible for running it through its paces to get it there. Sometimes, there was some sort of exception process for using certain features that maybe hadn't been approved in production yet. There's an opportunity now to have that process become streamlined by using a system. We've built one, a build system for VMs, if you will, into which they can codify these processes and have the policies be enforced at build time, so that you are constructing for compliance.

With our technology, we enforce a set of policies that we learned were best practices during our days at Red Hat constructing an operating system. We've got some 50 to 60 policies that get enforced at build time, when you are building the VM. They're things like not allowing any dangling symlinks, and closing the dependency loop around all of the binary packages that get included. There could be other, more corporate-specific policies that need to be included, and you would write those policies into the build system in order to build these VMs.

It's very similar to the way you put policies into your application lifecycle management (ALM) build system when you were building the application binary. You would enforce policy at build time to build the binary. We're simply suggesting that you extend that discipline of ALM to include policies associated with building VMs. There's a real opportunity here to close the gap between applications and operations by having much of what is typically been done in installing an application and taking it through Dev, QA and Test, and having that be part of an automated build system for creating VMs.
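As one concrete example of a build-time check like the dangling-symlink rule mentioned above (this is an illustrative sketch; rPath's actual policy implementation isn't described in the podcast):

```python
import os

def dangling_symlinks(image_root):
    """Return paths of symlinks under image_root whose targets don't resolve.

    A sketch of one build-time policy of the kind described above; run it
    against an assembled VM image tree and fail the build if it returns
    anything.
    """
    bad = []
    for dirpath, dirnames, filenames in os.walk(image_root):
        # Broken symlinks show up in filenames, and symlinks to directories
        # in dirnames, so check both lists.
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) and not os.path.exists(path):
                bad.append(path)
    return bad
```

Wired into a build system, a check like this turns a tribal-knowledge rule into an enforced gate: the VM simply cannot be released while the list is non-empty.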

Gardner: All right. So, we're really talking about enterprise application virtualization, but doing it properly, doing it with a lifecycle. This provides an on-ramp to cloud computing and the ability to pick and choose the right hosting and/or hybrid approaches as these become available.

But we still come back to this tension between the application and the virtual machine. The application traditionally is on the update side and the virtual machine traditionally on the operations, the runtime, and the deployment side.

So we're really talking about trying to get a peanut-butter cup here. It's Halloween, so we can get some candy talk in. We've got peanut butter and chocolate. How do we bring them together?

Marshall: Dana, what you just described exists because people are still thinking about the operating system as something that they bind to the infrastructure. In this case, they're binding the operating system to the hypervisor and then installing the application on top of it. If the hypervisor is now this bottom layer, and if it provides all the management utilities associated with managing the physical infrastructure, you now get an opportunity to rethink the operating system as something that you bind to the application.

I'll give you a story from the financial services industry. I met with an architect who had set up a capability for their lines of business to acquire VMs as part of a provisioning process that allows them to go to a Web page, put in an account number for their line of business, request an environment -- a Linux/Java environment or a Microsoft .NET environment -- and within an hour or so they will get an e-mail back saying, "Your environment or your VMs are available. Here are the host names."

They can then log on to those machines, and a decentralized IT service charges the lines of business based upon the days, weeks, or months they used the machine.

I said, "Well, that's very clever. That's a great step in the right direction." Then I asked, "How many of these do you have deployed?" And he said, "Oh, we've got about 1,500 virtual machines deployed over the first nine months." I said, "Why did you do this to begin with?"

And he said, “We did it to begin with because people always requested more than they needed, because they knew they would have to grow. They would go ahead and procure machines well ahead of their actual need for the processing power. We did this so that they could feel confident about procuring extra capacity on demand, as needed by the group.”

I said, “Well, you know, I'd be interested in the statistic on the other side of that challenge. You want them to procure only what they need, but you also want them to give back what they don't need.” He kind of looked at me funny, and I said, “Well, what do the statistics look like on the give-backs? I mean, how many machines have you ever gotten back?”

And, he said, “Not a one ever. We've never gotten a single machine back ever.” I said, “Why do you think that it is?” He said, “I don't know and I don't care. I charge them for what they're using.”

I said, “Did you ever stop to think that maybe the reason they're not giving them back is the time from when you give them the machine to the time it's actually operational for them? In other words, what it takes them to install the application, configure all the system services, and make the application tuned and productive on that generic host you gave them. Did you ever think that maybe the reason they're not giving it back is that if they had to go through that again, it would be a real pain in the neck?”

So I asked him, “What's the primary application you're running here, anyway?” He said, “Well, 900 of these systems are tick data -- Reuters ticker tape data.” I said, “That's not even useful on the weekends. Why don't they just give them all back on the weekends, so you can shut down a big hunk of the datacenter and save on power and cooling?” He said, “I haven't even thought about it, and I don't care, because it's not my problem.”

Gardner: Well, it's an awfully wasteful approach, where supply and demand are in no way aligned. The days of being able to overlook those wasteful practices are pretty much over, right?

Marshall: There's an opportunity now, if they would think about this problem and say, “Why am I giving them this Linux/Java environment and then having them run through a gauntlet to make it work on every machine? Instead, based upon a system and some policies I've given them, they could attach the operating system and configure all of this stuff independent of the production environment. Then, at run-time, these things get deployed and are actually productive in a matter of minutes, instead of hours, days, or months.”

In that way, they feel comfortable giving me the capacity back when they're not using it, because they know they can quickly get the application up and running again, configured the way it should be, in a very scalable, very elastic way.

That elasticity benefit has been overlooked to date, but it's a benefit that's going to become very important as people do exactly what you just described, which is become sensitive to the notion that a VM idling out there and consuming space is just as bad as a physical machine idling out there and consuming space.

Gardner: I certainly appreciate the problem, the solution set, and the opportunity for significant savings and agility. That is to say, you can move your applications and get them up fast, but in the long term you will also be able to cut your overall costs, because of better utilization and the elasticity to match your supply and demand as closely as possible. The question then is how to get started. How do you move to take advantage of this? Tell us a little bit more about the role that rPath plays in facilitating that.

Marshall: The first thing to do, Dana, is to profile your applications and determine which ones have sort of lumpy demand, because you don't want to work on something that needs to be available all the time and has pretty even demand. Let's go for something that really has lumpy demand, so that we can do the scale-up and give back and get some real value out of it.

So, the first thing to do is an inventory of your applications: “What do I have out here that has lumpy demand?” Pick a couple of candidates. Realistically, it's going to be hard to do this without running Linux. It needs to be a workload that will run on Linux, whether you have run it on Linux historically or not. Probably, it needs to be something written in Java, C, C++, Python, Perl, or Ruby -- something you can move to a Linux platform -- something that has lumpy demand.
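One simple way to put a number on "lumpy demand" during that inventory is the coefficient of variation of a usage series: higher means lumpier. The numbers below are invented for illustration:

```python
from statistics import mean, pstdev

def lumpiness(samples):
    """Coefficient of variation of a demand series (std dev / mean)."""
    avg = mean(samples)
    return pstdev(samples) / avg if avg else 0.0

# Hypothetical hourly request counts for two applications.
steady = [100, 105, 98, 102, 101, 99]   # even demand: poor candidate
batch = [5, 5, 900, 850, 5, 5]          # lumpy demand: good candidate

print(round(lumpiness(steady), 2))   # 0.02
print(round(lumpiness(batch), 2))    # 1.39
```

A workload near the top of such a ranking is the one worth packaging for elastic deployment first.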

The first step that we get involved in is packaging that application so that it's optimized to run in a VM. One of rPath's values here is that the operating system becomes optimized to the application, and the footprint of the operating system, and therefore its management burden, shrinks by about 90 percent.

When you bind an operating system to an application, you're able to eliminate anything that is not relevant to that application. Typically, we see a surface area shrinking to about 10 percent of what is typically deployed as a standard operating system. So, the first thing is to package the application in a way that is optimized to run in a VM. We offer a product called rBuilder that enables just that functionality.
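That footprint reduction comes from computing the transitive dependency closure of just the application's packages instead of shipping a full distribution. This sketch uses invented package data and is only an analogy for what a tool like rBuilder resolves:

```python
def dependency_closure(roots, deps):
    """Everything the application transitively needs, and nothing else."""
    needed, stack = set(), list(roots)
    while stack:
        pkg = stack.pop()
        if pkg not in needed:
            needed.add(pkg)
            stack.extend(deps.get(pkg, []))
    return needed

# Hypothetical package graph: a "standard" image ships all seven packages.
deps = {
    "myapp": ["jre", "glibc"],
    "jre": ["glibc", "zlib"],
    "glibc": [],
    "zlib": [],
    "x11": ["glibc"],
    "cups": ["glibc"],
    "sendmail": ["glibc"],
}
app_image = dependency_closure(["myapp"], deps)
print(sorted(app_image))   # ['glibc', 'jre', 'myapp', 'zlib']
```

Here the app-bound image keeps four of seven packages; in a real distribution the ratio Marshall cites is closer to one in ten.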

The second is to determine whether you're going to run this internally, on some sort of virtualized infrastructure that you've made available in your own shop through VMware, Xen, or even Microsoft Hyper-V for that matter, or whether you're going to use an external provider.

We suggest that when you get started with this, you should begin experimenting with an external provider as soon as possible. The reason is so that you don't put in place a bunch of crutches that are only relevant to your environment and that will prevent the application from ever going external. You can never drop the crutches associated with your own hand-holding processes that can only happen inside your organization.

We strongly suggest that one of the first things you do, as you do this proof of concept, is actually do it on Amazon or another provider that offers a virtualized infrastructure. Use an external provider, so that you can prove to yourself that you can define an application and have it be ready to run on an infrastructure that you don't control, because that means that you defined the application truly independent of the infrastructure.

Gardner: And, that puts you in a position where eventually you could run that application on your local cloud or virtualized environment and then, for those lumpy periods when you need that exterior scale and capacity, you might just look to that cloud provider to support that application in that fashion.

Marshall: That's exactly right. Whereas, if you prove all this out internally only, you may come across a huge "oops" that you didn't even think about as you try to move it externally. You may find that you've driven yourself into an architectural box canyon that you just can't get out of.

So, we strongly suggest to folks that you experiment with this proof of concept using an external provider, and then bring it back internally and prove that you can run it internally, after you've proven that you can run it externally.

Gardner: Your capital costs for that are meager or nothing, and then your operating costs will benefit in the long run, because you will have those hybrid options.

Marshall: Another benefit of starting externally is that the cost at the margin is so cheap. It's between 10 and 50 cents per CPU-hour to set up the Amazon environment and run it. If you run it for an hour, you pay the 10 cents; it's not like you have to commit to some pre-buy or some amount of infrastructure. It's truly on demand: you pay for what you actually use. So, from a cost perspective, there's no reason not to run your first instance of an on-demand, virtualized application externally.
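The cost arithmetic is worth making concrete. Using the 10-to-50-cent-per-CPU-hour range Marshall quotes (the helper function is hypothetical, not a billing API):

```python
def on_demand_cost(hours, rate_per_cpu_hour, cpus=1):
    """Pure pay-as-you-go: hours x rate x CPUs, with no pre-buy commitment."""
    return hours * rate_per_cpu_hour * cpus

# A one-hour, single-CPU proof of concept costs a dime to half a dollar.
print(on_demand_cost(1, 0.10))
print(on_demand_cost(1, 0.50))
# A 40-hour work week on four CPUs at the low rate is still only $16.
print(on_demand_cost(40, 0.10, cpus=4))
```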

Gardner: And, if you do it in this fashion, you're able to have that portability. You can take it in, and you can put it out. You've built it for that and there is no hurdle you have to overcome for that portability.

Marshall: If you prove to yourself that you can do it, that you can run it in both places, you've architected correctly. There's a trap here. If you become dependent on something associated with a particular infrastructure set or a particular hypervisor, you preclude any use in the future of things that don't have that hypervisor involved.

Gardner: Another thing that people like about the idea of virtualizing applications is that you get a single image of the application. You can patch it, manage it, and upgrade it, and that's done once; it doesn't have to be delivered out to a myriad of machines, with configuration issues and so forth. Is that the case in this hybrid environment as well? Can you have this single image for the amount of capacity you need locally, and then draw the extra capacity at peak times from an external cloud?

Marshall: I think you've got to be careful here, because I don't believe that one approach is going to work in every case. I'll give you an example. I was meeting with a different financial services firm, who said, “Look, for our biggest application, we've got -- I think it was 1,500 or 2,000 -- instances of that application running.” And he said, “I'm not going to flood the network with 1,500 new machines when I have to make changes to that. So, we're going to upgrade those VMs in place.”

We're going to have each one of them access some sort of lifecycle management capability. That's another benefit, and we provide benefits in two ways. One, we've got a very elegant system for delivering maintenance and updates to a running system. And two, since you've only got 10 percent of the operating system there, you're patching one-tenth as often, because the operating system is typically the catalyst for most of the patching associated with security issues and other things.

I think there are going to be two things happening here. People are going to maintain these releases of applications as VMs, which you may want to think of as a repository of available application VMs that are in a known good state, and that are up-to-date and things like that.

In some cases, whenever new demand needs to come online, the known good state is going to be deployed, and it won't need patching after it's deployed. But at the same time, there will be deployed units already running that they will want to patch, and they need to be able to do that without having to dump the data, back up the data, kill the image, bring a new image up, and then reload the data.

In many cases, you're going to want to see these folks actually be able to patch in place as well. The beauty of it is, you don't have to choose. They can be both. It doesn't have to be one or the other.

Gardner: So that brings us back to the notion of good management, policies, governance, and automation, because of this lifecycle. It's not simply a matter of putting that application up, and getting some productivity from utilization, but it's considering this entire sunrise-to-sunset approach as well.

Marshall: Right, and that also involves having the ability to do some high-quality scaling on-demand to be able to call an API to add a new system and to be able to do that elegantly, without someone having to log into the system and thrash around configuring it to make it aware of the environment that it's supposed to be supporting.

There are quite a few considerations here. When you're defining applications as VMs, and you're defining them independent of where they run, you're not going to use any crutches associated with your internal infrastructure, so that you're able to elastically scale up and scale back.

There are some interesting new problems that come up here that are also new opportunities to do things better. You need to architect in a way that is, A, optimized for virtualization. In other words, if you're going to make it easy to get extra machines, you'd better make those machines easy to manage, and you'd better make them manageable on the hypervisor they're running on. And, B, you need a way to add capacity in an elegant way that doesn't require folks logging in and doing a lot of manual work in order to scale these things up.
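That "add capacity without logging in" requirement implies a programmable scale-up/scale-back loop. Here is a minimal sketch; the `launch` and `terminate` stubs stand in for a real provider's provisioning API, and all names are hypothetical:

```python
import math

def scale(pool, demand, capacity_per_vm, launch, terminate):
    """Grow or shrink a VM pool to match current demand."""
    needed = max(1, math.ceil(demand / capacity_per_vm))
    while len(pool) < needed:
        pool.append(launch())       # a real call would hit the provider API
    while len(pool) > needed:
        terminate(pool.pop())       # give unneeded capacity back
    return pool

# Stub provisioning calls for the demo.
counter = {"n": 0}
def launch():
    counter["n"] += 1
    return "vm-%d" % counter["n"]
def terminate(vm):
    pass

pool = []
scale(pool, demand=950, capacity_per_vm=100, launch=launch, terminate=terminate)
print(len(pool))   # 10 VMs for the peak
scale(pool, demand=120, capacity_per_vm=100, launch=launch, terminate=terminate)
print(len(pool))   # 2 VMs after giving capacity back
```

The point is that no one logs in to configure anything: the VM image is already defined independent of the infrastructure, so capacity changes are just API calls.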

Gardner: And then, to adopt a path to cloud benefits, you start stepping through virtualization, thinking a bit more holistically about the virtualized environment and the applications as one and the same. That level of experimentation gives you the benefits, and ultimately you'll be building a real fabric and a governed, best-methods approach to cloud computing.

Marshall: The real opportunity here is to separate the application-virtualization approach from the actual virtualization technology to avoid the lock-in, the lack of choice, and the lack of the elasticity that cloud computing promises. If you do it right, and if you think about application virtualization as an approach that frees your application from the infrastructure, there is a ton of benefit in terms of dynamic business capability that is going to be available to your organization.

Gardner: Well, great. I just want to make sure that we covered that entire stepping process into adoption and use. Did we leave anything out?

Marshall: What we didn't talk about was what should be possible at the end of the day.

Gardner: What's that gold ring out there that you want to be chasing after?

Marshall: Nirvana would look like something that we call a "hyper cloud concept," where you are actually sourcing demand by the day or hour, based upon service level experience, performance experience, and security experience with some sort of intelligent system analyzing the state of your applications and the demand for those applications and autonomically acquiring capacity and putting that capacity in place for your applications across multiple different providers.

Again, it's based upon the set of experiences that you've cataloged: What's the security profile that these guys provide? What's the performance profile? And what's the price profile?
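That day-by-day sourcing decision reduces to scoring each provider on those cataloged profiles. A sketch with invented providers, scores, and weights:

```python
def pick_provider(providers, weights):
    """Pick today's best provider by a weighted score of its profiles."""
    def score(p):
        return sum(weights[k] * p[k] for k in weights)
    return max(providers, key=score)

# Hypothetical 0-to-1 profile scores cataloged from experience.
providers = [
    {"name": "A", "price": 0.9, "performance": 0.6, "security": 0.8},
    {"name": "B", "price": 0.5, "performance": 0.9, "security": 0.9},
]
weights = {"price": 0.3, "performance": 0.4, "security": 0.3}
print(pick_provider(providers, weights)["name"])   # B
```

Re-running the same selection as profiles and weights change over time is the "intelligent system" half of the hyper cloud idea; the autonomic provisioning half builds on the elastic scaling discussed earlier.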

Ultimately, you should have a handful of providers out there that you are sourcing your applications against and sourcing them day-by-day, based upon the needs of your organization and the evolving capabilities of these providers. And, that's going to be a while.

In the near term, people will choose one or two cloud providers and develop a rapport and a comfort level with them. If they do this right, over time they will be able to get the best price and the best performance, because they will never be in a situation where they can't bring an application back and put it somewhere else. That's what we call the hyper cloud approach. It's a ways off, and it's going to take some time, but I think it's possible.

Gardner: The nice thing about it is that your business outcomes are your start and your finish point. In many cases today, your business outcomes are, in some ways, hostage to whatever the platform and IT requirements are, and that has become a problem.

Marshall: Right. It can be.

Gardner: Well, terrific. We've been talking about cloud computing and proper on-ramps to approach and use clouds, and also how enterprises can best prepare to bring their applications into a virtual development and deployment environment.

We've been joined by Billy Marshall, a founder and chief strategy officer at rPath. I certainly appreciate your time, Billy.

Marshall: Dana, it's been a pleasure, thanks for the conversation.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.

Transcript of BriefingsDirect podcast on virtualized applications development and deployment strategies as on-ramp to cloud computing. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.