Thursday, October 18, 2012

SOA Provides Needed Support for Enterprise Architecture in Cloud, Mobile, Big Data, Says Open Group Panel

Transcript of a BriefingsDirect panel discussion on how SOA principles are becoming cheaper and easier to implement as enterprises move to the cloud.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the resurgent role of service-oriented architecture (SOA) and how its benefits are being revisited as practical and relevant in the cloud, mobile, and big-data era.

We've gathered an international panel of experts to explore the concept of "architecture is destiny," especially when it comes to hybrid services delivery and management. We'll see how SOA is proving instrumental in allowing the needed advancements over highly distributed services and data, when it comes to scale, heterogeneity support, and governance.

Here to share his insights on the back-to-the-future role and practicality of SOA is Chris Harding, Director of Interoperability at The Open Group. He's based in the UK. Welcome, Chris. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Chris Harding: Hi, Dana. It's great to be on this panel.

Gardner: We're also here with Nikhil Kumar, President of Applied Technology Solutions and Co-Chair of the SOA Reference Architecture Projects within The Open Group, and he is based in Michigan.

Nikhil Kumar: Hello, Dana. I'm looking forward to being on the panel and participating.

Gardner: We're also here with Mats Gejnevall, Enterprise Architect at Capgemini and Co-Chair of The Open Group SOA Work Group, and he's based in Sweden. Thanks for joining us, Mats.

Mats Gejnevall: Thanks, Dana.

Gardner: All right, Chris, tell me a little bit about this resurgence that we've all been noticing in the interest around SOA.

Harding: My observation is from a slightly indirect perspective. My role in The Open Group is to support the work of our members on SOA, cloud computing, and other topics. We formed the SOA Work Group back in 2005, when SOA was a real emerging hot topic, and we set up a number of activities and projects. They're all completed.

I was thinking that the SOA Work Group would wind down, move into maintenance mode, and meet once every few months or so, but we still get a fair attendance at our regular web meetings. In fact, we've started two new projects and we're about to start a third one. So from that, as I said, indirect observation, it's very clear that there is still an interest, and indeed a renewed interest, in SOA from the IT community within The Open Group.

Larger trends

Gardner: Nikhil, do you believe that this has to do with some of the larger trends we're seeing in the field, like cloud Software as a Service (SaaS), and hybrid services? From your perspective, is this what's driving this renewal?

Kumar: What I see driving it is three things. One is the advent of the cloud and mobile, which requires a lot of cross-platform delivery of consistent services. The second is emerging technologies, mobile, big data, and the need to be able to look at data across multiple contexts.


The third thing that’s driving it is legacy modernization. A lot of organizations are now a lot more comfortable with SOA concepts. I see it in a number of our customers. I've just been running a large enterprise architecture initiative in a Fortune 500 customer.

At each stage, and at almost every point in that, they're now comfortable. They feel that SOA can provide the ability to rationalize multiple platforms. They're restructuring organizational structures, delivery organizations, as well as targeting their goals around a service-based platform capability.

So legacy modernization is a back-to-the-future kind of thing that has come back and is getting adoption. The way it's being implemented is with RESTful services, as well as SOAP services, which is different from traditional SOA in its last wave, which was mostly SOAP-driven.
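
To make that REST-versus-SOAP contrast concrete, here is a minimal sketch of invoking the same hypothetical customer-lookup capability in both styles. The operation name, endpoint paths, and element names are illustrative assumptions, not from any real platform:

```python
import xml.etree.ElementTree as ET

def rest_request(customer_id):
    """RESTful style: the resource is addressed by URI; the verb carries the intent."""
    return ("GET", f"/customers/{customer_id}", None)

def soap_request(customer_id):
    """SOAP style: one POST endpoint; the operation is encoded in an XML envelope."""
    ns = "http://schemas.xmlsoap.org/soap/envelope/"
    envelope = ET.Element(f"{{{ns}}}Envelope")
    body = ET.SubElement(envelope, f"{{{ns}}}Body")
    op = ET.SubElement(body, "GetCustomer")  # hypothetical operation name
    ET.SubElement(op, "CustomerId").text = str(customer_id)
    return ("POST", "/services/CustomerService", ET.tostring(envelope, encoding="unicode"))

method, path, _ = rest_request(42)
print(method, path)  # GET /customers/42
_, _, xml_body = soap_request(42)
print("GetCustomer" in xml_body)  # True
```

The REST call needs only a URI convention; the SOAP call needs an agreed envelope and operation vocabulary, which is part of why the lighter style has gained ground.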

Gardner: Mats, do you think that what's happened is that the marketplace and the requirements have changed and that’s made SOA more relevant? Or has SOA changed to better fit the market? Or perhaps some combination?

Gejnevall: I think that the cloud is really a service delivery platform. Companies discover that to be able to use the cloud services, the SaaS things, they need to look at SOA as their internal development way of doing things as well. They understand they need to do the architecture internally, and if they're going to use lots of external cloud services, you might as well use SOA to do that.

Also, if you look at the cloud suppliers, they also need to do their architecture in some way and SOA probably is a good vehicle for them. They can use that paradigm and also deliver what the customer wants in a well-designed SOA environment.

Gardner: Let's drill down on the requirements around the cloud and some of the key components of SOA. We're certainly seeing, as you mentioned, the need for cross support for legacy, cloud types of services, and using a variety of protocols, transports, and integration types. We already heard about REST for lightweight approaches and, of course, there will still be the need for object brokering and some of the more traditional enterprise integration approaches.

This really does sound like the job for an Enterprise Service Bus (ESB). So let's go around the panel and look at this notion of an ESB. Some people, a few years back, didn’t think it was necessary or a requirement for SOA, but it certainly sounds like it's the right type of functionality for the job. Do you agree with that, Chris?

Loosely coupled

Harding: I believe so, but maybe we ought to consider that in the cloud context, you're not just talking about within a single enterprise. You're talking about a much more loosely coupled, distributed environment, and the ESB concept needs to take account of that in the cloud context.

Gardner: Nikhil, any thoughts about how to manage this integration requirement around the modern SOA environment and whether ESBs are more or less relevant as a result?

Kumar: In the context of a cloud, we really see SOA and the concept of service contracts coming to the fore. In that scenario, ESBs play a role as a broker within the enterprise. When we talk about the interaction across cloud-service providers and cloud consumers, what we're seeing is that the service provider has its own concept of an ESB within its own internal context.

If you want your cloud services to be really reusable, the concept of the ESB then becomes more for the routing and the mediation of those services, once they're provided to the consumer. There's a kind of separation of concerns between the concept of a traditional ESB and a cloud ESB, if you want to call it that.

The cloud context involves more of the need to be able to support, enforce, and apply governance concepts and audit concepts, the capabilities to ensure that the interaction meets quality of service guarantees. That's a little different from the concept that drove traditional ESBs.

That’s why you're seeing API management platforms like Layer 7, Mashery, or Apigee and other kind of product lines. They're also coming into the picture, driven by the need to be able to support the way cloud providers are provisioning their services. As Chris put it, you're looking beyond the enterprise. Who owns it? That’s where the role of the ESB is different from the traditional concept.
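
The routing, mediation, and quality-of-service ideas Nikhil describes can be sketched in a few lines. This is a hypothetical toy, not any vendor's ESB: it routes by service name, and records every interaction with a latency check against a per-contract policy, which is the governance and audit angle he raises:

```python
import time

class CloudESB:
    """Toy sketch of an ESB's cloud-facing roles: routing, mediation, and
    governance (auditing each interaction against a QoS contract)."""

    def __init__(self):
        self.routes = {}      # service name -> (endpoint, max latency in seconds)
        self.audit_log = []   # governance: every interaction is recorded

    def register(self, name, endpoint, max_latency_s=1.0):
        self.routes[name] = (endpoint, max_latency_s)

    def invoke(self, name, payload):
        endpoint, max_latency = self.routes[name]   # routing
        start = time.perf_counter()
        result = endpoint(payload)                  # mediation could transform payload here
        elapsed = time.perf_counter() - start
        self.audit_log.append((name, elapsed, elapsed <= max_latency))
        return result

esb = CloudESB()
esb.register("quote", lambda p: {"premium": p["amount"] * 0.02})
print(esb.invoke("quote", {"amount": 1000}))  # {'premium': 20.0}
```

A real API management platform adds authentication, throttling, and metering on top of this, but the separation of concerns is the same: the caller sees a contract, not an implementation.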

Gardner: Do you think there is a security angle to that as well or at least access and privilege types of controls?

Kumar: Absolutely. Most cloud platforms have cost factors associated with locality. If you have truly global enterprises and services, you need to factor in the ability to deal with safe-harbor issues, and you need to factor in variations in law in terms of security governance.

The platforms that are evolving are starting to provide this out of the box. The service consumer or service provider needs to be able to support those. That's going to become the role of the ESB in the future: to be able to consume a service, to assert quality-of-service guarantees, and to manage constraints for data-in-flight and data-at-rest.

Gardner: Mats, it sounds as if the ESB, as Nikhil is describing it, would be more of an intermediary between the internal organization and external services. Does that jibe with what you're seeing in the market, or are there other aspects of the concept of ESB that are now relevant to the cloud?

Entire stack

Gejnevall: One of the reasons SOA didn't really take off in many organizations three, four, or five years ago was the need to buy the entire stack of SOA products that all the consultancies were asking companies to buy: an ESB, governance tools, business process management tools, and a lot of other quite large investments, just to get your foot in the door of doing SOA.

These days, you can buy the entire stack in the cloud and start playing with it. I did some searches on it today, and I found a company where you can play with the entire stack, including business tools and everything like that, for zero dollars. Then you can grow and use more and more of it in your business, but you can start to see if this is something for you.

In the past, the suppliers or the consultants told you that you could do it. You couldn’t really try it out yourself. You needed both the software and the hardware in place. The money to get started is much lower today. That's another reason people might be thinking about it these days.

Gardner: It sounds as if there's a new type of on-ramp to SOA values, and the componentry that supports SOA is still being delivered as a service. On top of that, you're also able to consume it in a pay-as-you-go manner. Do you agree, Chris Harding, that there's a new type of on-ramp to SOA now that might be part of this resurgence?

Harding: That's a very good point, but there are two contradictory trends we are seeing here. One is the kind of trend that Mats is describing, where the technology you need to handle a complex stack is becoming readily available in the cloud.

And the other is the trend that Nikhil mentioned: to go for a simpler style, which a lot of people term REST, for accessing services. It will be interesting to see how those two tendencies play out against each other.

Kumar: I'd like to make a comment on that. The approach for the on-ramp is really one of the key differentiators of the cloud, because you have the agility and the lack of capital investment (CAPEX) required to test things out.

But as we're evolving with cloud platforms, I'm also seeing in a lot of Platform-as-a-Service (PaaS) vendor scenarios that they're including the ESB in the stack itself. They're providing it in their cloud fabric. A couple of large players have already done that.

Gardner: I guess we could rethink that as Integration as a Service. Does that make sense?

Kumar: Yes. For example, Azure provides that in the forward-looking vision. I am sure IBM and Oracle have already started down that path. A lot of the players are going to provide it as a core capability.

Pre-integrated environment

Gejnevall: Another interesting thing is that they could get a whole environment that's pre-integrated. Usually, when you buy these things from a vendor, they don't fit together that well. Now, there's an effort to make them work together.

But some people have put these open-source tools together and put them out on the cloud, which gives them a pretty cheap platform for themselves. Then, they can sell it at a reasonable price, because of the integration of all these things.

Gardner: There seem to be a couple of different approaches in the market. One would be the à-la-carte approach, perhaps most popularized by Amazon Web Services (AWS), where you can just get discrete Infrastructure-as-a-Service (IaaS) componentry or granular approaches.

There's also the move toward a fuller stack of integrated services that would work in total, perhaps even across a lifecycle of software, from development, to deployment, to advancement, into integration and process.

Any thoughts from the panel on these two approaches? Will there be more à la carte or more integration? I guess it depends on the organization how they want to consume this. Chris?

Harding: There are two different approaches for the architect to choose between. You can go for the basic IaaS from Amazon. You can put a stack onto it. Maybe you can get open-source products and put them onto that stack. That will give you the kind of platform on which you're going to deploy your services.

Or you can go for PaaS with a platform ready there and integrate it. If you go for the PaaS already there and integrate it, then you should watch out for how far you're locked into that particular cloud provider, because you're using special services in that platform.

Gejnevall: It's an important issue there, because what happens if you buy the whole stack in the cloud somewhere? It's done with very specific tools that you can't move into your own environment later on, and that cloud supplier goes under, and suddenly you're in pretty bad shape. You need to make sure that the stuff you're using out there is something you can actually bring home and use at home as well.

Gardner: Nikhil, it sounds as if the cloud model might be evolving toward what is all-inclusive; at least a lot of people would like to provide that. But SOA, I think by its nature and its definition, advances a notion of interoperability, and being able to plug and play across existing, current, and then future sets of service possibilities. Are we talking about SOA being an important element of keeping clouds dynamic and flexible?

Kumar: We can think about the OSI 7 Layer Model. We're evolving in terms of complexity, right? So from an interoperability perspective, we may talk SOAP or REST, for example, but the interaction with AWS, Salesforce, SmartCloud, or Azure would involve using APIs that each of these platforms provide for interaction.

Lock-in

So you could have an AMI, which is an image in the Amazon Web Services environment, for example, and that could support a LAMP stack or an open-source stack. How you interact with it, how you monitor it, how you cluster it, all of those aspects now start factoring in specific APIs, and so that's the lock-in.

From an architect’s perspective, I look at it as we need to support proper separation of concerns, and that's part of [The Open Group] SOA Reference Architecture. That's what we tried to do, to be able to support implementation architectures that support that separation of concerns.

There's another factor that we need to understand from the context of the cloud, especially for mid-to-large sized organizations, and that is that the cloud service providers, especially the large ones -- Amazon, Microsoft, IBM -- encapsulate infrastructure.

If you were to go to Amazon, Microsoft, or IBM and use their IaaS networking capabilities, you'd have one of the largest WAN networks in the world, and you wouldn’t have to pay a dime to establish that infrastructure. Not in terms of the cost of the infrastructure, not in terms of the capabilities required, nothing. So that's an advantage that the cloud is bringing, which I think is going to be very compelling.

The other thing is that, from an SOA context, you're now able to look at it and say, "Well, I'm dealing with the cloud, and what all these providers are doing is make it seamless, whether you're dealing with the cloud or on-premise." That's an important concept.

Now, each of these providers and different aspects of their stacks are at significantly different levels of maturity. Many of these providers may find that their own stacks do not even interoperate internally, just because they're using different run times, different implementations, etc. That's another factor to take in.

From an SOA perspective, the cloud has become very compelling, because I'm dealing, let's say, with Salesforce.com, and I want to use that same service within the enterprise, let's say an insurance capability, for Microsoft Dynamics or for SugarCRM. If that capability is exposed as one source of truth in the enterprise, you've now reduced the complexity and have the ability to adopt different cloud platforms.

What we are going to start seeing is that the cloud is going to shift from being just one à-la-carte solution for everybody. It's going to become something similar to what we used to deal with in the enterprise context. You had multiple applications, which you service-enabled to reduce complexity and provide one service-based capability, instead of an application-centered approach.

You're now going to move the context to the cloud, to your multiple cloud solutions, and maybe many implementations in a nontrivial environment for the same business capability, but they are now exposed as services in the enterprise SOA. You could have Salesforce. You could have Amazon. You could have an IBM implementation. And you could pick and choose the source of truth and share it.

So a lot of the core SOA concepts will still apply and are still applying.

Gardner: Mats, it sounds that with this vision of a cloud of clouds and increasingly services being how you manage that diversity, getting competency at SOA now will put you in a much better position to be able to exploit and leverage these cloud services as we go forward. Does that make sense?

Governance issue

Gejnevall: Absolutely, but the governance issue pops up here all the time as well, because if you are going to use lots of services out there, you want to have some kind of control. You might want to have control over your cloud suppliers. You don't want to start up a lot of shadow IT all over your enterprise. You still want to have some kind of control.

An idea that is popping up now is that, instead of giving the business direct access to all these cloud suppliers, you probably have to govern those services and look at governance features. You can measure the usage of all these external SaaS things, and then if you don't like the supplier and you can't negotiate the right price, you just move to another supplier that supplies a similar type of service.

This works fine in an SOA and SaaS context, but it's much harder to do from a PaaS or IaaS. From the SaaS point of view, you really need to get control over those services, because otherwise the business is going to go wild. Then, you buy new stuff all over the place, suppliers suddenly die out, the business stops working, and there's no control over that.
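
The governed-access pattern Mats describes, where IT sits between the business and the SaaS suppliers, meters usage, and can swap a supplier without disrupting callers, can be sketched like this. The supplier functions and names here are hypothetical stand-ins:

```python
class ManagedService:
    """Toy governance facade: meters usage of an external SaaS supplier and
    lets IT swap suppliers behind a stable contract, so callers never notice."""

    def __init__(self, supplier):
        self.supplier = supplier
        self.calls = 0          # usage metering, e.g. for price negotiation

    def switch_supplier(self, new_supplier):
        self.supplier = new_supplier   # callers keep using the same interface

    def __call__(self, request):
        self.calls += 1
        return self.supplier(request)

crm = ManagedService(lambda r: f"supplier-A handled {r}")
crm("lookup")
crm.switch_supplier(lambda r: f"supplier-B handled {r}")
print(crm("lookup"), crm.calls)  # supplier-B handled lookup 2
```

The usage counter is what gives IT leverage in the price negotiation Mats mentions; the stable contract is what makes the supplier swap safe.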

Gardner: Chris Harding, another pillar of SOA traditionally has been the use of registry and repositories to help manage some of that chaos that Mats was referring to. We've also seen a lot of interest in the concept of the app store, popularized by Apple with its iOS interfaces and its application buying and managing. Are we seeing a need for app stores in the enterprise that are, in a sense, the registry and repository of SOA?

Harding: The app store concept is coming in, in several forms and it seems to be meeting a number of different needs.

Yes, you have the app stores that cloud vendors provide to let people pick from their products. You have the government app stores, organized to enable government departments to get a good choice of cloud services. In some ways, they're taking over from the idea of the registry and the repository, or doing some of their functions.

In particular, the idea that you used to have of service discovery, of automatically going out and discovering services, is being replaced by the concept of selecting services from app stores. But, of course, there is a fundamental difference between the app store, which is something that you get your service from, and the registry that you keep, which is the registry of the services that you have got from wherever.
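
Chris's distinction between the app store you acquire services from and the registry of services you have adopted can be sketched as a minimal registry. The service names, endpoints, and "source" labels below are illustrative assumptions:

```python
class ServiceRegistry:
    """Toy sketch: a registry records the services the enterprise has adopted,
    wherever they came from, unlike an app store, which is where you get them."""

    def __init__(self):
        self._entries = {}

    def register(self, name, endpoint, source):
        self._entries[name] = {"endpoint": endpoint, "source": source}

    def lookup(self, name):
        return self._entries[name]["endpoint"]

    def by_source(self, source):
        return [n for n, e in self._entries.items() if e["source"] == source]

reg = ServiceRegistry()
reg.register("billing", "https://internal.example/billing/v2", source="in-house")
reg.register("crm", "https://api.example-saas.com/v1", source="app-store")
print(reg.lookup("crm"))           # https://api.example-saas.com/v1
print(reg.by_source("app-store"))  # ['crm']
```

Consumers look services up in the registry at design or run time; where each entry originally came from, an app store or an in-house team, is just metadata for governance.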

Gardner: It does seem important for the governance.

Gejnevall: I also think that the concept of the app store has taught a lot of business people to use this kind of thinking. I have this huge list of things that I can do within my business. With smartphones, they've learned to go and search and see what kind of stuff they can do with the IP they've got in their business. By providing similar kinds of things to the business people, they can go and search and see these other things they can do within their business. You can download them on your laptop, your phone, or whatnot.

That will change the relationship a bit between the business side and the IT side of things.

Another on-ramp

Gardner: Perhaps yet another on-ramp to the use of SOA types of models and thinking, the app store allowing for discovery, socialization of services, but at the same time, providing governance and control, because the organization can decide what app store you use, what apps get in the store, or what app stores are available.

Kumar: I have a few comments on that, because we're seeing that with a lot of our customers. Typically, the vendors who supply a PaaS solution associate app-store models with their platform as a mechanism to gain market share.

The issue that you run into with that is, it's okay if it's on your cellphone, your iPad, your tablet PC, or whatever, but once you start having managed apps, for example Salesforce, or applications being deployed in an Azure or a SmartCloud context, you have a high-risk scenario. You don't know how well-architected that application is. It's just like going and buying an enterprise application.

When you deploy it in the cloud, you really need to understand that particular cloud PaaS platform to understand the implications in terms of dependencies and cross-dependencies across the apps you have installed. They have real practical implications in terms of maintainability and performance. We've seen that with at least two platforms in the last six months.

Governance becomes extremely important. Because of the low CAPEX implications to the business, the business is very comfortable with going and buying these applications and saying, "We can install X, Y, or Z and it will cost us two months and a few million dollars and we are all set." Or maybe it's a few hundred thousand dollars.

They don't realize the implications in terms of interoperability, performance, and standard architectural quality attributes that can occur. There is a governance aspect from the context of the cloud provisioning of these applications.

There is another aspect to it, which is governance in terms of the run-time, more classic SOA governance: to measure, assert, and view the cost of these applications in terms of performance, your infrastructural resources, and your security constraints. Also, are there scenarios where the application itself has a dependency on a daisy chain of multiple external applications, and can you trace the data?

In terms of the context of app stores, they're almost like SaaS with a particular platform in mind. They provide the buyer with certain commitments from the platform manager or the platform provider, such as security. When you buy an app from Apple, there is at least a reputational expectation of security from the vendor.

What you do not always know is if that security is really being provided. There's a risk there for organizations who are exposing mission-critical data to that.

The second thing is that there is still very much a place for the classic SOA registries and repositories in the cloud, only now for a different purpose. Those registries and repositories are used either by service providers or by consumers to maintain the list of services they're using internally.

Different paradigms

There are two different paradigms. The app store is a place where I can go and know that the gas I'm going to get is 85 percent ethanol, versus also having to maintain some basic set of goods at home to make sure that I have my dinner on time. These are different kinds of roles and different kinds of purposes they're serving.

Above all, I think the thing that's going to become more and more important in the context of the cloud is that the functionality will be provided by the cloud platform or the app you buy, but the governance will be a major IT responsibility, right from the time of picking the app, to the time of delivering it, to the time of monitoring it.

Gardner: It's a very interesting topic. Chris Harding, tell me a little bit about how The Open Group is allowing architects to better exercise SOA principles, as they're grappling with some of these issues around governance, hybrid services delivery and management, and the use and demand in their organizations to start consuming more cloud services?

Harding: The architect's primary concern, of course, has to be to meet the needs of the client, and to do so in a way that is most effective and cost-effective. Cloud gives the architect the ability to go out and get different components much more easily than hitherto.

There is a problem, of course, with integrating them and putting them together. SOA can provide part of the solution to that problem, in that it gives a principle of loosely coupled services. If you didn’t have that when you were trying to integrate different functionality from different places, you would be in a real mess.

What The Open Group contributes is a set of artifacts that enable the architect to think through how to meet the client’s needs in the best way when working with SOA and cloud.

For example, the SOA Reference Architecture helps the architect understand what components might be brought into the solution. We have the SOA TOGAF Practical Guide, which helps the architect understand how to use TOGAF in the SOA context.

We're working further on artifacts in the cloud space: the Cloud Computing Reference Architecture, a notational language for enabling people to describe cloud ecosystems, and recommendations for cloud interoperability and portability. We're also working on recommendations for cloud governance to complement the recommendations for SOA governance, the SOA Governance Framework Standards that we have already produced, and a number of other artifacts.

The Open Group’s real role is to support the architect and help the architect to better meet the needs of the architect client.

Gardner: Very good. And perhaps just quickly Chris, you could fill us in as a recap of some of the SOA activities at your recent Washington D.C. Conference.

New SOA activities

Harding: We're looking at some new SOA activities. In fact, we've started an activity to look at SOA for business technology. From the very early days, SOA was seen as bringing a closer connection between the business and technology. A lot of those promises that were made about SOA seven or eight years ago are only now becoming possible to fulfill, and that business front is what that project is looking at.

We're also producing an update to the SOA Reference Architectures. We have input the SOA Reference Architecture for consideration by the ISO Group that is looking at an International Standard Reference Architecture for SOA and also to the IEEE Group that is looking at an IEEE Standard Reference Architecture.

We hope that both of those groups will want to work along the principles of our SOA Reference Architecture and we intend to produce a new version that incorporates the kind of ideas that they want to bring into the picture.

We're also thinking of setting up an SOA project to look specifically at assistance to architects building SOA into enterprise solutions.

So those are three new initiatives that should result in new Open Group standards and guides to complement, as I have described already, the SOA Reference Architecture, the SOA Governance Framework, the Practical Guides to using TOGAF for SOA.

We also have the Service Integration Maturity Model, which can be used to assess SOA maturity. We have a standard on service orientation applied to cloud infrastructure, and we have a formal SOA Ontology.

Those are the things The Open Group has in place at present to assist the architect, and we are and will be working on three new things: version 2 of the Reference Architecture for SOA, SOA for business technology, and I believe shortly we'll start on assistance to architects in developing SOA solutions.

Gardner: Very good. I'm afraid we'll have to leave it there. We're about out of time. We've been talking about how SOA is proving instrumental in allowing the needed advancements over highly distributed services and data, especially when it comes to the scale, heterogeneity support, and governance requirements of cloud computing.

Please join me now in thanking our panel. Chris Harding, Director of Interoperability for The Open Group. Thanks so much, Chris.

Harding: Thank you very much, Dana.

Gardner: We're also here with Nikhil Kumar, President of Applied Technology Solutions and Co-Chair of the SOA Reference Architecture Project within The Open Group. Thank you so much.

Kumar: Thank you, Dana.

Gardner: And Mats Gejnevall, Enterprise Architect at Capgemini and Co-Chair of The Open Group SOA Work Group. Thanks, Mats.

Gejnevall: Thanks, Dana. It was an interesting discussion.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to you also, our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect panel discussion on how SOA principles are becoming cheaper and easier to implement as enterprises move to the cloud. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Monday, October 08, 2012

Banking Services Provider BancVue Leverages VMware Server Virtualization to Generate Private-Cloud Benefits and Increased Business Agility


Transcript of a BriefingsDirect podcast from the 2012 VMworld Conference on how one company has been able to provide business agility to its customers.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the 2012 VMworld Conference in San Francisco.

We're here the week of August 27 to explore the latest in cloud computing and software-defined datacenter infrastructure developments. I'm Dana Gardner, Principal Analyst at Interarbor Solutions and I'll be your host throughout this series of VMware sponsored BriefingsDirect discussions.

Our next user case study examines how server virtualization success can quickly set the stage for private-cloud benefits. We'll hear the powerful story of how banking services provider BancVue has been able to provide business agility to its community bank customers, enabling them to better compete against the mega banks in such critical areas as customer service and end-user portals.

Here to share their story on creating the services that empower customers to beat the giants in their field by better leveraging agile IT is Sunny Nair, Vice President of IT and Systems Operations at BancVue in Austin, Texas.

Welcome to BriefingsDirect, Sunny.

Sunny Nair: Thank you.

Gardner: I'm looking at this sort of at the big picture right now. Many companies these days need to tackle the dual task of cutting costs, while also increasing agility and providing better services and response times to their constituents.

At a high level, Sunny, you've been doing this for some time. Tell me if you have a philosophy or a vision for how you can accomplish both, that is to say manage your total cost and increase and improve the services delivery?

Nair: The first thing we wanted to do was to abstract the applications and the operating system from the hardware so that a hardware failure wouldn’t bring down our systems. For that, of course, we went to virtualization. We experimented with various virtualization products. Out of those trials, vSphere was the best software for a heterogeneous environment like ours, where we had Windows and different flavors of Linux.

So we stuck with VMware, and that helped us abstract the hardware layer and our software layer, so we can move our operating systems and our virtual servers to different pieces of hardware, when there was a hardware issue on one server, enabling us to be more agile.

Gardner: How about cost? Did that not only help you support your heterogeneity requirements, but were you able to consolidate, unify, and reduce some of those hardware costs along the way?

Nair: Oh yes, because instead of running just one server on one piece of hardware, we were able to run anywhere between 12 and 20 different servers. All servers weren’t utilized at 100 percent all the time. We were able to leverage the CPU to its full capacity and run many more servers. So we had, at a minimum, a 12x increase in our server capacity on each piece of hardware. That definitely did help our costs.
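The consolidation arithmetic Nair describes is easy to check. This hedged sketch works out hosts and hardware cost before and after virtualization at BancVue's stated minimum density of 12 VMs per host; the per-host cost is an invented placeholder, not a figure from the discussion.

```python
import math

def consolidation_savings(vms_needed, vms_per_host, cost_per_host):
    """Hosts required and hardware cost before and after virtualization."""
    hosts_before = vms_needed  # one server per physical box, pre-virtualization
    hosts_after = math.ceil(vms_needed / vms_per_host)
    return {
        "hosts_before": hosts_before,
        "hosts_after": hosts_after,
        "cost_before": hosts_before * cost_per_host,
        "cost_after": hosts_after * cost_per_host,
        "consolidation_ratio": hosts_before / hosts_after,
    }

# At the roughly 275 VMs BancVue runs and 12 VMs per host,
# 275 physical boxes collapse to 23 hosts.
print(consolidation_savings(vms_needed=275, vms_per_host=12, cost_per_host=8000))
```

With the higher 20-VMs-per-host figure Nair also cites, the same 275 VMs fit on just 14 hosts, which is where the "at a minimum, a 12x increase" claim comes from.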

Gardner: That’s pretty impressive. Before we go any further on your technology benefits, perhaps you could tell us a little bit about BancVue, the type of organization you are, and what some of your business goals are?

Marketing expertise

Nair: BancVue is a financial services software and marketing company. We help community financial institutions compete with mega banks by providing them marketing expertise, software expertise, and data consultation expertise, and all those things require technology and software.

Gardner: Do you supply services to them? That is to say, are they using your applications or services as part of their own ecosystem type of approach?

Nair: Absolutely.

Gardner: Tell me how that works.

Nair: For many of our partners, we provide the website that people land on when they search for them on the Internet, and we also provide the gateway to their online banking. So it's extremely important for the website to stay up and online.

In addition to that, we also provide rewards checking calculations, interest rate calculations, which customer is qualified for certain products, and so on. We are definitely a part of the ecosystem for the financial institution.

Gardner: Tell me a little bit about the story of adoption. Once you settled on your strategy for virtualizing your workloads and supporting your heterogeneity issues, how did that unfold? And maybe you could point us in a direction where that’s taking you in terms of private-cloud capability?

Nair: It was a step-by-step approach of wading deeper into the virtualization world. Our first step was just getting that abstraction layer that I was talking about by virtualizing our servers. Then, we looked at it and we said, "Well, from vSphere we can use vMotion and move our virtual servers around. And we can consolidate our storage on a storage area network (SAN)." That helped us disengage further from each piece of hardware.

Then, we can look at vCenter Operations Manager and predict when a server is going to run out of capacity. That was one of the areas where we started experimenting, and that proved very fruitful. That experiment was just earlier this year.

Once we did that, we downloaded some trial software with the help of VMware, which is one of the benefits that we found. We didn’t have to pay up immediately. We could see if it suited our needs first.

We used vCloud Director as a trial, and vShield and vCenter Orchestrator together. Once we put all those pieces together, we were able to get the true benefit of virtualization, which is being in a cloud where not only are you abstracted out, but you can also predict when your hardware is going to run out.

You can move to a different data center if the need arises, and run your server farm the way a power utility runs a power station: building out the computing resources necessary for a user or a customer, then shutting them off when they're no longer necessary, all within the same hardware grid.

Fit for purpose


Gardner: I suppose it also gets to that point of cutting your total costs, when you can manage that as a fit-for-purpose exercise. It's the Goldilocks approach -- not too much, not too little. That’s especially important, when you have an ecosystem play, where you can’t always predict what your customers are going to be doing or demanding.

Nair: Yes, and that’s true internally as well as externally. We could have our development group ask for a bunch of servers all of a sudden to do some QA, and we've scripted out using the JavaScript system within vCloud Director and vCenter Orchestrator, building machines automatically. We could reduce our cost and our effort in putting those servers online, because we've automated them. Then the vCloud Director could tear them down automatically later.

Gardner: You're using a common private-cloud infrastructure managed through the VMware suite that supports your workloads for development, for QA and test, for your internal applications, as well as for all those external facing applications for your customers. Is that correct?

Nair: Right now, we're testing that internally for our development and test platforms, as you just said, and we are about to launch that into a production environment when we are fully versed in how to handle that. It’s a powerful tool and we want to be sure that we can manage it properly in the production world.

Gardner: But that's the goal -- to have a common infrastructure to support all those types of requirements and workloads.

Nair: Absolutely. That is the goal. That’s where we're headed.

Gardner: And that again gives you that agility, but also I think your total cost would be something to better manage when you're able to put it all into the same management capability.

Nair: That’s what our testing has shown. One admin can do the work of at least three admins, once we’ve fully implemented the cloud, because the buildup and takedown are some of the most expensive portions of creating a server. You can automate that fully and not have to worry about the takedown, because you can say, "Three days from now please remove the server from the grid." Then, the admin can go do some other tasks.

Gardner: Tell me what you actually have running there in terms of the type of hardware and how many virtual machines (VMs) you’ve got on a server? Are you using blades, and what are the applications and networking that you use?

Nair: We run Dell hardware, Dell servers, and Dell blades, and that's where we run production. In development, we also use Dell hardware, where we just use the R610s, 710s, and 810s, basically small machines, but with a fairly good amount of power. We can load up to 20 servers per host in development, and as many as 12 in production. We run about 275 VMs today.

Gardner: What sort of apps? Do you cover the gamut of apps? Are they mission-critical, back-office, Web-facing? What’s the breakdown of the type of applications you're supporting in your virtualized environment?

Cutting-edge technologies

Nair: Our production software is delivered as software as a service (SaaS), so a majority of it runs on IIS web servers with a SQL backend. We also use some newer, cutting-edge database technology, MongoDB, which also runs on a virtual system.

In addition, we have our infrastructure, like our customer relationship management (CRM), for which we use SugarCRM, and our ticketing system, which is JIRA, and our collaboration tool called Confluence, as well as our build system, which is TeamCity.

All run on VMs. Our infrastructure is powered on VMs, so it’s pretty important that it stays up. It’s one of the reasons that we think running it on a SAN, with the ability to use VMotion, does help our uptime.

Gardner: Of course, you had an opportunity to go with a number of different providers on virtualization. What was it that attracted you to VMware and the full suite and full packaging of VMware’s software in this case?

Nair: A few different things attracted us to VMware. One of them was the fact that VMware fully supported different operating systems. As I said earlier, we run Red Hat, as well as Debian and Windows. When we ran those on various other virtualization products, free and proprietary, we found different issues in each one.

For example, one of them had a time-drift problem: it kept time well on Windows, but on Linux the clock always seemed to drift a little. Apparently they hadn't mastered that. Some free products did not have the ability to run Windows properly at the time we were testing, although they could run various versions of Linux. But VMware, out of the box, could run all those operating systems.
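The Linux time-drift problem mentioned here is usually quantified as a drift rate. This small sketch, with invented sample readings rather than measurements from BancVue, compares a guest clock against a reference clock over an interval and expresses the drift in seconds per day.

```python
def drift_seconds_per_day(guest_t0, ref_t0, guest_t1, ref_t1):
    """Drift rate: how far the guest clock wanders per reference-clock day.

    All arguments are clock readings in seconds; positive result means
    the guest clock runs fast, negative means it runs slow.
    """
    guest_elapsed = guest_t1 - guest_t0
    ref_elapsed = ref_t1 - ref_t0
    return (guest_elapsed - ref_elapsed) * 86400.0 / ref_elapsed

# Hypothetical reading: the guest gained 0.5 s over one reference hour,
# i.e. it runs 12 seconds per day fast.
print(drift_seconds_per_day(0.0, 0.0, 3600.5, 3600.0))  # 12.0
```

In practice this is the kind of measurement an NTP daemon makes continuously; a hypervisor that keeps the drift rate small and steady lets NTP correct it, while an erratic guest clock of the sort Nair describes is much harder to discipline.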

The second thing was the support level. We didn’t want to be running our production system, put a bug out there in the community, and wait for someone to answer while we were down. We wanted to be able to pick up the phone, ask someone immediately, and get knowledgeable support. So support was a key ingredient in our selection.

We do have that option today when we have an issue. We can call up VMware and get that support. So it was support, compatibility, and the overall ecosystem. We knew that as we grew, we wouldn't have to switch to another vendor to get cloud. We knew that we could go to VMware and get the cloud solution as well as the virtualization solution, because virtualization was just the first step for us toward becoming fully virtualized in a private-cloud environment, with software such as vShield for security and vCenter Operations Manager.

Gardner: Seeing as you’ve made that progression through virtualization, you’ve tested it out on a pilot basis internally, particularly in that heavy-duty use case, like development and test, and now of course moving towards the full private cloud with all those other workloads and applications. Any words of advice to others who are perhaps just beginning that journey? When they get started, what sort of things do you think they should keep in mind?

Nair: The first thing we did was take the trial version and started running it in a non-critical environment, where we just had a few servers that we were building out as our developers needed it, and it was actually for a data-testing scenario.

We got good at it ourselves. We learned the JavaScript scripting that was required to bring up those systems. We didn't have that knowledge ahead of time in the systems engineering group. We had developers who had that knowledge, of course, but enabling our systems engineers to script bringing up a server proved very useful as we played around with it.

Virtualization lab

We actually had a little virtualization lab, where we practiced these things, because as the old adage says, practice does make perfect. The next thing was that we rolled it out in incremental steps to one product, and then eventually to a larger development group.

Gardner: Looking to the future, is there anything about mobile support or increasing the types of services that you're going to provide to your community banks, more along the lines of extended services that you provide and they brand? Do you think that this cloud environment is going to enable you to pursue that?

Nair: Yes, we’ve already started down that path. We have mobile support for the websites that we’ve created, and we’ve just implemented that earlier this year. Eventually, we plan to go into the online banking space and provide online banking for mobile devices. All that will be done in our cloud infrastructure. So yes, it’s here to stay.

Gardner: Because we're here at VMworld, I assume you're taking some good, hard looks at some of the newer VMware products. Is there any other VMware product that you're anticipating using or at least particularly interested in?

Nair: We want to look further at the automation that the cloud products would give us, especially with security in vShield. It’s pretty interesting how we can have a virtual firewall with our VMs and look at the other mobile software that's available.

Gardner: I'm afraid we'll have to leave it there. We’ve been talking about how banking services provider, BancVue, has been able to provide business agility to its community bank customers. And we’ve also seen how a private-cloud model is rapidly furthering their achievements in server virtualization, while allowing them to better manage their workloads and even cut costs.

I’d like to thank our guest. We’ve been here with Sunny Nair. He is the Vice President of IT and Systems Operations at BancVue, in Austin, Texas. Thanks so much, Sunny.

Nair: Thank you.

Gardner: And thanks to our audience for joining this special podcast coming to you from the 2012 VMworld Conference in San Francisco. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of podcast discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast from the 2012 VMworld Conference on how one company has been able to provide business agility to its customers. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

Friday, October 05, 2012

Internet of Mobile and Cloud Era Demands New Kind of Diverse and Dynamic Performance Response, Says Akamai GM

Transcript of a BriefingsDirect podcast on the inadequacy of the old one-size-fits-all approach to delivering web content on different devices and different networks.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Akamai Technologies.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the new realities of delivering applications and content in the cloud and mobile era. We'll examine how the many variables of modern Internet usage demand a more situational capability among and between enterprises, clouds, and the many popular end devices.

That is, major trends have conspired to make a one-size-fits-all approach inadequate for today's complex network-optimization and application-performance demands. Rather, more web experiences now need a real-time, dynamic response tailored and refined to the actual use and the specifics of that user's task.

We're here with an executive from Akamai Technologies to spotlight the trends leading to this new dynamic cloud-to-mobile network reality, and to evaluate ways to make all web experiences remain valued, appropriate, and performant.

With that, please join me now in welcoming our guest, Mike Afergan, Senior Vice President and General Manager of the Web Experience Business Unit at Akamai Technologies in Cambridge, Massachusetts. Welcome back, Mike. [Disclosure: Akamai Technologies is a sponsor of BriefingsDirect podcasts.]

Michael Afergan: Hi, thanks, Dana.

Gardner: Mike, there seem to be trends spurring a different web, and a need for a different type of response, given the way people are using the web now. Let's start at the top. What are the trends, and what do you mean by a "situational response" to ameliorating this new level of complexity?

Afergan: There are a number of trends, and I'll highlight a few. There’s clearly been a significant change, and you and I see it in our daily lives in how we, as consumers and employees, interact with this thing that we call the web.

Only a few years ago, most of us interacted with the web by sitting in front of the PC, typing on a keyboard and with a mouse. Today, a large chunk, if not a majority, of our interaction with the web is through different handheld devices or tablets, wi-fi, and through cellular connections. More and more it's through different modes of interaction.

For example, Siri is a leader in having us speak to the web and ask questions of the web verbally, as opposed to using a keyboard or some sort of touch-screen device. So there are some pretty significant trends in terms of how we interact as consumers or employees, particularly with devices and cellular connectivity.

Behind the scenes there are a lot of other pretty significant changes. The way that websites are developed has changed significantly. They're using technologies such as JavaScript and CSS much more heavily than ever before.

Third-party content

We're also seeing websites pull in a variety of content from third parties. Even though you're going to a website, and it looks like it’s a website of a given retailer, more often than not a large chunk of what you are seeing on that page is actually coming from their business partners or other people that they are working with, which gets integrated and displayed to you.

We're seeing cellular end-devices as a big trend on the experience side. We're seeing a number of things happen behind the scenes. What that means is that the web, as we thought about it even a few years ago, is a fundamentally different place today. Each of these interactions with the web is a different experience and these interactions are very different.

A user in Tokyo on a tablet, over a cellular connection, interacting with the website is a very different experience situation than me at my desk in Cambridge, in front of my PC right now with fixed connectivity. This is very different than me or you this evening driving home, with an iPhone or a handheld device, and maybe talking to it via Siri.

Each of these is a very different experience, and each is what I call a different situation. If we want to think about technology around performance, and technology involving the Internet, we have to think about these different situations and about which technologies are going to be the most appropriate and most beneficial for each.

Gardner: So we have more complexity on the delivery side, perhaps an ecosystem of different services coming together, and we also have more devices, and then of course different networks. And as people think about the cloud, I think the missing word in the cloud is the networks. There are many networks involved here.

Maybe you could help us understand with these trends that delivery is a function of many different services, but also many different networks. How does that come together?

Afergan: There are some trends in which the more things change, the more they stay the same. The way the Internet works fundamentally hasn’t changed. The Internet is still, to use the terminology from over a decade ago, a network of networks. The way that data travels across the Internet behind the scenes is by moving through different networks. Each of those has different operating principles in terms of how they run, and there are always challenges moving from one network to another.

This is why, from the beginning, Akamai has always had a strategy of deploying our services and our servers as close to the users as possible. This is so that, when you and I make a request to a website, it doesn't have to traverse multiple networks, but rather is served from an Akamai location as close as possible to you.

And even when you have to go all the way across the Internet, for example, to buy something and submit a credit card, we're finding an intelligent path across the network. That's always been true at the physical network layer, but as you point out, this notion of networks is being expanded for content providers, websites, and retailers. Think about the set of companies that they work with and the other third parties that they work with almost as a network, as an ecosystem, that really comes together to develop and ultimately create the content that you and I see.

This notion of having these third party application programming interfaces (APIs) in the cloud is a very powerful trend for enterprises that are building websites, but it also obviously creates a number of challenges, both technical and operational, in making sure that you have a reliable, scalable, high-performing web experience for your users.

Big data

Gardner: I suppose another big trend nowadays -- we've mentioned mobile and cloud -- is this notion of analytics, big data, trying to be more intelligent, a word you used a moment ago. Is there something about the way that the web has evolved that's going to allow for more gathering of information about what's actually taking place on the networks and these end-devices, and then therefore be able to better serve up or produce value as time goes on?

Is the intelligence something that we can measure? Is there a data aspect to this that comes into that situational benefit path?

Afergan: One of the big challenges in this world of different web experience and situations is a greater demand for that type of information. Before, typically, a user was on a PC, using one of a few different types of browsers.

Now, with all these different situations, the need for that intelligence, the need to understand the situation that your user is in -- and potentially the changing situation that your user is in as they move from one location to another or one device to another -- is even more important than it was a few years ago.

So understanding the situations is going to be an important trend. Being able to adapt to them dynamically and efficiently is going to be important for the industry in the next few years.

Gardner: What does this mean for enterprises? If I'm a company and I recognize that my employees are going to want more variety and more choice on their devices, I have to deliver apps out to those devices. I also have to recognize that they don't stop working at 5 pm. Therefore, our opportunity for delivering applications and data isn't time-based. It's more of a situational-based demand as well.

I don’t think enterprises want to start building out these network capabilities as well as data and intelligence gathering. So what does it mean for enterprises, as they move toward this different era of the web, and how should they think about responding?

Afergan: You nailed it with that question. One of the big trends in the enterprise industry right now is bring your own device (BYOD). You and I, and lots of people listening to this, probably see it on a daily basis as we work.

In front of me right now are two different devices that I own and brought into the office today. Lots of my colleagues do the same. We see that as a big trend across our customer base.

More and more employees are bringing their increasingly powerful devices into the office. More and more employees want to be able to access their content in the office via those devices and at home or on the go, on a business trip, over those exact same devices, the way we've become accustomed to for our personal information and our personal experiences online.

Key trends

So the exact same trends that are relevant for consumer-facing websites -- multiple devices, cellular connectivity -- are key trends being driven from the outside in, from the employees into the enterprise, right now. It's a challenge for enterprises to keep up. It's a challenge for enterprises to adapt to those technologies, just as it is for consumer websites.

But for the enterprise, you need to make sure that you are mindful of security, authentication, and a variety of other principles, which are obviously important once you are dealing with enterprise data.

There’s tremendous opportunity. It is a great trend for enterprises, in terms of empowering their employees, empowering their partners, decreasing the total cost of ownership for the devices, and for their users to have access to the information. But it obviously presents some very significant trends and challenges. Number one, obviously, is keeping up with those trends, but number two, doing it in a way that’s both authenticated and secure at the same time.

Gardner: Based on a lot of the analyst reports that we're seeing, the adoption of cloud services and software-as-a-service (SaaS) services by enterprises is expected to grow quite rapidly in the coming years. If I'm an enterprise, whether I'm serving up data and applications to my employees, my business partners, and/or end consumers, it doesn’t seem to make sense to get cloud services, bring them into the enterprise, and then send them back out through a network to those people. It sounds like this is moving from a data center that I control type of a service into something that’s in the cloud itself as well.

So are we reading that correctly -- that even your bread and butter, Global 2000 enterprise has to start thinking about network services in this context of a situational web?

Afergan: Exactly. The good news is that most thoughtful enterprises are already doing that. It doesn’t make it easier overnight, but they're already having those conversations. You're exactly right. Once you recognize the fact that your employees, your partners are going to want to interact with these applications on their devices, wherever they may be, you pretty quickly realize that you can’t build out a dedicated network, a dedicated infrastructure, that’s going to service them in all the locations that they are going to need to be.

All of a sudden, you're now talking about putting those applications into the cloud, so that those users can access them on any device, anywhere, anytime. At that point in time, you're now building to a cloud architecture, which obviously brings a lot of promise and a lot of opportunity, but then some challenges associated with it.

Gardner: I'll just add one more point on the enterprise, because I track enterprise IT issues more specifically than the general web. IT service management, service level agreements (SLAs), governance policy and management via rules that can be repeatable are all very important to IT as well.

Is there something about a situational network optimization and web delivery that comes to play when it relates to governance policy and management vis-à-vis rules; I guess what you'd call service-delivery architecture?

Situational needs

Afergan: That’s a great question, and I've had that conversation with several enterprises. To some degree, every enterprise is different and every application is somewhat different, which even makes the situational point you are making all the more true.

For some enterprises, the requirements they have around those applications are ubiquitous and those need to be held true independent of the situation. In other cases, you have certain requirements around certain applications that may be different if the employee is on premises, within your VPN, in your country, or out of the country. All of a sudden, those situations became all the more complicated.

As each of these enterprises that we have been working with think through the challenges that you just listed, it's very much a situational conversation. How do you build one architecture that allows you to adapt to those different situations?

Gardner: I think we have described the problem fairly well. It's understood. What do we start thinking about when it comes to solving this problem? How can we get a handle on these different types of traffic with complexity and variability on the delivery end, on the network end, and then on the receiving end, and somehow make it rational and something that could be a benefit to our business?

Afergan: It's obviously the challenge that we at Akamai spend a lot of time thinking about and working with our customers on. Obviously, there's no one, simple answer to all of that, but I'll offer a couple of different pieces.

We believe it requires starting with a good overall, fundamentally sound architecture. That's an architecture that is globally distributed and gives you a platform where you don't have to -- to answer some of your earlier questions -- worry about some of the different networks along the way, and worry about some of the core, fundamental Internet challenges that really haven't changed since the mid-'90s in terms of reliability and performance of the core Internet.

But then it should allow you to build on top of that for some of the cloud-based and situational-based challenges that you have today. That requires a variety of technologies that will, number one, address, and number two, adapt to situations that you're talking about.

Let's go through a couple of the examples that we've already spoken about. If you're an enterprise worrying about your user on a cellular connection in Hong Kong, versus you're the same enterprise worrying about the same application for a user on a desktop fixed-connection based in New York City, the performance challenges and the performance optimizations that you want to make are going to be fundamentally different.

There is a core set of things that you need to have in place in all those cases. You need to have an intelligent platform that's going to understand the situation and make an appropriate decision based on that situation. This will include a variety of technical variables, as well as just a general understanding of what the end user is trying to do.

Gardner: It seems like it wasn't that long ago, Mike, that people said, "I just want to make things 50 percent faster. I want to make my website speedier." But that's almost an obsolete question. It's more, "How do I make a specific circumstance perform in a specific way for a specific user and that might change in five minutes?"

So how do we rethink moving from fatter pipes and faster websites to these new requirements? Is this a cultural shift? Is it moving from a two-dimensional to a three-dimensional picture? How do we create a metaphor or analogy to better understand the difference and the type of problem we need to solve?

Complicated problem

Afergan: Again, it's a complicated problem. Start again with the good news that the reason we're having this problem is that there are these powerful situations and powerful opportunities for enterprises, but the smart enterprises we're working with are asking a couple of different questions.

First, there is a myriad of situations, but typically you can think about some of them that are the most important to you to start off with.

The second thing that enterprises are doing thoughtfully is rethinking how you even do performance measurement. You just gave a great example. Before, you could talk about how do I make this experience 50 percent faster, and that was a fine conversation.

Now, smart enterprises are saying, "Tell me about the performance of my users in Hong Kong over cellular connections. Tell me about the performance of my users in New York City over fixed connections." Then it's understanding the different dimensions and different variables that are important for you and then measuring performance based on those variables.

I work with several thoughtful enterprises that are going through that transformation of moving from a one-size-fits-all performance measurement metric to being a lot more thoughtful about what metrics they care about. Exactly as we've talked about, and exactly as you mentioned, that one-size-fits-all metric is becoming less relevant by the day.
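The per-segment measurement Afergan describes — asking about Hong Kong cellular users separately from New York fixed-line users, rather than one global number — can be sketched in a few lines. This is an illustrative sketch only, not Akamai's implementation; the sample data and field names (`region`, `connection`, `load_ms`) are hypothetical.

```python
from statistics import median
from collections import defaultdict

# Hypothetical real-user-monitoring (RUM) samples: each records where the
# user was, how they connected, and the page load time in milliseconds.
samples = [
    {"region": "Hong Kong", "connection": "cellular", "load_ms": 3200},
    {"region": "Hong Kong", "connection": "cellular", "load_ms": 2900},
    {"region": "New York",  "connection": "fixed",    "load_ms": 850},
    {"region": "New York",  "connection": "fixed",    "load_ms": 910},
]

def segmented_medians(samples):
    """Median load time per (region, connection) segment,
    instead of one one-size-fits-all number across all users."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[(s["region"], s["connection"])].append(s["load_ms"])
    return {seg: median(times) for seg, times in buckets.items()}

print(segmented_medians(samples))
# → {('Hong Kong', 'cellular'): 3050, ('New York', 'fixed'): 880}
```

The same grouping idea extends to whatever dimensions matter for a given business — device class, browser, time of day — which is exactly the shift away from a single global metric.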

Gardner: And as we have more moving parts, we perhaps could think about it as a need for a Swiss Army Knife of some sort, where multiple tools can be brought out quickly and applied to what's needed. But that needs to be something that's coordinated, not just by the enterprise, the Internet service provider (ISP), the networks, or the cloud providers -- but all of them. Getting them to line up, or having one throat to choke, if you will, has always been a challenge.

Is there something now, or is there something about Akamai in particular, that gets you neutrality? We mentioned the Swiss Army Knife. Is there some ability for you to get in and be in a positive value-development relationship with all of these players -- which is perhaps what we're starting to get at when we think about the situational benefit?

Afergan: It's obviously something we spend a lot of time thinking about here. In general, not just speaking about Akamai for the moment, to be successful here, you need to have a few things.

You need to have an underlying architecture that allows you to operate across a variety of the parties you mentioned.

For example, we talked about a variety of networks, a variety of ISPs. You need to have one architecture that allows you to operate across all of them. You can't go and build different architecture and different solution ISP by ISP, network by network, or country by country. There's no way you're going to build a scalable solution there. So first and foremost, you need that overall ubiquitous architecture.

Significant intelligence

The second thing you need is significant intelligence to be able to make those decisions on the fly, determine what the situation is, and decide what would be the most beneficial solution and technology to apply to that situation.

The third thing you need is the right set of APIs and tools that ultimately allows the enterprise, the customer, to control what's happening, because across these situations sometimes there is no absolute right answer. In some cases, you might want to suddenly degrade the fidelity of the experience to have it be a faster experience for the user.

Across all of these, having the underlying overall architecture that gives you the ubiquity, having the intelligence that allows you to make decisions in real-time, and having the right APIs and tools are things that ultimately we at Akamai spend a lot of time worrying about.

We sit in a unique position to offer this to our customers, working closely with them and their partners. And all of these things, which have been important to us for over a decade now, are even more important as we sail into this more complicated situationally driven world.

Gardner: We're almost out of time, but I wonder about on-ramps or adoption paths for organizations like enterprises to move toward this greater ability to manage the complexity that we're now facing. Perhaps it’s the drive to mobility, perhaps it’s the consumption of more cloud services, perhaps it’s security, governance, risk, and compliance types of issues, or all of the above. Any sense of the best path for people to get started, and any recommendations?

Afergan: Ultimately, each company has a set of challenges and opportunities that they're working through at any point in time. For us, it begins with getting on the right platform and thinking about the key challenges that are driving your business.

Mobility clearly is a key trend that is driving a lot of our customers to understand and appreciate the challenges of situational performance and then try to adapt to it in the right way. How do I understand what the right devices are? How do I make sure that when a user moves to a lower-performing network, I still give them a high-quality experience?

For some of our customers, it’s about just general performance across a variety of different devices and how to take advantage of the fact that I have a much more sophisticated experience now, where I am not just sending HTML, but am sending JavaScript and things I could execute on the browser.

For some of our customers it's, "Wait a minute. Now, I have all these different experiences. Each one of these is a great opportunity for my business. Each one of these is a great opportunity for me to drive revenue. But each one of these is now a security vulnerability for my business, and I have to make sure that I secure it."

Each enterprise is addressing these in a slightly different way, but I think the key point is understanding that the web really has moved from basic websites to these much more sophisticated web experiences.

Varied experiences

The web experiences are varied across different situations, and overall web performance is a key on-ramp. Mobility is another key on-ramp, and security would be a third initial starting point. Some of our customers are trying to take a very complicated problem and look at it through a much more manageable lens, so they can start moving in the right direction.

Gardner: I am afraid we will have to leave it there. We've been discussing how most cloud experiences now need a more real-time and dynamic response, perhaps tailored and refined to the actual use and specifics of a user’s task at hand.

And we've heard about how a more situational capability that takes into account many variables at an enterprise, cloud, and network level, and then of course across these end devices that are now much more diverse and distributed, all come together for a new kind of value.

I'd like to thank our guest. We've been here with Mike Afergan, the Senior Vice President and General Manager of the Web Experience Business Unit at Akamai Technologies.

Thank you so much, Mike.

Afergan: Thanks, Dana. I really appreciated the time.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. A big thank you also to our audience for listening, and don’t forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Akamai Technologies.


Transcript of a BriefingsDirect podcast on the inadequacy of the old one-size-fits-all approach to delivering web content on different devices and different networks. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.
