
Friday, October 03, 2008

BriefingsDirect Insights Analysts Examine HP-Oracle Exadata Release, Extreme BI, Virtualization and Cloud Computing Trends

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 30, on Exadata, extreme BI and cloud computing, recorded Sept. 26, 2008 from Oracle OpenWorld in San Francisco.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsors: Active Endpoints, Hewlett-Packard.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 30. This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our sponsors, charter sponsor Active Endpoints, maker of the ActiveVOS visual orchestration system, and also Hewlett-Packard via the HP Live! Podcast Series.

I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. Our panel this week consists of Joe McKendrick, an independent analyst and prolific blogger on SOA and BI topics. Hi, Joe!

Joe McKendrick: Hi, Dana, glad to be here.

Gardner: We are also joined by Brad Shimmin, a principal analyst at Current Analysis. Hey, Brad!

Brad Shimmin: Hi, Dana, thanks for having me.

Gardner: Jim Kobielus joins us. He's a blogger and senior analyst at Forrester Research. Hello, Jim!

Jim Kobielus: Hey, Dana, and hi, everybody!

Gardner: And Dave Linthicum, blogger, independent consultant, joins us this week. Thanks for coming, Dave.

Dave Linthicum: Thanks, Dana, thanks for having me back.

Gardner: We are going to be talking about the news of the week of Sept. 22, 2008. We'll be looking at the HP-Oracle announcements and other news made here at Oracle OpenWorld. We'll be talking about cloud computing and the notion of "on-premises" or "private clouds," and how data portability might actually work among and between different clouds -- both "public," if you will, and "private." We'll look at recent virtualization news from VMware, HP, Red Hat and Citrix.

An Exadata 'Shocker' ...

Let's start our show this week with Jim Kobielus. Jim, you and I are both here at the Oracle OpenWorld. We had an unusual announcement around optimization between hardware and software from Oracle, which has traditionally been a software-only company.

Oracle and HP introduced two Exadata products. I wonder if you could fill in our audience on what Oracle did this week.

Kobielus: Yes, this week Oracle announced the release, in partnership with HP, of a very high-end data warehousing appliance. They may not use the word "appliance," but that's in fact what it is. It's a configured and optimized bundle of hardware and software, with database storage, so it meets very high-end data warehousing requirements. It's called the HP Oracle Database Machine.

It encompasses the HP Oracle Exadata Storage Server, which is a grid-level storage server. One of the key differentiators of that storage approach is that it puts query processing in the storage subsystem. As a result, it can greatly speed up the processing of very complex analytics. What Oracle and HP have essentially done is take a page from the Netezza book, because that is, of course, the hallmark of the Netezza appliance, which accelerates queries by putting processing back close to storage. But the HP Oracle release does much more than simply take a page out of the Netezza book.
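Jim's point about pushing query processing into the storage tier is worth a quick illustration. The sketch below is purely conceptual Python -- it is not Oracle's implementation, and the data and row counts are invented -- but it shows why filtering at the storage cell, rather than shipping every block to the database server first, slashes data movement for selective queries.

```python
# Conceptual sketch (not Oracle's actual implementation) of why pushing
# query processing into the storage tier cuts data movement.

def storage_scan_naive(blocks, predicate):
    """Traditional path: the storage tier ships every block to the
    database server, which then filters rows itself."""
    shipped = [row for block in blocks for row in block]  # all rows cross the wire
    return [row for row in shipped if predicate(row)], len(shipped)

def storage_scan_pushdown(blocks, predicate):
    """Exadata-style path: each storage cell applies the predicate
    locally and ships only matching rows to the database server."""
    shipped = [row for block in blocks for row in block if predicate(row)]
    return shipped, len(shipped)

# Ten blocks of 1,000 rows; the query is selective (1% of rows match).
blocks = [[{"id": b * 1000 + i, "amount": i} for i in range(1000)]
          for b in range(10)]
pred = lambda row: row["amount"] >= 990

naive_result, naive_shipped = storage_scan_naive(blocks, pred)
push_result, push_shipped = storage_scan_pushdown(blocks, pred)
assert naive_result == push_result     # same answer either way
print(naive_shipped, push_shipped)     # 10000 vs 100 rows moved
```

The answer is identical in both cases; only the volume of data crossing the interconnect differs, which is the whole argument for an optimized hardware-software bundle.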

What they did essentially is also shoot across Teradata's bow, because this is Oracle's petabyte-scale data warehousing platform. The HP Oracle Database Machine that they demonstrated at the show definitely screams. And it can scale -- Oracle says that it can scale with almost no limits, though that remains to be seen.

But it can definitely go to a much higher scale in terms of capacity than the current Oracle Optimized Warehouses that they have already begun to ship with HP and a variety of other hardware partners. The Oracle Optimized Warehouses max out at several hundred terabytes. And, as I said, the HP Oracle Database Machine can go well beyond that.

This is a shot both across Teradata's bow and across Netezza's. And Oracle Chairman and CEO Larry Ellison, from the stage, directly homed in on both of those competitors by name.

It was classic Ellison, a very well put-together presentation. But quite frankly, when you begin to analyze the various claims he made, they don't all hold up. Or rather, he is presenting a lot of Oracle-specific differentiation. Still, the claims were very impressive.

Gardner: This is a significant departure for Oracle and HP on several different levels. On one hand, we have a combined hardware-software product from two different vendors. We also have a new parallelization process, with their architectural design, where the database and the storage are very close. The processing can take advantage of massively parallel processing. We also have the fat pipes in the form of InfiniBand connections.

So we have an architectural departure. We have a hardware-software departure, and we also have this interesting alliance between HP and Oracle, making and selling a product together. How does this all strike you, Brad Shimmin?

Shimmin: Well, I was shocked -- absolutely, simply shocked. That's because historically Oracle has stayed so far away from the appliance market. It's been surprising to me on a number of occasions when I have spoken with them about acquisitions. They made one acquisition earlier this year of a company that had a very successful appliance, and they chose to simply kill it outright because they "did not want to play in the space."

With that said, I am glad to see this happening. I really am, and I think that HP is a good partner with them because I don't feel that the two companies really bumped into one another in terms of Oracle's core constituency. So I think it's a good play all around, and I am glad to see Oracle finally getting into this. Now if only they would release parts of their middleware as an appliance, I would be very happy.

Kobielus: And, in fact, Brad, Larry Ellison indicated that they seem to have some plans for that. They resisted giving details -- but they seem to have some plans to "appliance-ize," if that's the word, more and more of the Oracle Fusion Middleware stack.

Shimmin: There are other quite prominent middleware stack players that are moving to appliances, as well. I can't mention their names but the use of appliances seems to be of great interest to more vendors.

Gardner: I have also been picking up on this interest in the appliance business. IBM has been into this with DataPower SOA Appliances for a while now, but IBM has not really extended use of appliances out as widely as I was expecting. I have also heard that TIBCO may be building an appliance for complex event processing (CEP). So, yes, I think we are going to see more of this.

Brad, I want to go back for one second to the HP-Oracle relationship. It almost seems now that Oracle has anointed HP at some level as a preferred hardware supplier on storage, if not also other aspects of hardware. What does that mean for EMC and some of the other storage hardware providers? They are no longer on an independent or third-party-friendly level with Oracle, right?

Shimmin: Absolutely. I think that all of those relationships will come under strain from this. There is no question about that. And it seems to me that this makes Oracle look a lot more like IBM, Sun Microsystems, and EMC, in terms of having some sort of competency in hardware. So I think there are going to be a lot of far-ranging ripples from this relationship that will change the way the market functions.

Gardner: And how is all of this going to come to market? If you want to buy the Exadata warehouse, you actually have to go through the Oracle sales force. Oracle is going to support it, sell it, and price it, and then HP is going to service the hardware. So, in essence, HP is the supplier to Oracle, and Oracle is the principal vendor. Does that mean anything to anybody out there?

Kobielus: It means that Oracle is taking much more of a marketing lead on the HP Oracle Database Machine than it has with any of the Oracle Optimized Warehouses. So Oracle is very much staking its data warehousing go-to-market strategy on this new product, and on this partnership with HP. That said, HP is providing all of the technical support on the new products. So it's not as if Oracle is really becoming a hardware vendor; rather, it remains very much a software vendor, but one that has staked its future on delivering its software on this one particular hardware partner's platform.

Gardner: Actually it allows Oracle to operate at a solutions level and so take quite a bit more of the margin across that total data warehouse solution, right? And that undercuts the data array providers significantly.

Okay, so let's talk about what we do with this thing. We heard that data sets of 1 terabyte and larger start to hit performance issues. That then prevents companies from adding more queries to their warehouses, and also reduces the amount of additional data that they want to put into their warehouses.

So we could have hit a wall somewhere around 1 terabyte databases. This approach, this architecture in the Exadata hardware-software optimization claims to blow that away, that it can deal with the largest sets, of 10 terabytes and up, with very high performance. What does this mean for business intelligence (BI) analytics? What does this mean for bringing more types of data and content into the warehouse? What are the business outcome benefits?

Joe McKendrick, what are your thoughts on the BI perspective on this market development?

McKendrick: Well, it certainly moves the business intelligence arena forward. Looking at the rationale for having an appliance in this market -- versus what has been happening for the previous decade with data warehousing -- it really says a lot about what's needed in the market.

Data warehouses, when you get into the multiple-terabyte range, are simply too complex and have a high cost of ownership. That's made BI a fairly expensive proposition for companies going this route, and the cost is tied into the maintenance, the updates for the warehouse software, the organizational effort, and the input required to make a large data warehouse go.

Now there is a trend emerging, and I am sure Oracle has an eye on this as well. It's toward open source. We are seeing more open source in data warehouses too -- open source at the warehouse level, at the database level itself. [Sun Microsystems'] MySQL, for example, has been pointing in this direction, PostgreSQL as well. [And there's Ingres.]

Gardner: Well, that's another distinct issue. Now with Oracle and HP cooperating, why shouldn't we expect Sun to come out with something quite similar, but with MySQL as the database, and their [Sparc/UltraSparc] processing, and their rack, cooling and InfiniBand, and of course, their storage?

McKendrick: I wouldn't be surprised, I wouldn't be surprised one bit if we see some kind of response from Sun fairly soon because Sun still makes its money from hardware.

Gardner: Right, now if Sun does that then IBM will certainly come out with something around DB2. We should expect that, right?

McKendrick: Yes, yes, definitely. And I think there is an emphasis on simplifying data warehousing, making data warehousing simple for the masses. Microsoft, love them or hate them, has been doing a lot of work in this area by increasing the simplicity of its data warehouse and making it available at more of a commodity level for the small to medium size business space.

I think we're going to see more in the open source data warehousing space, and Oracle is looking at that as well.

Gardner: Let's go to Jim Kobielus. Jim, [using Exadata] we can start taking 10-terabyte data sets, delivering analytics in near real-time, and delivering query results out to various business applications on a huge scale. We can also start looking at this as cloud infrastructure -- where we are going to be providing data as a service, even BI as a service. And then we have Sun, IBM, and perhaps Hitachi, and all these other guys that are jumping in with their own data warehouse appliances, and they start beating each other up on price, and the price comes down in the market. Are we then entering an era of affordable extreme BI?

Kobielus: For sure. Well, extreme BI -- that's really BI and data warehousing with very large data sets, with very demanding real-time loading scenarios, with very extensive concurrent usage, and so forth. We are already in that era. If you look at actual enterprise deployments, warehouses of around 10 terabytes make up much of the data warehousing market. Most enterprise and departmental data warehouses are between 5 and 15 terabytes, and they are being handled quite well through a lot of symmetric multiprocessing (SMP). So these are already around in the market.

Now, some data warehousing and BI environments are in the hundreds of terabytes, and up into the petabyte range and beyond. A lot of these are in the cloud already. I can't name names yet; there are a few things right now that I can't share. But well-known Web 2.0 service providers are already above the petabyte scale in terms of the amount of data that's persisted, and in terms of their ability to do continuous concurrent loads into those humongous data warehouses in real time. In these extreme data warehousing environments you may have millions upon millions of queries hitting that data warehouse all the time.

Gardner: But aren't we with Exadata taking this from the high-end, roll-your-own, computer-science gee-whiz level down to much more of an off-the-shelf, forklift-upgrade level? Aren't we now getting extreme BI at much more of a commodity level, or at least something that's much more germane across many more types of organization?

Cloud computing gains traction ...

Kobielus: Oh, yeah, for sure. It's becoming affordable, eventually bringing cloud data warehouses down into the range of the mainstream market, as well as of large enterprises. ... One of the other important outcomes this week at Oracle OpenWorld, from my point of view, was the fact that Oracle, now in conjunction with Amazon's Elastic Compute Cloud, has a cloud offering -- the existing Amazon cloud can take Oracle database licenses.

Customers can move those licenses to a cloud hosted by Amazon EC2. Using tools that Oracle is providing, they can move their data to back it up, or move databases entirely to be persisted in the cloud, in Amazon's S3 service. So this is very much a lead-in. I strongly expect that the other enterprise database vendors, over time -- maybe in a year or two -- will also offer similar deployment flexibility for their data warehousing customers.
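The backup-to-the-cloud flow described here can be sketched in miniature. The Python below is a hypothetical illustration only -- the in-memory ObjectStore class stands in for the real Amazon S3 API, and the database name and key layout are invented -- but it captures the shape of the workflow: export the database, push the export file to an object store, and restore by pulling it back.

```python
# Hypothetical sketch of the backup-to-cloud flow: push a database
# export to an object store, restore by pulling it back. The in-memory
# store below stands in for a real service such as Amazon S3.

class ObjectStore:
    """Toy stand-in for a cloud object store's put/get interface."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def backup_database(store: ObjectStore, db_name: str, export_bytes: bytes) -> str:
    """Store a database export under a predictable key and return it."""
    key = f"backups/{db_name}.dmp"
    store.put(key, export_bytes)
    return key

def restore_database(store: ObjectStore, key: str) -> bytes:
    """Fetch the export back out of the object store."""
    return store.get(key)

store = ObjectStore()
export = b"-- pretend this is a multi-gigabyte database export --"
key = backup_database(store, "orcl", export)
assert restore_database(store, key) == export
print(key)  # backups/orcl.dmp
```

The real service adds the hard parts -- authentication, multi-part transfer of very large files, durability guarantees -- but the round trip is the essence of what was announced.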

Gardner: Okay, let's go to Dave Linthicum on that. Dave, you're familiar with moving data around the cloud. It sounds like people will start getting comfortable with this from a risk and from a reliability/privacy/control issue level. And then it's a no-brainer to start moving fairly massive data sets, or for backups or extend-enterprise sharing or federation of data -- what have you, into cloud infrastructures.

How important from your perspective is what Oracle announced in conjunction with Amazon this week?

Linthicum: I think it's very important. The economics -- the fact that it's much cheaper to do cloud computing than on-premises systems -- can be proven almost every time. And the issues you run into around culture, and the data-protection concerns within enterprises ... I think those are falling away as time goes on. Go back in a time machine five years, start talking about running major enterprise applications delivered as SaaS, and they would have laughed at you.

Today, everyone is using Salesforce.com, and just a bunch of other SaaS-delivered applications. So enterprises are getting their minds around cloud computing, understanding the concept of it. So moving information into the cloud is not really much of a leap. You already have customer information existing on Salesforce.com, or other SaaS providers out there.

I think that this is one step in a direction where, in essence, we're going back in time a bit, moving back toward the time-sharing model. A lot of things are going to be pushed back out into the universe through economies of scale, and also through the value of communities. It can just be a much cheaper and more cost-effective way of doing it. I think it's going to be a huge push in the next two years.

Gardner: Does it seem reasonable that Oracle would test the waters on this, in terms of market acceptance, with Amazon? Once people get a little more familiar and comfortable with it, then Oracle comes out with its own cloud offerings?

Linthicum: Absolutely. I think that Oracle is going to have a cloud offering, IBM is going to have a cloud offering, Sun is going to have a cloud offering, and it's going to be the big talk in the industry over the next two or three years. I think they are just going to get out there and fight it out.

I think you are going to have a number of startups, too. They are going to have huge cloud offerings as well. They are going to compete with the big guys. And they can -- because it's very simple to put up infrastructure. It's fairly cost-effective, and you can get out there and start battling it out with them. Quite frankly, I think the more agile, smaller companies may win that war.

Virtualization for private clouds ...

Gardner: In other recent news, Brad Shimmin, we have heard quite a bit of virtualization, and cloud compute discussions from VMware, from Citrix, and from HP. We saw some acquisition from Red Hat that brings them into the hypervisor space. Maybe you can help our listeners understand a little bit better the relationship between virtualization, management and platform vendors, and how this whole notion of private or enterprise clouds works.

Shimmin: It depends on the perspective we have, right? It depends on whether we are talking about virtualizing the datacenter, virtualizing the desktop (VDI), or moving facets of the datacenter to the cloud. Whether you are trying to understand how, as we were just talking about, smaller players are able to use things like Amazon EC2 to get into the market -- or we are talking about moving the desktop to a datacenter cloud -- what I want to understand as a customer is just what the SLAs and protections are from these providers, whether it's IBM or Amazon. And, by the way, another one we need to mention is Cisco, which will be using the WebEx platform as a SaaS platform and solution for the enterprise.

The point is that as a customer you don't just want to know what the [performance reliability figures] are, you want to know what sort of wrapper these vendors are putting around their solutions for things like security and policy management enforcements. It's not just the fact that they will be able to secure the data, but it's about being able to control and manage the data, and have visibility into the data; whether it's something that's sitting in some sort of virtualized instance in your own datacenter, or whether it's something that's sitting in some federated system that might be shared between Cisco and Amazon.

Gardner: I found it interesting that these vendors are basically tripping over themselves and rushing out to the market, way before these private clouds have even established themselves. Yet the vendors are declaring that they have the infrastructure and the approach to do it. It reminds me of a platform, or even operating system, land grab -- getting there first, establishing some of the effective standards, and coming up with industry-common implementations gives them an opportunity to, at some level, create the de facto means of portability.

This is a layer above virtualization. And virtualization is there to bring all the legacy stuff into play, but what do you do with the new applications? What do you do with the new services? Dave Linthicum, what are your thoughts on a meta-operating system in the cloud? Are we in a kind of a race to be first to that?

Linthicum: I think that's a ways off at this point. I think people are going to put aspects of the infrastructure up in the cloud first. And I think that the platform-as-a-service (PaaS) and the ability to provide a development infrastructure, storage infrastructure, some deployment infrastructure, and things like that -- that is all going to be a bit of mix and match. I think people are going to do little tactical projects to kind of dip their toe in the water to see if it is viable.

However as we go forward, I think that's the destination. If you look at how everything is going, I think everything is going to be pushed up into the cloud. People are basically going to have virtual platforms in the cloud, and that's how they are going to drive it. Just from a cost standpoint, everything we just discussed, the advantages are going to be for those who get there first.

I think that, very much like the early adoption of the Web back in the 1990s, this is going to be the same kind of land grab, the same kind of land rush. Ultimately, you are going to find that 60 percent to 80 percent of business processes over the next 10 years are going to be outsourced.

Gardner: What about this issue of data, these massive data sets, and bringing some of that up into a cloud? Is it going to be just standards-based interoperability for my data set and your data set to play well with each other? To what level are we hung up by different cloud implementations, and therefore perhaps also different data implementations? Does that need to be solved?

Linthicum: Yes, I think it does. I think that you are going to find that integration does occur in the clouds, just like it does within the enterprise, from enterprise to enterprise. The reality is that people have information up there with different semantics, different data formats, and all of that stuff has to be transferred one to another.

I think that the idea of integration in the cloud, which I have been involved with personally over the last 10 years, may actually start to be used. People are going to have to do transformation, control, filtering -- all of these things -- as information moves between these partitions out in the universe. Ultimately, integration is going to be easier. We know a lot more than we did 15 years ago when I wrote the book, Enterprise Application Integration. But I think it's still going to be a needed and necessary thing. So maybe the integration-in-the-cloud companies should start pushing forward.

Kobielus: I hear what you're saying. I think that's an important point, which is that these data warehousing clouds -- data warehouses that are external to the firewall, in multi-tenant environments -- are multi-domain, multi-entity data warehouses with strict separations between the various domains, which are often associated with particular customers. But for something like a supply chain application, the cloud is probably the best place to put all that data, so that companies, suppliers, and customers all have access to common pooled data in a common, externally hosted environment.

What that raises, then, is that data warehouses in the cloud really become data federation in the cloud. All these different data sets, with their divergent schemas and so forth, need to be normalized to a common semantic layer in the cloud, provided by that cloud vendor. So then you are into the data federation vendors that have a huge footprint in the enterprise; those guys need to provide their capabilities in the cloud for these types of supply chain and B2B applications.
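Jim's "common semantic layer" idea can be made concrete with a toy example. In the Python sketch below, the source names, field names, and mappings are all invented for illustration; the point is only that a federation layer rewrites each tenant's divergent schema into one common schema before the pooled data is queried.

```python
# Toy illustration of semantic-layer normalization across a federated,
# multi-tenant data set. All names and mappings here are hypothetical.

COMMON_FIELDS = ("customer_id", "order_total")

# Per-source mappings from each common field name to that source's name.
SCHEMA_MAPPINGS = {
    "supplier_a": {"customer_id": "custno", "order_total": "amt"},
    "supplier_b": {"customer_id": "client_ref", "order_total": "total_usd"},
}

def normalize(source: str, record: dict) -> dict:
    """Rewrite one source record into the common schema."""
    mapping = SCHEMA_MAPPINGS[source]
    return {field: record[mapping[field]] for field in COMMON_FIELDS}

def federated_query(datasets: dict) -> list:
    """Pool records from every source under the common schema."""
    return [normalize(src, rec)
            for src, recs in datasets.items()
            for rec in recs]

rows = federated_query({
    "supplier_a": [{"custno": 101, "amt": 250.0}],
    "supplier_b": [{"client_ref": 202, "total_usd": 75.0}],
})
print(rows)
# [{'customer_id': 101, 'order_total': 250.0},
#  {'customer_id': 202, 'order_total': 75.0}]
```

Real federation products add query pushdown, type coercion, and governance on top, but the schema-mapping step is the core of what has to move into the cloud.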

I am talking with a couple of companies, like Composite Software and some others, that have well-established data federation and virtualization layers. Those guys need to get cracking to put a lot of that into a cloud environment, to enable this level of data integration and federation in the cloud going forward, and to make it scalable.

Gardner: Well I think we will have to leave it there. We have been discussing announcements from Oracle OpenWorld, other news in the virtualization space, and how these relate to the future of "extreme BI," as well as what cloud infrastructures might look like from a variety of vendors in the future. I want to thank our panel for joining us for BriefingsDirect Analyst Insights Edition, Vol. 30.

I also want to thank our charter sponsor for supporting our podcast, Active Endpoints, maker of the ActiveVOS visual orchestration system, and Hewlett-Packard via the HP Live! Podcast Series. This is Dana Gardner, principal analyst at Interarbor Solutions, thanks for listening, and come back next time.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsors: Active Endpoints, Hewlett-Packard.

Transcript of BriefingsDirect podcast on Exadata, extreme BI and cloud computing. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Wednesday, December 19, 2007

Holiday Peak Season Hits for Retailers Alibris and QVC -- A Logistics and Shipping Carol

Transcript of BriefingsDirect podcast on peak season shipping efficiencies and UPS retail solutions with Alibris and QVC.

Listen to the podcast here. Sponsor: UPS.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions and you're listening to BriefingsDirect.

Today, a sponsored podcast discussion about the peak holiday season for retail shopping -- online and via television -- and the impact that this large bump in the road has logistically and technically for some major retailers.

We’re going to discuss how Alibris, an online media and bookseller, as well as QVC, a global multimedia shopping network, handle this peak demand issue. The peak is culminating for such shippers as UPS this week, right around Dec. 19, 2007.

We’re going to talk about how the end-user in this era of higher expectations is now accustomed to making a phone call or going online to tap in a few keystrokes, and then -- like Santa himself -- having a package show up within a day or two. It's instant gratification, if you will, from the logistics point-of-view.

Helping us understand how this modern miracle can be accomplished at such high scale and with such a huge amount of additional capacity required during the November and December shopping period, we’re joined by two guests. We’re going to be talking with Mark Nason, vice president of operations at Alibris, and also Andy Quay, vice president of outbound transportation at QVC. I want to welcome you both to the show.

Mark Nason: Thank you, Dana.

Gardner: Tell us a little bit about what’s different now for Alibris, given the peak season demands, over just a few years ago. Have the expectations of the end-user really evolved, and how do you maintain that sort of instant gratification despite the level of complexity required?

Nason: What we strive for is a consistent customer experience. Through the online order process, shoppers have come to expect a routine that is reliable, accurate, timely, and customer-centric. For us to do that internally it means that we prepare for this season throughout the year. The same challenges that we have are just intensified during this holiday time-period.

Gardner: For those who might not be familiar, tell us a little about Alibris. You sell books, used books, out-of-print books, rare media and other media -- and not just directly, but through an online network of independent booksellers and retailers. Tell us more about how that works.

Nason: Alibris has books you thought you would never find. These are books, music, movies, things in the secondary market with much more variety, and that aren’t necessarily found in your local new bookseller or local media store.

We aggregate -- through the use of technology -- the selection of thousands of sellers worldwide. That allows sellers to list things and standardize what they have in their store through the use of a central catalogue, and allows customers to find what they're looking for when it comes to a book or title on some subject that isn’t readily available through their local new books store or media seller.

Gardner: Now, this is a very substantial undertaking. We're talking about something on the order of 70 million books from a network of some 10,000 booksellers in 65 or more countries. Is that right?

Nason: Roughly, that’s correct. Going in and out of the network at any given time, we've got thousands of sellers with literally millions of book and other media titles. These need to be updated, not only when they are sold or added, but also when they are priced. Prices are constantly changing. It’s a very dynamic market.

Gardner: What is the difference in terms of the volume that you manage from your slowest time of the year compared to this peak holiday period, from mid-November through December?

Nason: It’s roughly 100 percent.

Gardner: Wow!

Nason: In this industry there are actually two peak time periods. We experience this during the back-to-school season that occurs both in January and the latter-half of August and into September.

Gardner: So at the end of the calendar year you deal with the holidays, but also for those college students who are entering into their second semester?

Nason: Exactly. Our peak season associated with the holidays in December extends well into January and even the first week of February.

Gardner: Given this network and the scale and volume and the number of different players, how do you manage a consistent response to your customers, even with a 100 percent increase at the peak season?

Nason: Well, you hit on the term we use a lot -- and that is "managing" the complexity of the arrangement. We have to be sure there is bandwidth available. It’s not just staffing and workstations per se. The technology behind it has to handle the workload on the website, and through to our service partners, which we call our B2B partners. Their volume increases as well.

So all the file sizes, if you will, during the transfer processes are larger, and there is just more for everybody to do. That bandwidth has to be available, and it has to be fully functional at the smaller size, in order for it to function in its larger form.

Gardner: I assume this isn’t something you can do entirely on your own, that you depend on partners, some of those B2B folks you mentioned. Tell us a little bit about some of the major ones, and how they help you ramp up.

Nason: In the area of fulfillment, we rely heavily on our third-party logistics partners, which include carriers. At our distribution centers, typically we lease space, equipment, and the labor required to keep up with the volume.

Then with our B2B partners -- those are the folks that buy from us on a wholesale or distribution basis -- we work out with them ahead of time what their volume estimates might be and what their demands on us would be. Then we work on scheduling when those files might come through, so we can be proactive in fulfilling those orders.

Gardner: When it comes to the actual delivery of the package, tell us how that works and how you manage that complexity and/or scale.

Nason: Well, we have a benefit in that we are in locations that have scalable capacity available from the carriers. That includes lift capacity at the airport, trucking capacity for the highway, and, of course, railheads. These are all issues we are sensitive to, when it comes to informing our carriers and other suppliers that we rely on, by giving them estimates of what we expect our volume to be. It gives them the lead time they need to have capacity there for us.

Gardner: I suppose communication is essential. Is there a higher level of integration handoff between your systems and their systems? Is this entering a more automated level?

Nason: It is, year-round. For peak season it doesn’t necessarily change in that form. The process remains. However, we may have multiple pick-ups scheduled throughout the day from our primary carriers, and/or we arrange special holiday calendar scheduling with those carriers for pick-up, perhaps on a Saturday, or twice on Mondays. If they are sensitive to weather or traffic delays, for example, we know the terminals they need to go through.

Gardner: How about returns? Is that something that you work with these carriers on as well? Or is that something you handle separately?

Nason: Returns are a fundamental part of our business. In fact, we do our best to give the customer the confidence of knowing that by purchasing in the secondary market, the transaction is indemnified, and returns are a definite part of our business on a day-to-day basis.

Gardner: What can we expect in the future? Obviously this volume continues, the expectations rise, and people are doing more types of things online. I suppose college students have been brought up with this, rather than it being something they have learned. It’s something that has always been there.

Do you see any prospects in the future for a higher level of technology need or collaboration need, how can we scale even further?

Nason: Constantly, the improvements in technology challenge the process, and managing the complexity is what you weigh against streamlining even further what we have available -- in particular, optimizing inter-modal transport. For example, with fuel costs skyrocketing, and the cost of everyone's time going up, through the use of technology we look for opportunities on back-haul lanes, or in getting partial loads filled before they move, without sacrificing the service interval.

These are the kinds of things that technology allows when it's managed properly. Of course, another layer of technology has to be considered from the complexity standpoint before you can be successful with it.

Gardner: Is there anything in the future you would like to see from such carriers as UPS, as they try to become your top partners on all of this?

Nason: Integration is the key, and by that I mean the features of service that they provide. It’s not simply transportation; it’s the trackability, and it’s scaling -- both on the volume side and in allowing us to give the customer information about the order, when it will be there, or any exceptions. They're an extension of Alibris in terms of what the customer sees for the end-to-end transaction.

Gardner: Fine, thanks. Now we’re going to talk with Andy Quay, the vice president of outbound transportation at QVC.

QVC has been having a very busy holiday peak season this year. And QVC, of course, has long played an illustrious, pioneering role in retail, both through television and cable, as well as online.

Welcome, Andy. Tell us a little bit about QVC and your story. How long have you been there?

Andy Quay: Well, I am celebrating my 21st anniversary this December. So I can say I have been through every peak season.

Although peak season 20 some years ago was nothing compared to what we are dealing with now. This has been an evolutionary process as our business has grown and become accepted by consumers across the country. More recently we’ve been able to develop with our website as well, which really augments our live television shows.

Gardner: Give us a sense of the numbers here. After 21 years this is quite a different ball game than when you started. What sort of volumes and what sort of records, if any, are we dealing with this year?

Quay: Well, I can tell you that in our first year in business, in December, 1986 -- and I still have the actual report, believe it or not -- we shipped 14,600 some-odd packages. We are currently shipping probably 350,000 to 450,000 packages a day at this point.

We've come a long way. We actually set a record this year by taking more than 870,000 orders in a 24-hour period on Nov. 11. This led to our typical busy season through the Thanksgiving holiday to the December Christmas season. We'll be shipping right up to Friday, Dec. 21 for delivery on Christmas.

Gardner: At QVC you sell a tremendous diversity of goods. Many of them you procure and deal with the supply chain yourselves, therefore cutting costs and offering quicker turnaround processing.

Tell us a little about the technology that goes into that, and perhaps also a little bit about what the expectations are now. Since people are used to clicking a button on their keyboard or making a quick phone call and then ... wow, a day or two later, the package arrives. Their expectations are pretty high.

Quay: That’s an excellent point. We’ve been seeing customer expectations get higher every year. More people are becoming familiar with this form of ordering, whether through the web or over the telephone.

I’ll also touch on the technology very briefly. We use an automated ordering system with voice response units that enable my wife, for example, to place an order in about 35 seconds. So that enables us to handle high volumes of orders. Using that technology has allowed us to take some 870,000 orders in a day.

The planning for this allows the supply chain to be very quick. We are like television broadcasts. We literally are scripting the show 24-hours in advance. So we can be very opportunistic. If we have a hot product, we can get it on the air very quickly and not have to worry about necessarily supplying 300 brick-and-mortar stores. Our turnaround time can be blindingly quick, depending upon how fast we can get the inventory into one of our distribution centers.

We currently have five distribution centers, and they are all along the East Coast of the U.S., and they are predominantly commodity driven. For example, we have specific commodities such as jewelry in one facility, and we have apparel and accessories as categories of goods in another facility. That lends itself to a challenge when people are ordering multiple items across commodities. We end up having to ship them separately. That’s a dilemma we have been struggling with as customers do more multi-category orders.

As I mentioned, the scripting of the SKUs for the broadcast is typically 24 hours prior, with the exception of Today's Special Value (TSV) show and other specific shows. We spend a great deal of time forecasting for the phone centers and the distribution carriers to ensure that we can take the orders in volume and ship them within 48 hours.

We are constantly focused on our cycle-time and in trying to turn those orders around and get them out the door as quickly as possible. To support this effort we probably have one of the largest "zone-jumping" operations in the country.

Gardner: And what does "zone-jumping" mean?

Quay: Zone jumping allows me to contract with truckload carriers to deliver our packages into the UPS network. We go to 14 different hubs across the country, in many cases using team drivers. This enables us to speed the delivery to the customer, and we’re constantly focused on the customer.

Gardner: And this must require quite a bit of integration, or at least interoperability in communications between your systems and UPS’s systems?

Quay: Absolutely, and we carefully plan leading up to the peak season we're in now. We literally begin planning this in June for what takes place during the holidays -- right up to Christmas Day.

We work very closely with UPS and their network planners, both ground and air, to ensure cost-efficient delivery to the customer. We actually sort packages for air shipments, during critical business periods, to optimize the UPS network.

Gardner: It really sounds like a just-in-time supply chain for retail.

Quay: It's as close as you can get it. As I sometimes say, it's "just-out-of-time"! We do certainly try for a quick turnaround.

Coming back to what you said earlier, as far as the competition goes it is getting more intense. The customer expectations are getting higher and higher. And, of course, we are trying to stay ahead of the curve.

Gardner: What's the difference between your peak season now and the more regular baseline volume of business? How much of an increase do you have to deal with during this period, between late November and mid- to late December?

Quay: Well, it ramps up considerably. We can go from 150,000 to 200,000 orders a day, to literally over 400,000 to 500,000 orders a day.

Gardner: So double, maybe triple, the volume?

Quay: Right. The other challenge I mentioned, the commodity-basis distribution that we operate on -- along with the volatility of our orders -- this all tends to focus on a single distribution center. We spend an inordinate amount of time trying to forecast volume, both for staffing and also planning with our carriers like UPS.

We want to know what buying is going to be shipping, at what distribution center, on what day. And that only compresses even more around the holiday period. We have specific cutoff times that the distribution center operations must hit in order to meet the customers' delivery date. We work very closely on when we dispatch trucks ... all of this leading up to our holiday cutoff sequence this week.

We try to maximize ground service versus the more expensive airfreight. I think we have done a very good job at penetrating UPS’s network to maximize ground delivery, all in an effort to keep the shipping and handling cost to the customers as low as possible.

Gardner: How about the future? Is this trend of that past 21 years sustainable? How far can we go?

Quay: I believe it is sustainable. Our web business is booming, with very high growth every year. And that really augments the television broadcast. We have, honestly, a fair amount of penetration, and we can still obtain more with our audiences.

Our cable broadcast is in 90 million-plus homes that actually receive our signal, but a relatively small portion actually purchase. So that’s my point. We have a long way to go to further penetrate and earn more customers. We have to get people to try us.

Gardner: And, of course, people are now also finding goods via Web search. For example, when they go to search for a piece of apparel, or a retail item, or some kind or a gift -- they might just go to, say, Google or Yahoo! or MSN, and type something in and end up on your web site. That gives you a whole new level of potential volume.

Quay: Well, it does, and we also make the website very well known. I am looking at our television show right now, and we have our www.qvc.com site advertised right on it. That provides an extended search capability. People are trying to do more shopping on the web, in addition to watching the television.

Gardner: We have synergies on the distribution side; we have synergies on customer acquisition and in using information to engage with partners. And so the technology is really in the middle of it all. And you also expect a tremendous amount of growth still to come.

Quay: Yes, absolutely. And it’s amazing, the different functions within QVC, the synergies that we work together internally. That goes from our merchandising to where we are sourcing product.

You mentioned supply chains, and the visibility of getting into the distribution center. Our merchants and programmers watch that like a hawk so they can script new items on the air. We have pre-scripted hours that we’re definitely looking to get certain products on.

The planning for the television broadcast is something that drives the back end of the supply chain. The coordination with our distribution centers -- as far as getting the operation forecast, staffed and fulfilled through shipping to our customers -- is outstanding.

Gardner: Well, it’s very impressive, given what you’ve done and all of these different plates that you need to keep spinning in the air -- while also keeping them coordinated. I really appreciate the daunting task, and that you have been able to reach this high level of efficiency.

Quay: Oh, we are not perfect yet. We are still working very hard to improve our service. It never slows down.

Gardner: Great. Thanks very much for your input. I have learned a bit more about this whole peak season, what really goes on behind the scenes at both QVC and Alibris. It seems like quite an accomplishment what you all are able to do at both organizations.

Nason: Well, thank you, Dana. Thanks for taking the time to hear about the Alibris story.

Gardner: Sure. This is Dana Gardner, principal analyst at Interarbor Solutions. We have been talking with Mark Nason, the vice president of operations at Alibris, about managing the peak season demand, and the logistics and technology required for a seamless customer experience.

We’ve also been joined by Andy Quay, vice president of outbound transportation, at the QVC shopping network.

Thanks to our listeners for joining on this BriefingsDirect sponsored podcast. Come back and listen again next time.

Listen to the podcast here. Sponsor: UPS.

Transcript of BriefingsDirect podcast on peak season shipping efficiencies and UPS retail solutions. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.

Tuesday, November 13, 2007

BriefingsDirect SOA Insights Analysts Examine Microsoft SOA and Evaluate Green IT

Edited transcript of weekly BriefingsDirect[TM] SOA Insights Edition podcast, recorded October 26, 2007.

Listen to the podcast here.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect SOA Insights Edition, Volume 27. A weekly discussion and dissection of Service-Oriented Architecture (SOA) related news and events with a panel of industry analysts, experts and guests.

I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. We’re joined today by a handful of prominent IT analysts who cover SOA and related areas of technology, business, and productivity.

Topics we're going to discuss this week include the SOA & Business Process Conference held by Microsoft in Redmond, Wash., at which Microsoft announced several product roadmaps and some strategy direction around SOA.

We're also going to discuss issues around "Green SOA." How will SOA impact companies, as they attempt to decrease their energy footprint, perhaps become kinder and gentler to the environment and planet earth, and what SOA might bring to the table in terms of a long-term return on investment (ROI), when energy related issues are factored in?

To help us sort through these issues, we’re joined this week by Jim Kobielus. He is a principal analyst at Current Analysis. Welcome back, Jim.

Jim Kobielus: Hi, Dana. Hello, everybody.

Gardner: We're also joined by Neil Macehiter, principal analyst at Macehiter Ward-Dutton in the UK. Thanks for coming along, Neil.

Neil Macehiter: Hi, Dana. Hi, everyone.

Gardner: Joe McKendrick, an independent analyst and blogger. Welcome back to the show, Joe.

Joe McKendrick: Thanks, Dana, glad to be here.


On Microsoft-Oriented Architecture and the SOA Confab ...

Gardner: Let’s dive into our number one topic today. I call it Microsoft Oriented Architecture -- MOA, if you will -- because what we've been hearing so far from Microsoft about SOA relates primarily to their tools and infrastructure. We did hear this week some interesting discussion about modeling, which seems to be a major topic among the discussions held at this conference on Tuesday, Oct. 30.

It's going to be several years before these products arrive -- we probably won't even see betas until well into 2008 on a number of these products. Part of the logic seems to be that you can write anywhere, have flexibility in your tooling, and then coalesce around a variety of models or modeling approaches to execute through an über or federated modeling approach that Microsoft seems to be developing. That would then execute or deploy services on Microsoft foundational infrastructure.

I'm going to assume that there is also going to be loosely coupled interoperability with services from a variety of different origins and underlying infrastructure environments, but Microsoft seems to be looking strategically at this modeling layer, as to where it wants to bring value even if it’s late to the game.

Let’s start with Jim Kobielus. Tell us a little bit about whether you view Microsoft's moves as expanding on your understanding of their take on SOA, and what do you make of this emphasis on modeling?

Kobielus: First, the SOA universe is heading toward a model-driven paradigm for distributed service development and orchestration, and that's been clear for several years now. What Microsoft discussed this week at its SOA and BPM conference was nothing radically new for the industry or for Microsoft.

Over time, with Visual Studio and the .NET environment, they've been increasingly moving toward a more purely visual paradigm. "Visual" is in the very name of their development tool. Looking at the news this week from Microsoft on the so-called Oslo initiative, they are going to be enhancing a variety of products -- Visual Studio, BizTalk Server, BizTalk Services, and Microsoft System Center -- bringing together the various metadata repositories underlying those products to enable a greater model-driven approach to distributed development.

Gardner: They get into some BizTalk too, right?

Kobielus: Yes, BizTalk Server for premises-based delivery, and BizTalk Services, software as a service (SaaS), as the channel through which it can deliver BizTalk functionality going forward. I had to pinch myself and ask what year this is. Oh, it's 2007, and Microsoft is finally getting modeling religion. I still remember in 2003-2004 there was a big upswell of industry interest in model-driven architecture (MDA).

Gardner: We've had some standards developed in the industry since then too, right?

Kobielus: I was thinking, okay, that’s great, Microsoft, I have no problem with your model-driven approach. You're two, three, or four years behind the curve in terms of getting religion. That’s okay. It’s still taking a while for the industry to completely mobilize around this.

In other words, rather than developing applications, they develop business models and technology models to varying degrees of depth, and then use those models to automatically generate the appropriate code and build the appropriate services. That's a given. One thing that confuses me, puzzles me, or maybe just dismays me about Microsoft's announcement is that there isn't any footprint here for the actual standards that have been developed, like OMG's Unified Modeling Language (UML), for example.

Microsoft, for some reason I still haven’t been able to divine, is also steering clear of UML in terms of their repositories. I'm not getting any sense that there is a UDDI story here or any other standards angle to these converged repositories that they will be rolling out within their various tools. So, it really is a Microsoft Oriented Architecture. They're building proprietary interfaces. I thought they were pretty much behind open standards. Now, unless it’s actually 2003, I have to go and check my calendar.

Gardner: They did mention that they're going to be working on a repository technology for Oslo metadata, which will apparently be built into its infrastructure services and tools. There was no mention of standards, and part of the conceptual framework around SOA is that there has to be a fairly significant amount of standardization in order to make this inclusion of services within a large business process level of activity possible.

Some of the infrastructure, be it repository, ESB, management, or governance, needs to be quite open. So, you're saying you're not sure that you're seeing that level of openness. It reminds us of the CORBA versus COM and DCOM situation, where OMG was involved and supported the development of CORBA.

Let’s go to Neil Macehiter. Do you see this as MOA or do you think that they are going to have to be open, if it’s going to be SOA values?

Macehiter: I don’t see this as exclusively Microsoft-oriented, by any stretch. I’d also question Jim’s comment on there being nothing radically new here. There are a couple of elements to the strategy that Microsoft’s outlined that differentiate it from the model-driven approaches of the past.

The first is that they are actually encompassing management into this modeling framework, and they're planning to support some standards around things like the Service Modeling Language (SML), which will allow the transition from development through to operations. So, this is actually about the model-driven life cycle.

The second element where I see some difference is that Microsoft is trying to extend this common model across software that resides on premises and software that resides in the cloud somewhere as services. So, it has a common framework for delivering what Microsoft refers to as software plus services. In terms of standards support, Microsoft has always been lukewarm about UML.

A few years ago, they were talking about using domain-specific languages (DSLs), which underpin elements of Visual Studio that currently exist, as a way of supporting different modeling paradigms. What we will see is the resurgence of DSLs as a means of enabling different modeling approaches to be applied here. The comment regarding UDDI covers only one element of the repository, because where Microsoft is really trying to drive this is around a repository for models -- for an SML model, or for the models developed in Visual Studio -- which is certainly broader.

Gardner: There really aren’t any standards for unifying modeling or repository for various models.

Macehiter: No, so this smacks of being a very ambitious strategy from Microsoft, which is trying to pull together threads from different elements of the overall IT environment. You've got elements of infrastructure as a service, with things like the BizTalk Services, which has been the domain of large Web platforms. You've got this notion of computer applications in BPM which is something people like IBM, BEA, Software AG, etc. have been promoting.

Microsoft has got a broad vision. We also mustn't forget that what underpins this is the vision to have this execution framework for models. The models will actually be executed within the .NET Framework in a future iteration. That will be based on the Windows Communication Foundation, which itself sits on top of the WS-* standards, and also on top of Windows Workflow Foundation.

So, that ambitious vision is still some way off, as you mentioned -- beta in 2008, production in 2009. Microsoft is going to have to bring its ISVs and systems integrator (SI) community along to really turn this from being an architecture that's oriented towards Microsoft to something broader.

Gardner: Now, Neil, if Microsoft is, in a sense, leapfrogging the market, trying to project what things are going to be several years out, recognizing that there is going to be a variety of modeling approaches, and that modeling is going to be essential for making SOA inclusive, then they are also going to be federating, but doing that vis-à-vis their frameworks and foundations.

If there is anything in the past that has spurred on industry standards, it's been when Microsoft puts a stake in the ground and says, “We want to be the 'blank,'” which, in this case, would be the place where you would federate models.

Kobielus: I’m glad you mentioned the word "federation" in this context, because I wanted to make a point. I agree with Neil. I’m not totally down on what Microsoft is doing. Clearly, they had to go beyond UML in terms of a modeling language, as you said, because UML doesn’t have the constructs to do deployment and management of distributed services and so forth. I understand that. What disturbs me right now about what Microsoft is doing is that if you look at the last few years, Microsoft has gotten a lot better when they are ahead of standards.

When they're innovating in advance of any standards, they have done a better job of catalyzing a community of partners to build public specs. For example, when Microsoft went ahead of SAML and the Liberty Alliance Federated Identity Standards a few years back, they wanted to do things that weren't being addressed by those groups.

Microsoft put together an alliance around a spec called WS-Federation, which has had sort of hit-and-miss adoption in the market, but there have been a variety of other WS-* standards or specifications that Microsoft has also helped to catalyze the industry around in advance of any formal, de jure standard. I'd like to see it do the same thing now in the realm of modeling.

Macehiter: My guess is that’s exactly what they’re doing by putting a stake in the ground this early. "This is coming from us. There are going to be a lot of developers out there using our tools that are going to be populating our repositories. If you're sensible, you're going to federate with us and, therefore, let’s get the dialogue going." I think that’s partly why the stake is out there as early as it is.

Gardner: Let’s go to Joe McKendrick, Joe, we've seen instances in the past where – whether they're trailing or leading a particular trend or technology -- Microsoft has such clout and influence in the market that they either can establish de-facto standards or they will spur others to get chummy with one another to diminish the Microsoft threat. Do you expect that Microsoft's saying they're going to get in the modeling federation and repository business will prompt more cooperation and perhaps a faster federated standard approach in the rest of the market?

McKendrick: Definitely more and more competitive responses. Perhaps you’ll see IBM, BEA, Oracle, or whatever other entity propose their own approaches. It's great that Microsoft is talking SOA now. It's only been about a year that they have really been active.

Gardner: They didn’t even want to use the acronym. Did they?

McKendrick: I think what's behind this is that Microsoft has always followed the mass market. Microsoft’s sweet spot is the small- and the medium-business sector. They have a presence in the Fortune 500, but where they’ve been strong is the small to medium businesses, and these are the companies that don’t have the resources to form committees and spend months anguishing over an enterprise architectural approach, planning things out. They may be driven by the development department, but these folks have problems that they need to address immediately. They need a focus and to put some solutions in place to resolve issues with transactions, and so forth.

Gardner: That’s interesting, because for at least 10 years Microsoft has had, what shall we say, comprehensive data-center envy. They've seen themselves at the department level. They've been around the edges. They've had tremendous success with the client and productivity applications, along with some major components, including directory services and general operating-system support for servers, and, of course, their tools and frameworks.

However, there are still very few Fortune 500 or Global 2000 companies that are pure Microsoft shops. In many respects, enterprise Java, distributed computing, and open-standards approaches have dominated the core environment in architecture for these larger enterprises. If Microsoft is going to get into SOA, they're in a better position to do what we’ve been calling Guerrilla SOA, which is on a project-by-project basis.

If you had a lot of grassroots, small-developer, department-level service-oriented activities that Microsoft infrastructure would perhaps be better positioned to be dominant in, then that’s going to leave them with these islands of services. A federated modeling level or abstraction layer would be very fortuitous for them. Does anyone have any thoughts about the comprehensive enterprise-wide SOA approach that we have heard from other vendors, versus what Microsoft might be doing, which might not be comprehensive, but could be, in a sense, grassroots even within these larger enterprises?

Macehiter: The other vendors in the non-Microsoft world might talk about enterprise-wide SOA initiatives and organizations that are planning to adopt SOA on an enterprise-wide basis, based on their infrastructure. The reality is that the number of organizations that have actually gone that far is still comparatively small, as we continually see with the same case-study customers being reintroduced again and again.

Microsoft will have to adopt an alternative model. For example, I think Microsoft will follow a similar model and exploit the base they have around the developer community within organizations with things like Visual Studio.

SQL Server is pretty well deployed in enterprises, and elements of the application platform are bundled into the OS already. So, they're quite well-positioned to address these departmental opportunities, and then scale out.

This is where some of the capabilities that we talked about, particularly in combination with things like BizTalk Services, allow organizations to utilize workflow capabilities and identity management capabilities in the cloud to reduce the management overhead. The other potential route for Microsoft is through the ISV community.

Gardner: I suppose one counterpoint to that is that Microsoft is well positioned, with its tools, frameworks, skill set, and entrenched positions, to be well exploited for creating services. But when it comes to modeling business processes, we're not really talking about a Visual Studio-level user or developer. Even if the tools are visually oriented, the people or teams that are going to be in a position to construct, amend, develop, and refine these business processes are going to be at a much higher level. They're going to be architects and business analysts. They're going to be different types of people.

They are going to be people who have a horizontal view of an entire business process across the heterogeneous environments and across organizational boundaries. Microsoft is well positioned within these grassroots elements. I wonder if they can, through a modeling federation layer and benefit, get themselves to the place where they are going to be the tools and repository for these analysts and architect level thinkers.

Kobielus: I think they will, but they need to play all this Oslo technology into their Dynamics strategy for the line-of-business applications. The analysts that operate in the Dynamics world are really the business analysts, the business-process re-engineering analysts, etc., who could really use this higher-layer entity-modeling environment that Microsoft is putting there. In other words, the analysts we are discussing are the analysts who work in the realm of the SAP or Oracle applications, or the Dynamics applications, not the departmental database-application developers.

Macehiter: The other community there would be the SIs, who do a lot of this work on behalf of organizations. As part of the Oslo messaging, Microsoft has talked about this sort of capability being much more model-driven, at a higher level of abstraction, as a means to allow SIs to become more like ISVs, in terms of delivering more complete solutions. That’s another key community, where Microsoft just doesn't compete, in contrast to IBM, which is competing directly with the likes of Accenture and CapGemini. That’s another community that Microsoft will be looking to work very closely with around this.

Gardner: In the past, Microsoft did very well by targeting the hearts and minds of developers. Now, it sounds like they are going to be targeting the hearts and minds of business analysts, architects, and business-process-oriented developers. Therefore, they can position themselves as a neutral third party in the professional services realm. They can try to undermine IBM's infrastructure and technology approach through this channel benefit of working, at the modeling and business-process-construct level, with these third-party SIs via good tooling and ease of deployment. Is that it?

McKendrick: As an addendum to what you just said, Microsoft isn't necessarily going to go directly after customers of IBM, BEA, etc. Microsoft is offering this potential to companies that have been under-served, companies that cannot afford extensive SOA consulting or integration work. It's going after the SMB sector, the Great Plains and Dynamics applications that Jim spoke of. Those are SMB applications. The big companies will go to SAP.

Gardner: So, Microsoft could have something that would be a package more amenable to a company, of say, 300-to-2,000 seats, maybe even 300-to-1,000.

McKendrick: Exactly, Microsoft is the disrupter in this case. There are other markets where Microsoft is being disrupted by Web 2.0, but in SOA, Microsoft is playing the role of disrupter and I think that’s what their strategy is.

Kobielus: I want to add one last twist here. I agree with everything Joe said. Also, the Oslo strategy, the modeling tools, will become very important in Microsoft's overall strategy for the master data management (MDM) market, which they have already announced. A year from now, Microsoft will release its first true MDM product, incorporating, for example, the hierarchy-management and cross-domain catalog-management capabilities from its strategic acquisitions.

What Microsoft really needs to be feature-competitive in the MDM market is a model-driven, visual business-process development and stewardship tool. That way, teams of business and technical analysts can work together in a customer data-integration, product information-management, or financial-consolidation hub environment to build the complex business logic into complex applications under the heading of MDM. If Microsoft's MDM team knows what they are doing, and I assume they do, then they should definitely align with the Oslo initiative, because it will be critical for Microsoft to compete with IBM and Oracle in this space.

Gardner: As we've discussed on this show, the whole data side of SOA (creating common views, cleansing and translating data, managing schemas and taxonomies, and MDM) is extremely important. You can't do SOA well if you don't have a coherent data-services strategy. Microsoft is one of the few vendors that can provide that, in addition to many of these other things we're discussing. So, that's a point well taken. Now, to Joe's point about the SMB market, not only would there be a do-it-yourself, on-premises approach to SOA, but there are also SaaS and over-the-wire approaches.

We've heard a little bit about a forthcoming protocol -- BizTalk Services 1 -- and that probably will relate to Microsoft's Live and other online-based approaches. The end users, be they architects, analysts, or people who are going to be crafting business processes, if they're using a strictly Web-based approach, don't know or care what's going on beneath the covers in terms of foundations, frameworks, operating systems, and runtime environments. They are simply looking for ease of acquisition and use, productivity, scale, and reliability.

It strikes me that Microsoft is now working towards what might be more of a Salesforce.com, Google, or Amazon-type environment, where increasingly SOA is off the wire entirely. It really is a matter of how you model and tool those services that becomes a king maker in the market. Any thoughts on how Microsoft is positioning these products for that kind of a play?

Macehiter: Definitely. Software plus services, which is the way that Microsoft articulates this partitioning of capability between on-premises software and services delivered in the cloud, is definitely a key aspect of the Oslo strategy, and BizTalk Services is just one element of that.

For example, if an organization needs to run a message flow that crosses organizational boundaries over the firewall, BizTalk Services will provide a capability that allows you to express that declaratively. You can see that evolving, but that's more infrastructure services. Clearly, another approach might be a higher-level service, an application-type service, and the architecture that Microsoft is talking about attempts to address that as well.

This is definitely a key element of the story, which is about making sure that Microsoft remains relevant in the face of an increasing shift, particularly in the SMB market, toward services delivered in the cloud. It's about combining the client, the server, and services, and providing models, in terms of the way you think about the applications you need and the way you manage and deploy them, that can encompass all of that without incurring significant effort.

Gardner: Perhaps the common denominator between the on-premises approach -- be it departmental level, enterprise-wide, SMB, through the cloud, or though an ecology of providers -- is at this modeling layer. This is the inflection point where, no matter how you do SOA, you’re going to want to be in a position to do this well, with ease, and across a variety of different approaches. Is that fair?

Macehiter: Yes. That's why this is a better attempt by Microsoft to change the game and push the boundaries. It's not just model-driven development revisited in a .NET world. This is broader than that.

Gardner: This is classic Microsoft strategy, leapfrogging and trying to get to what the inflection point or the lock-in point might be, and then rushing to it and taking advantage of its entrenched positions.

McKendrick: For the mass market, exactly.

Gardner: Let’s move on to our next subject, now that we’ve put that one to rest. The implications are that Microsoft is not out of the SOA game, that it's interested in playing to win, but, once again, on its own terms based on its classic market and technology strategies.

McKendrick: And reaching out to companies that could not afford SOA or comprehensive SOA, which it's done in the past.


On Green SOA and the IT Energy-Use Factor ...

Gardner: Let's move on to our new subject, Green SOA. SOA approaches and methodologies bring together abstractions of IT resources, delivering higher-level productivity through business process management, organization, and governance. How does that possibly impact Green IT?

It's a very big topic today. In fact, Gartner named Green IT number one among the top-ten strategic technology areas it identified for 2008. How does SOA impact this? Jim Kobielus, you have given this a lot of thought. Give us the lay of the land.

Kobielus: Thank you, Dana. Clearly, in our culture the Green theme keeps growing larger in all of our lives, and I'm not going to belabor all the ramifications of Green. In terms of Green, as it relates to SOA, you mentioned just a moment ago, Dana, the whole notion of SOA is based on abstraction, service contracts, and decoupling of the external calling interfaces from the internal implementations of various services. Green smashes through that entire paradigm, because Green is about as concrete as you get.

SOA focuses on maximizing the sharing, reuse, and interoperability of distributed services or resources, application logic, or data across distributed fabrics. When they're designing SOA applications, developers aren't necessarily incentivized, nor do they even have the inclination, to think through the physical-layer ramifications of the services they're designing and deploying, but Green is all about the physical layer.

In other words, Green is all about how human beings, as a species, make wise use and stewardship of the earth's nonrenewable, irreplaceable resources: energy supplies, fossil fuels, and so forth. But it's also larger than that, obviously. How do we maintain a sustainable culture and existence on this planet in terms of wise use of other material resources, like minerals, the soil, and so on?

Gardner: Isn't this all about electricity, when it comes to IT?

Kobielus: Yes, first and foremost, it’s pitched at the energy level. In fact, just this morning in my inbox I got this from IBM: "Join us for the IBM Energy Efficiency Certificate Announcement Teleconference." They're going to talk about energy efficiency in the datacenter and best practices for energy efficiency. That’s obviously very much at the core of the Green theme.

Now, getting to the point of how SOA can contribute to the greening of the world: at the heart of SOA is the whole notion of consolidation -- consolidation of application logic, of servers, and of datacenters. In other words, it essentially reduces the physical footprint of the services and applications that we deploy out to the mesh or the fabric.

Gardner: Aren't those things independent of SOA? I mean, if you're doing datacenter consolidation and modernization, if you're moving from proprietary to standards-based architectures, what has that got to do with SOA?

Kobielus: Well, SOA is predicated on sharing and reuse. Okay, you have a center of competency. You have one hunk of application logic that handles order processing in the organization. You standardize on that, and then everybody invokes it over the network. Over time, if SOA is successful, other centers of development, or other deployed instances of code that do similar things, will be decommissioned to enable maximum reuse of the best-of-breed order-processing technology that's out there.

As enterprises realize the ROI, the reuse and sharing should naturally lead to greater consolidation at all levels, including in the datacenter. Basically, reducing the footprint of SOA on the physical environment is what consolidation is all about.

Gardner: So, these trends that are running concurrently -- unification, consolidation, and virtualization -- allow you to better exploit those activities and perhaps double down on them: fewer instances of an application stack, but more opportunity to reuse the logic and the resources generally. It's a highly efficient approach that ultimately will save trees and put less CO2 into the atmosphere.

Kobielus: I want to go back to Microsoft. Four years ago, in 2003, I went to their analyst summit in Redmond. They presented something they called a service definition modeling language (SDML) as a proprietary spec, and a possible future spec, for modeling services and applications at both the application layer and the physical layer. An application gets developed, it gets orchestrated, it gets distributed across different nodes, and the spec lets you define the physical partitioning of that application across various servers. I thought: That's kind of interesting. They're taking a whack at modeling from the application down to the physical layer and thinking through the physical consequences of application-development activities.

Gardner: Another trend in the market is the SaaS approach, where we might acquire more types of services, perhaps on a granular level or wholesale level from Google, Salesforce, Amazon, or Microsoft, in which case they are running their datacenters. We have to assume, because they're on a subscription basis for their economics, that they are going to be highly motivated toward high-utilization, high-efficiency, low-footprint, low-energy consumption.

That will ultimately help the planet as well, because we wouldn't have umpteen datacenters in every single company of more than 150 people. We could start centralizing this, almost like a utility would. We would think that these large companies, as they put in these massive datacenters, could have the opportunity for a volume benefit in how they consume and purchase energy.

Gardner: Neil Macehiter, what do you make of this Green-SOA relationship?

Macehiter: We need to step back and look at what we are talking about. You mentioned ROI. If we look at this from a Green ROI perspective, organizations are not going to be looking at SOA as the first step in reducing their Green footprint. It's going to be about server and storage consolidation to reduce the power consumption, provide more efficient cooling, and management approaches to ensure that servers aren’t running when they don’t need to be. That’s going to give them much bigger Green bang for the buck.

Certainly, the ability to reuse and share services is going to have an impact in terms of reducing duplication, but in the broader scheme of things I see that contribution as comparatively small. The history we have is one of largely ignoring the implications of power and heat, until we get to the size of a Google or a Microsoft, where we have to start thinking about putting our datacenters next to large amounts of water, where we can get hydroelectric power.

So, IT has a contribution to make, but there isn't anything explicit in SOA approaches, beyond things like service reuse and sharing, that can really contribute. The economies of scale that you get from SaaS, in terms of exploiting those services, come from more effective use of datacenter resources. This is those organizations' business, and, given the constraints they operate under, they can't build datacenters big enough, because there are no power stations big enough to feed them.

Gardner: Your point is well taken. Maybe we're looking at this the wrong way. Maybe we’ve got it backwards. Maybe SOA, in some way, aids and abets Green activities. Maybe it's Green activities, as they consolidate, unify, seek high utilization, and storage, that will aid and abet SOA. As Gartner points out, in their number one strategic technology area for 2008, Green initiatives are going to direct companies in the way that they deploy and use technology towards a situation where they can better avail themselves of SOA principles. Does that sound right, Joe McKendrick?

McKendrick: In an indirect way, it sounds right, but I want to take an even further step back and look at what we have here. Frankly, the Green IT initiative is misguided, and the wrong questions are being asked about Green IT. Let me say that I have been active in environmental causes, and I have done consulting work with a company that has worked with utilities and EPRI, the Electric Power Research Institute, on energy-saving initiatives.

It's great that IT is emphasizing efficient datacenters, but what we need to look at is how much energy IT has saved the world in general. How much power is being saved as a result of IT initiatives? SOA rolls right into this. For example, how many business trips are not being taken now because of the availability of video conferencing, telecommuting, and telework? We need studies. I don't have the data on this, and there isn't any data out there that has really tracked it. In e-commerce, for example, how many stores have not been built because of e-commerce?

Gardner: Those are really good points: the overall amount of energy consumption in the world would be much greater without IT, and productivity much lower. It's very difficult to put all the cookie crumbs together and precisely measure the inputs and outputs, but that's not really the point. We're not talking about what we would have saved if we didn't have IT. The question is, what can we do to refine even further the energy we do have to use to create the IT that we have?

Macehiter: The reality is that we can't offset what we've saved in the past against what we're going to consume in the future. We're at a baseline, and it's not about apportioning blame between industries and saying, "Well, IT doesn't have to do so much, because we've done a lot in the past."

McKendrick: But, we are putting demands on IT, Neil. We're putting a lot of demands on IT for additional IT resources.

Macehiter: If you go into a large investment bank, and look at what proportion of their electricity consumption is consumed by IT, I'd hazard a guess that it's a pretty large chunk, alongside facilities.

McKendrick: And probably lots of demands are put on those datacenters, but how much energy is that saving because of additional services being put out to the world, being put out to society?

Gardner: What's your larger point, Joe, that we don’t need to worry too much about making IT more energy efficient because it's already done such a great job compared to the bricks-and-mortar, industrialized past?

McKendrick: The problem is, Dana, we don’t know. There are no studies. I'd love to see studies commissioned. I'd love to see our government or a private foundation fund some studies to find out how much energy IT has been saving us.

Kobielus: I agree with everything you guys are saying, because the issue is not so much reducing IT's footprint on the environment. It's reducing our species' overall footprint on the planet's resources. One thing to consider is whether we have more energy-efficient datacenters. Another is that, as more functionality gets pushed out to the periphery, in terms of PCs and departmental servers, the vast majority of IT is completely outside the datacenter.

Gardner: Jim, you are really talking about networked IT, so it's really about the Internet, right? The Internet has allowed for a "clicks" e-commerce approach, rather than a "bricks" heavy-industry approach. In that case, we're saying it's good that IT and the Internet have given us vast economies of scale, productivity, and efficiency, but that also requires a tremendous amount of electricity. So, isn't this really an argument for safe nuclear power, putting small nuclear reactors next to datacenters and perhaps not creating CO2?

Macehiter: Let's not forget that this isn't just about enterprise use of IT. If I look at my desk, as a consumer of IT, I've got a scanner, hard disk, two machines, screen, two wireless routers, and speakers that are all consuming electricity. Ten years ago, I just wouldn’t have had that. So, we have to look broader than the enterprise. We can get into a whole other rat’s nest, if we start into safe nuclear power or having wind farms near our datacenter.

Gardner: It's going to be NOA, that’s Nuclear-Oriented Architecture…

Kobielus: In the Wall Street Journal this morning, there was an article about Daylight Saving Time. This year, in the US, Daylight Saving Time was moved up at the beginning, into March, and extended by a week into November. So, this coming Sunday, we're finally going to let our clocks fall back to so-called Standard Time.

The article said that nobody has really done a study to show whether we're actually saving any energy from Daylight Saving Time. There have been no reliable studies done. So, when legislatures change these weeks, they're just assuming that, by having more hours of daylight in the evening, we use less illumination, and therefore the net consumption of energy goes down.

In fact, people now have darker mornings, and people tend to have morning-oriented lives. In the morning, people are quite often surfing the Web, viewing the stuff on their TiVo, and so on. So, net net, nobody even knows whether Daylight Saving Time, as a concept, is Green friendly.

Gardner: Common sense would lead you to believe that you're just robbing Peter to pay Paul on this one, right? Perhaps there are some lessons to be learned at that same level for IT. We think we're shrinking footprints in datacenters as we consolidate and unify, but we're also bringing more people online, and they have larger energy-consuming desktop or small-office environments like the one Neil described. If there are 400 million people with small offices and a billion people on the Internet, then clearly the growth is far and away outstripping whatever efficiencies we might bring to the table.

McKendrick: The efficiencies gained by IT might be outstripping any concerns about green footprints with datacenters. We need data. We need studies to look at this side of it. The U.S. Congress is talking about studying the energy efficiency of datacenters, and you can imagine some kind of regulations will flow from that.

Kobielus: I'm going to be a cynic and just guess that large, Global 2000 corporations are going to be motivated more by economics than altruism when it comes to the environment. So, back to the announcement today, Nov. 2, about IBM launching an initiative to give corporate customers a way to measure and potentially monetize energy-efficiency measures in their datacenters.

I think IBM is trying to come up with a currency of sorts, a way to earn energy-efficiency certificates that can then apply some kind of economic incentive and/or metric to this issue. As we discussed earlier, the Green approach to IT might actually augment SOA. I don't think SOA leads to Green, but many of the things you do for Green will help people recognize higher value from SOA types of activities.

Gardner: Let's leave it at that. We're out of time. It's been another good discussion. Our two topics today have been the Microsoft SOA conference and the abstract relationship between Green IT and SOA. We have been joined by our great thinkers and fantastic contributors today, including Jim Kobielus, principal analyst at Current Analysis. Thanks, Jim.

Kobielus: Thank you, Dana. I enjoyed it as always.

Gardner: Neil Macehiter, principal analyst at Macehiter Ward-Dutton. Thanks, Neil.

Macehiter: Thanks, Dana. Thanks, everyone.

Gardner: And, Joe McKendrick, the independent analyst and blogger extraordinaire. Thanks, Joe.

McKendrick: Thanks, Dana. It was great to be here.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to BriefingsDirect SOA Insights Edition, Volume 27. Come back again next time. Thank you.

Listen to the podcast here.

Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media content production.

If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts, or would like to become a sponsor of this or other B2B podcasts, please feel free to contact Interarbor Solutions at 603-528-2435.

Transcript of BriefingsDirect SOA Insights Edition podcast, Vol. 27, on Microsoft SOA and Green IT. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.