Tuesday, February 05, 2008

New Ways Emerge to Improve IT Operational Performance While Heading Off Future Datacenter Reliability Problems

Transcript of BriefingsDirect podcast on IT operational performance using Integrien Alive.

Listen to podcast here. Sponsor: Integrien.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, a sponsored podcast discussion about new ways to improve IT operational performance, based on real-time analytics and the ability to effectively compare data-center performance from a normal state to something that is going to be a problem. We’re going to look at the ability to get a “heads-up” that something is about to go wrong, rather than going into firefighting mode.

Today’s complexity in IT systems is making previous error prevention approaches for operators inefficient and costly. IT staffs are expensive to retain, and are increasingly hard to find. So even when operators have a sufficient staff, a quality staff, it simply takes too long to interpret and resolve IT failures and glitches, given the complexity of distributed systems.

There is also insufficient information about what’s going on in the context of an entire systems setup, and operators are using manual processes -- in firefighting mode -- to maintain critical service levels.

IT executives are therefore seeking more automated approaches, not only to remediate problems but also to detect them earlier. These same operators don’t want to replace their systems management investments; they want to use them in a more cohesive manner, to learn more from them, and to better extract the information that these systems emit.

To help us better understand the problems and some of the new solutions and approaches to remediation and detection of IT issues, we’re joined by Steve Henning, the Vice President of Products for Integrien. Welcome to the show, Steve.

Steve Henning: Thanks a lot, Dana.

Gardner: Let’s take a look at some of the real-life issues that are affecting IT operators, drill down into them a bit, look at some of the solutions and benefits, and perhaps some examples of what these bring in terms of relief and increased savings of time and energy.

Tell me a little bit about complexity and problems. How do you view the current state of affairs in the datacenter operations field?

Henning: It’s a dichotomous situation for the vice president of IT operations at this point. On one hand, they're working at growing companies. They need to manage more things in their environment -- devices and resources. Also, given the changes and how people are deploying applications today, they are dealing with more complexity as well.

Service-oriented architecture (SOA) and virtualization increase the management problem by at least a factor of three. So you can see that this is a more complex and challenging environment to manage.

On the other side of this equation is the fact that IT operations is being told to either keep their budgets static or to reduce them. Traditionally, the way that the vice president of IT operations has been able to keep the problems from occurring in these environments has been by throwing more people at it. We now see 70-plus-percent of the IT operations budget spent on labor costs.

Just the other day, I was talking to the vice president of IT operations of a large online financial company. He told me that he had 10 people on staff just to understand the normal behavior of their systems. They are literally cutting out graphs and holding them up to the light to compare them against what they have seen in previous incarnations of the system, trying to determine whether the system’s behavior is normal.

He told me that this is just not scalable. There is no way -- given the fact that he has to scale his infrastructure by a factor of three over the next two years -- that he can possibly hire the people that he would need to support that. Even if he had the budget, he couldn’t find the people today.

So it’s a very troubling environment these days. It’s really what’s pushing people toward looking at different approaches, of taking more of a probabilistic look, measuring variables, and looking at probable outcomes -- rather than trying to do things in a deterministic way, measuring every possible variable, looking at it as quickly as possible, and hoping that problems just don’t slip by.

Gardner: It seems as if we're looking at both a quality and a quantity issue here. We've got a quantity of outputs from these different systems, many times in different formats, but what we really need to do is find that “needle in the haystack” to detect the true issue that’s going to create a failure.

Do you agree that we are dealing with both quality and quantity issues?

Henning: Absolutely. If you look at most of the companies that we talk to today, they are mired in these monitoring events. Most of the companies we talk to have multiple monitoring tools, and they're siloed. You've got the network guys using one tool. You've got the OS and hardware guys using another. The app guys and database guys have their tools, and there is no place where all of this data is analyzed holistically.

Each system emits sets of events, typically based on arbitrary hard thresholds that have been set in the environment. There’s a massive manual effort of looking at these individual events coming from these systems and trying to determine whether they are actual precursors to real problems, or just normal behavior of the system that can be ignored. It’s very difficult to get your arms around that.

Gardner: I suppose it wasn’t that long ago where you could have specialists that would oversee different specific aspects of the IT infrastructure, and they would just be responsible for maintaining that particular part. But, as you mentioned, we have SOA, virtualization, datacenter consolidation, and finding ways of reducing total costs that, in effect, accelerate the interdependencies. I suppose we need more specialization, but -- at the same time -- those specialists need to communicate with the rest of the environment, or the people running it.

Henning: If you look at the applications that are being delivered today, monitoring everything from a silo standpoint and hoping to be able to solve problems in that environment is absolutely impossible. There has to be some way for all of the data to be analyzed in a holistic fashion, understanding the normal behaviors of each of the metrics that are being collected by these monitoring systems. Once you have that normal behavior, you’re alerting only to abnormal behaviors that are the real precursors to problems. That’s where Integrien comes in.

Gardner: You mentioned that you've got reams and reams of events pouring in, and that, in many cases, people are sifting through these manually, charting them, and then comparing them in sort of a haphazard way. What sort of solutions or alternatives are there?

Henning: One of the alternatives is separating the wheat from the chaff and learning the normal behavior of the system. If you look at Integrien Alive, we use sophisticated, dynamic thresholding algorithms. We have multiple algorithms looking at the data to determine that normal behavior and then alerting only to abnormal precursors of problems.

It’s really the hard-threshold-based monitoring that’s the issue here, because hard-threshold-based monitoring does two things. One, it results in alert storms for perfectly normal behavior. Two, it masks real problem behavior that you just can't catch with hard thresholds.

For example, let’s say that at 9 p.m. an online system’s normal behavior, across its set of servers, is 10 percent CPU utilization. But let’s say that it’s now at 60 percent utilization. If you have your hard threshold set at 80 percent, you’ve got a pending problem that you have no idea about. That’s why it’s so important to have an adaptive learning mechanism for determining normal behavior and for deciding when something is important enough to raise to an operator.
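The contrast Henning draws between hard thresholds and adaptive baselines can be sketched as a toy baseline-learning routine. This is only an illustration of the general idea -- learning per-hour “normal” ranges and flagging statistical deviations -- not Integrien’s actual, proprietary algorithms; the class name and parameters are invented for the example.

```python
from collections import defaultdict
from statistics import mean, stdev

class DynamicThreshold:
    """Learn per-hour 'normal' behavior for a metric and flag abnormal
    readings, instead of relying on a single fixed hard threshold."""

    def __init__(self, sensitivity=3.0):
        self.history = defaultdict(list)   # hour of day -> observed values
        self.sensitivity = sensitivity     # std devs that count as abnormal

    def observe(self, hour, value):
        self.history[hour].append(value)

    def is_abnormal(self, hour, value):
        samples = self.history[hour]
        if len(samples) < 2:
            return False                   # not enough data to judge yet
        mu, sigma = mean(samples), stdev(samples)
        return abs(value - mu) > self.sensitivity * max(sigma, 1e-9)

# Train on two weeks of quiet 9 p.m. readings hovering around 10% CPU.
dt = DynamicThreshold()
for v in [9, 10, 11, 10, 9, 10, 11, 10, 9, 10, 11, 10, 9, 10]:
    dt.observe(21, v)

print(dt.is_abnormal(21, 60))  # True  -- well under an 80% hard threshold
print(dt.is_abnormal(21, 10))  # False -- within the learned baseline
```

A reading of 60 percent CPU sails under an 80 percent hard threshold, yet against a learned 9 p.m. baseline of roughly 10 percent it is flagged immediately.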

Gardner: When you're able to do this comparison on the basis of, "Hey, this is deviating from a pattern," rather than a binary-basis, on-off problem, what kind of benefits can people derive?

Henning: Well, you're automating this massive manual effort that I was talking about. If you look at that vice president of IT operations of the online financial company I talked about earlier, he has 10 guys who are sitting around doing nothing but analyzing this data all day.

Now, that data analysis can be completely automated with sophisticated dynamic thresholding. These 10 guys are freed up to do real problem solving, rather than just looking at these event storms, trying to figure out what’s important and what’s not, when the company is having an issue with one of their mission-critical systems.

Gardner: Do you have any examples of how effective this has been for companies, if they start to take that manpower and focus it where it's most effective? What kind of paybacks are we talking about?

Henning: We see up to a 95 percent reduction in this manual effort around setting thresholds and dealing with events. So it’s a huge reduction in time. We also see up to a 50 percent reduction in the time it takes to solve problems, because of this kind of information and the fact that we consolidate alerts based on topology, which makes it much quicker to get down to the root cause of the problem and to focus efforts there.

Gardner: You mentioned getting this “normal state,” of gathering enough information and putting it in the context of use scenarios. How do operators do that? How do they know what’s going to lead to problems by virtue of detecting baseline?

Henning: If you look at most IT environments today, the IT people will tell you that three or four minutes before a problem occurs, they will start to understand that little pattern of events that lead to the problem.

But most of the people that I speak to tell me that’s too late. By the time they identify the pattern that repeats and leads to a particular problem -- for example, a slowdown of a particular critical transaction -- it’s too late. Either the system goes down or the slowdown is such that they are losing business.

We found these abnormal behaviors are the earliest precursors to problems in the IT environment -- either slowdowns or applications actually going down. Once you've learned the normal behavior of the system, these abnormal behaviors far downstream of where the problem actually occurs are the earliest precursors to these problems. We can pick up that these problems are going to occur, sometimes an hour before the problem actually happens.

If you think about a typical IT environment, you’re talking about tens of thousands of servers and hundreds of thousands, even millions, of metrics. Correlating all that data, understanding the relationships between different metrics, and determining which ones lead up to problems is a humanly unsolvable task. That’s where this ability to “connect the dots” -- this ability to model problems when they occur -- is a really important capability.

Gardner: I suppose we’re talking about some fairly large libraries of models to compare and contrast -- something that is far beyond the scale of 5 or 10 people.

Henning: Yes, but these models are learned based on the environment, understanding the normal behaviors of all the metrics in a particular IT operation, and understanding what the key indicators of business performance are.

For example, you might say that if this transaction ever takes more than five seconds, then I know I have a problem. Or you could say that if this database metric, open cursors, goes above 1,000, I know I have a problem. Once you understand what those key indicators are, you can set them. And when you have those, you can actually capture a model of what this problem looks like when that key indicator is exceeded.

That’s the key thing: building this model, and having the analytic capability to connect the dots and understand which precursors lead up to problems, even an hour before a problem occurs. That’s one of the things that Integrien Alive can do.
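The model-capture step Henning outlines -- when a key indicator is breached, snapshot which other metrics were behaving abnormally in the lead-up window -- might look conceptually like this sketch. The function, metric names, and the simple “spiking” test are assumptions for illustration, not the product’s internals.

```python
def capture_problem_model(metric_windows, abnormal, key_indicator, limit):
    """When a key indicator exceeds its limit, record a 'fingerprint':
    the set of other metrics that were abnormal in the lead-up window.

    metric_windows: dict of metric name -> recent values (most recent last)
    abnormal: function (metric_name, values) -> bool
    """
    if metric_windows[key_indicator][-1] <= limit:
        return None  # key indicator is fine; nothing to capture
    return {m for m, vals in metric_windows.items()
            if m != key_indicator and abnormal(m, vals)}

windows = {
    "db.open_cursors":   [200, 400, 800, 1200],  # key indicator, limit 1000
    "app.heap_used_pct": [40, 45, 85, 90],       # abnormal climb
    "web.req_rate":      [100, 102, 99, 101],    # steady, normal
}
# A crude abnormality test: the latest value jumped 50% over the window start.
spiking = lambda m, vals: vals[-1] > 1.5 * vals[0]

model = capture_problem_model(windows, spiking, "db.open_cursors", 1000)
print(model)  # {'app.heap_used_pct'}
```

The captured fingerprint can then be stored in a library and compared against future behavior, which is what enables the predictive alerting discussed next.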

Gardner: What sort of benefits do we get from this deeper correlation of what’s good, what’s bad, and what’s gray and that could become bad? Are we talking about minutes or days? What sort of impact does this have on the business?

Henning: We see a couple of things. One is that it’s solving this massive data correlation issue that right now is very limited in the IT operations that we go into. There are just a few highly trained experts who have “tribal knowledge” of the application, and who know even the beginnings of what these correlations are. With a product like Integrien Alive you can solve that kind of massive data correlation issue.

The second benefit of it is that the first time a problem occurs, the capture of a model of the problem, with all the abnormal behaviors that led up to it, can often target for you the places in the applications that are performing abnormally and are likely to be the causes of the problem.

For example, you might find that a particular problem is showing abnormal behavior in the application server tier and the database tier. Now, there’s no reason to get on the phone with the network guy, the Web server guy, and other people who can’t contribute to the resolution of that problem. Targeting and understanding which metrics are behaving abnormally gets you to a much quicker mean time to identify and repair the problem. As I said, we see up to a 50 percent reduction in the time it takes to resolve problems.

The final thing is the ability to get predictive alerts, and that’s kind of the nirvana of IT operations. Once you’ve captured models of the recurring problems in the IT environment, a product like Integrien Alive can see the incoming stream of real-time data and compare that against the models in the library.

If it sees a match with a high enough probability, it can let you know ahead of time -- up to an hour ahead of time -- that you are going to have a particular problem that has occurred previously. You can also record exactly how you diagnosed the problem and what you did to solve it, so that you can solve it again quickly.
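Matching the live stream against a library of captured problem models can be approximated with a simple set-similarity score. A real product would use far more sophisticated probabilistic matching; the Jaccard similarity here is just a stand-in to show the shape of the idea, and the model names are invented.

```python
def match_models(current_abnormal, model_library, threshold=0.8):
    """Compare the live set of abnormally behaving metrics against saved
    problem models; return matches above the similarity threshold."""
    matches = []
    for name, model_metrics in model_library.items():
        union = current_abnormal | model_metrics
        if not union:
            continue
        score = len(current_abnormal & model_metrics) / len(union)
        if score >= threshold:
            matches.append((name, score))
    return sorted(matches, key=lambda m: -m[1])

library = {
    "db-crash-2007-11": {"db.open_cursors", "app.heap_used_pct", "app.gc_time"},
    "net-saturation":   {"net.retransmits", "web.latency_p99"},
}
live = {"db.open_cursors", "app.heap_used_pct", "app.gc_time"}
print(match_models(live, library))  # [('db-crash-2007-11', 1.0)]
```

A high-scoring match would carry the recorded diagnosis and remediation notes along with the alert, which is the “record exactly what you did” piece Henning mentions.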

Gardner: Then, you can share that. Now, you mentioned “tribal knowledge.” It sounds like we are taking what used to reside in wetware -- in people’s minds and experience. Instead of having to throw those people at a problem without knowing the depth of the problem, or even losing that knowledge if they walk out the door, we're saying, "Enough of that. Let’s go and instantiate this knowledge into the systems and become less dependent on individual experienced people."

Henning: The way I look at it is that we’re actually enhancing the expertise of these folks. You’re always going to need experts in there. You’re always going to need the folks who have the tribal knowledge of the application. What we are doing, though, is enabling them to do their job better, with earlier understanding of where problems are occurring, by solving this massive data-correlation issue when a problem occurs.

Even the tribal experts will tell you that just a few minutes before a problem occurs they can start to see the problem. We are offering them a solution that allows them to see this problem forming up to an hour ahead of time, notifying them of abnormal behavior and patterns of behavior that would be seemingly unrelated to them based on their current knowledge of the application.

Gardner: When you do resolve a problem and capture that and make that available for future use, that sounds more like a collaboration issue. How do we deal with so many inputs, so much information, not only on the receiving end, but on the outgoing end, after a resolution?

Henning: This is what we were talking about before. You’ve got all of the siloed sources of monitoring data and alerts, and there's currently no way to consolidate that data for holistic problem solving. So, it’s very important that any kind of solution can integrate a wide variety of monitoring tools, so that all the data can be in one place and available for this kind of collaborative problem solving.

For example, in one environment that we went into, an alert went to an application server administrator. He noticed a prediction that a database key indicator was going out of its normal range, which would have caused a crash of the database, with 85 percent probability, in 15 minutes. Armed with that information, he got the alert over to the database administrator, who was able to make some configuration changes that staved off the problem.

Being able to analyze this data holistically and being able to share the data that’s typically been in the siloed monitoring solutions allows quicker and more collaborative problem resolution. We're really talking about centralizing and automating data analysis across the silos of IT.

Gardner: It also reminds me, conceptually, of SOA, where you want to transform the information into a form that can be used generally. It sounds like you are doing that and applying it to this whole notion of IT management and remediation.

Henning: Very much so. There are seemingly unrelated things happening within an application infrastructure that can result in a problem. The fact that all the data is analyzed in a single place, holistically, through these statistical algorithms allows us to provide an interface where people can work together and collaborate. This makes the team more effective and makes it much easier for people to solve problems quickly.

Gardner: So, we standardize gathering and managing the information. We also standardize the way in which people can access it and use it, so that they are not fixing the same broken wheel over and over again at different times. It can recognize when they are going to need to do it and have it fixed ready to go. This sounds like a real big saver when it comes to labor and lowering costs for your staff, but also gets that root saving around no downtime or reduced downtime.

Henning: Right. When we work with customers, most of the IT operations folks we talk to are concerned with reducing labor costs and reducing the time to identify and resolve problems. In truth, though, the real benefit to the business is removing the downtime and application slowdowns that cause lost or reduced business.

So although we see major benefits of real-time analytic solutions in providing reduction in labor costs, we also say that it’s a very big boon to the business, in terms of keeping the applications effectively generating revenue.

Gardner: Another current trend is the ability to gather interface views, graphical views of the system. There are a lot of dashboards out there for business issues. What do we get in terms of visibility for end-to-end operations, even in a real-time or close to real-time setting from the Integrien Alive that you are describing?

Henning: Once again, it’s still a real issue when you have siloed monitoring tools. Even though a lot of companies have a manager of managers, that’s typically used by the level-one operations folks to filter through the alerts and determine who they need to be passed off to, who can actually take a look at them and resolve them. But, we find that most of the companies that we talk to don’t have any tools that allow them to be efficient in role-based problem solving.

One of the things that Integrien Alive provides is this idea of customizable role-based dashboards, this library of custom analysis widgets that allows people to slice and dice the data in whatever way is most effective for that particular individual in problem solving. We talked earlier about the holistic data analysis that was really enabling effective teamwork. When we talk about role-based dashboards for problem solving showing the database administrator exactly what they need, we are really talking about making each team member more effective.

That’s one of the benefits of the role-based dashboards. The other thing is giving visibility all the way up to the CIO and the vice president of operations who are concerned with much different views. They want it filtered in a much different way, because they are more concerned about business performance than any individual server or resource problems that might be occurring in the environment.

Gardner: What sort of views do those business folks prefer over what the outputs of some of these monitoring tools might be?

Henning: You want to look at things from a business-service perspective, how are my critical business services performing? If I have an investment banking solution, and I’ve got a couple of other mission-critical applications that are outward facing, I want to know how those are performing right now, in terms of the critical transaction performance.

I want to be able to incorporate business data as well. So, if I can see from an IT performance level that the transactions seem to be performing well, and I can also see that I’m processing a consistent number of the transactions that enable my business, I have a good view that things are going well in my operation at this point. So, it’s really a higher-level view.

I am going to be much more concerned with any kind of alerts that are affecting my entire business service. If we see an alert that’s been consolidated all the way up to the investment banking business-service level, that’s going to be something that’s very important for the VP of IT operations, because he’s got a problem now that’s actually affecting his business.

Gardner: I suppose from the IT side the more that we can show and tell to the business folks about how well we are doing the better. It makes us seem less like we are firefighters and that we're proactive and on top of things. If there are any decisions several months or years out about outsourcing, we have a nice trail, a cookie-crumb trail, if you will, of how well things are going and how costs are being managed.

Henning: That’s absolutely true. I was talking to the CIO of a large university the other day. One thing that was very frustrating for him was that he was in a meeting with the president of the university, and the president was saying that it seemed like some of the critical applications were down a lot.

This CIO was very frustrated, because he knew that wasn’t the case, but he didn’t have effective reporting tools to show that it was not the case. That was one of the things that he was very excited about, when he took a look at our product.

Gardner: We know that complexity is substantial. It’s pretty clear that that complexity is going to continue as we see organizations move toward SOA and software as a service, and hybrid issues, where a holistic business process could be supported by your systems, partner systems, or perhaps third-party systems.

I can just imagine there is going to be finger pointing when things go wrong. You’re going to want to be able to say, "Hey, not my problem, but I am ready, willing and able to help you fix it. In fact, I've got more insight into your systems than you do."

Henning: That’s absolutely the case.

Gardner: Give me a sense of where Integrien and Alive, as a product set, are going in the future, I know you can't pre-announce things, but as these new complexities in terms of permeable organizational boundaries kick in and virtualization kicks in, what might we expect in the future?

Henning: One of the things that you’re going to see from us is a comprehensive solution for the virtualized environment. Several other companies claim to have solutions in this space, but from what we have been able to see so far, the motion of virtual machines (VMs) -- moving them between different servers -- is still an issue for all of these solutions.

We’re working extremely diligently to solve the issue of how to deal with performance monitoring in a virtualized environment, where you have got the individual VMs moving all over the place, based on changes in capacity, and things like that. So, look out for that solution coming from Integrien in the coming months.

Gardner: So we're talking about instances of entire stacks, provisioning and moving dynamically among systems. That sounds like a whole other level of complexity that we are adding to an already difficult situation.

Henning: Yes, it’s a big math problem. You can also compound that with the fact that when a VM moves from one physical server to another, it might be allocated a different percentage of resources. So, when you think about this whole hard-threshold based monitoring paradigm that IT is in now, what does a hard-threshold really mean in an environment like that? It makes absolutely no sense at all.
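A small illustration of why a fixed threshold loses its meaning when VMs move: the same absolute CPU consumption maps to very different utilization percentages depending on the resource share a VM happens to hold after a migration. The function and numbers here are hypothetical.

```python
def utilization_pct(vm_cpu_mhz_used, host_mhz, allocated_share):
    """Express a VM's CPU use relative to its *current* allocation, which
    changes when the VM migrates or its resource share is resized."""
    return 100.0 * vm_cpu_mhz_used / (host_mhz * allocated_share)

# The same raw consumption (1,600 MHz) under two different placements:
print(utilization_pct(1600, 8000, 0.50))  # 40.0 -> comfortable headroom
print(utilization_pct(1600, 8000, 0.25))  # 80.0 -> at a typical hard threshold
```

An 80 percent hard threshold would fire in the second placement and stay silent in the first, even though the workload never changed -- which is why a baseline has to adapt to the current allocation rather than a fixed number.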

If you don’t have some way to understand the normal behavior, to provide context, and to quickly learn and adapt to changes in the environment, managing the virtualized environment is going to be an absolute nightmare. Based on spending some time with the folks over at VMware, and attending the VMworld show this year, you could certainly see this concern among their customers about how to deal with this complex management problem.

Gardner: The old manual wetware approaches just aren’t going to cut it in that environment?

Henning: That’s correct.

Gardner: I appreciate your candor and I look forward to seeing some of these newer solutions focused on virtualization.

We have been talking about remediation and ability to get in front of problems for IT operators using predictive and analytic algorithmic approaches. To help us understand this, we have been joined by Steve Henning, the Vice President of Products at Integrien. Thank you, Steve.

Henning: Thank you very much, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to BriefingsDirect. Thanks and come back next time.


Transcript of BriefingsDirect podcast on IT operational performance using Integrien Alive with Integrien's Steve Henning. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Thursday, January 17, 2008

Enterprises Seek Ways to Exploit Web Application Mashups and Lightweight Data Presentation Techniques

Transcript of BriefingsDirect podcast on data mashups with IBM and Kapow.

Listen to the podcast here. Sponsor: Kapow Technologies.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about the state of choice in the modern enterprise around development and deployment technologies.

These days, developers, architects, and even line-of-business managers have many choices. These include Web applications, software as a service (SaaS), service-oriented architecture (SOA), RESTful applications, mashups, pure services off the Web, and pure services from within an intranet or even the extended enterprise. We’re talking about RSS and Atom feeds, and, of course, there is traditional .NET and Java development going on.

We also see people experimenting with Ruby and a lot of use around PHP and scripting. The good news is that there are a lot of choices. The bad news is also that there are a lot of choices.

Some of these activities are taking place outside the purview of IT managers. People are being innovative and creative, which is good, but perhaps not always in the way that IT would like in terms of security and access control. These newer activities may not align with some of the larger activities that IT needs to manage -- which many times these days includes consolidation, unification, and modernization of legacy applications.

To help us weed through some of the agony and ecstasy of the choices facing application development and deployment in the enterprise, we have on the call, Rod Smith. Rod is Vice President of Internet Emerging Technologies at IBM. Welcome to the show, Rod.

Rod Smith: Thank you very much. It’s nice to be here.

Gardner: We also have Stefan Andreasen, the Founder and CTO of Kapow Technologies. Welcome to the show, Stefan.

Stefan Andreasen: Thank you.

Gardner: Let’s go first to Rod. We spoke last spring about these choices and how there are, in effect, myriad cultures that are now involved with development. In past years, development was more in a closed environment, where people were under control … white coats, raised floors, and glass rooms come to mind. But now it’s more like the Wild West. What have you been finding in the field, and do you see this as chaos or opportunity?

Smith: A little of both. In times of innovation you get some definite chaos coming through, but IT, in particular, and line of businesses see this as a big opportunity. Because of SOA and the other technologies you mentioned, information is available, and line of business is very interested in capturing new business opportunities.

Time to market is getting shorter, and getting squeezed all the time. So you’re seeing line of business and IT coming together around what they have to do to drive more innovation and move it up a couple of notches, from a business perspective.

Open standards now are very important to IT. Line of business, with mashups in particular, can use those types of services to get the information and create solutions they couldn’t do in the labs, when the propeller heads and others had to be involved five or 10 years ago.

Gardner: So we have dual or maybe multiple tracks going on. I suppose what’s missing is methodological and technical management. That’s an area where IBM has been involved for some time. Does IBM look at this as an opportunity?

Smith: A big opportunity. And you hit it on the head. The methodology here is very different from the development methodology we’ve been brought up to do. It’s much more collaborative, if you’re line of business, and it’s much more than a set of specifications.

Here is where we’re seeing people talk about building mashups. Usually they have a really good idea that comes to mind or something that they think will help with a new business opportunity.

Often the second question -- and we’ve seen a pattern with this -- is “Where is the data? How do we get to the data? Can IT open it up for us? Do line-of-business people have it in spreadsheets?” Typically, when it’s valuable to the business, they want to catalog it and put it together, so other people can share it. Finally, they do a mashup.

So methodology is one of the things we call a self-service business pattern. It starts with the idea, from a developer standpoint. "I really need to understand the business. I need to understand the time to market and the partnerships, and how information can be exposed." Then, they get down into some of the details. "I've got to do it quickly."

What we are seeing from an opportunity standpoint is that many businesses, when they see an opportunity, want a vendor to respond in 30 days or less, [and do more] within six months down the road. So that’s a challenge, and it is an opportunity. We think about tooling and middleware and services. How can we help the customer?

Gardner: Let’s go to Stefan. When you see these activities in the enterprise around mashups, SOAP, REST, HTML and XML, there’s an opportunity for bridging the chaos, but I suppose there’s also a whole new type of development around situational applications.

That is to say that, an opportunity exists to access content that hadn’t really been brought into an application development activity in the past. Can you tell us a little bit about what you’re seeing in the enterprise and how these new types of development are manifesting themselves?

Andreasen: Let me comment on the chaos thing a little bit. It’s important to understand the history here. At first, central IT worked with all their big systems. Line of business really didn’t have any access to IT or tools themselves, until recently when they got desktop tools like Excel.

This current wave is really driven by line of business getting IT in their own hands. They’ve started using it, and that’s created the chaos, but chaos is created because there is a need.

Now, with the Web 2.0 and the mashup wave, there is an acknowledgement of a big need here, as Rod also said. So it’s necessary to understand why this is happening and why it is something that’s very important.

Gardner: These end-users, power users, these line of business folks, they’ve been using whatever tools have been available to them, even if it’s an Excel spreadsheet. I suppose that gives them some productivity, but it also leaves these assets, data and content, lying around on hard drives in a fairly unmanaged state.

Can we kill two birds with one stone -- manage the chaos around the data, and also gain some interface and application development benefits?

Andreasen: The worst thing would be to shut it down, of course. The best thing that’s happening now is acknowledging that line-of-business people need to do their own thing. We need to give them the tools, environments and infrastructure so they can do it in a controlled way -- in an acceptable, secured way -- so that your laptop with all of your customer data doesn't get stolen at the airport.

When we talk about customer data, we leap back to your earlier question about data. What are line-of-business people working with? Well, they’re working with data, analyzing data, and finding intelligence in that data, drawing conclusions out of the data, or inventing new products with the data. So the center of the universe here for this IT work is really dealing with data.

Gardner: SOA is one of the things that sits in the middle between the traditional IT approaches and IT development and then these newer activities around data, access, and UIs and using Web protocols.

I wonder if you think that that’s where these things meet. Is there a way to use an enterprise service bus (ESB) for checking in and out of provisioned or governed services? Is there a way that mashups and the ERP applications meet up?

Smith: The answer is yes. Without SOA we probably wouldn't have gotten to a place where we can think about mashable content or remixable content.

What you are seeing from customers is the need to take internal information and transform it into XML or RESTful services. There’s a good match between ESB things … [and] thinking about security and other pieces of it, and then building the Rich Internet Application (RIA) type of applications.

The part you touched on before is interesting, too. And I think Stefan would agree with me. One thing we learned as we opened up this content is that it isn't just about IT managing or controlling it. It’s really a partnership now.

One thing Stefan has with Kapow that really got us talking early was the fact that for Stefan’s content they have a freshness style. We found that same thing is very important. The line of business wants to be involved when information is available and published. That’s a very different blending of responsibility than we've seen before on this.

So, thinking forward, you can imagine that while you are publishing this, you might be putting it into a catalog repository or into services. But it also has to be available for line of business now, so they can look at those assets and work with IT on when they should be available to business partners, customers and others.

Gardner: It’s interesting you mentioned the word "publish," and it’s almost as if we are interchanging the words "publishing" and "application development" in the sense that they are merging or overlapping.

Does that fit with what Kapow has been seeing, Stefan, that publishing and syndication are now a function of application development?

Andreasen: There are several sides to this question of which data you need, how to access it, how it is published, etc. One thing you are talking about is line of business publishing their data so other people can use it.

I split data into several groups. One is what I call core data, the data that is generally available to everybody in the company and probably sits in your big systems. It’s something everybody has. It’s probably something that's service-oriented or is going to be very soon.

Then there is the more specialized data that’s sitting out in line of business. There's a tendency now to publish those in standard formats like RSS, RESTful services, etc.

There is a third group, which I call intelligence data. That's hard to find, but gives you that extra insight, extra intelligence, to let you draw a conclusion which is different from -- and hopefully better than -- your competitors’.

That’s data that’s probably not accessible in any standard way, but will be accessible on the Web in a browser. This is exactly what our product does. It allows you to turn any Web-based data into standard format, so you can access what I call intelligence data in a standard fashion.
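To make that transformation concrete, here is a minimal sketch, using only Python's standard library, of turning a human-readable HTML table into structured records a mashup could consume. The page snippet, field names, and fault-line data are all hypothetical -- Kapow's actual product works quite differently, and this only illustrates the general idea of "web page in, standard data out."

```python
from html.parser import HTMLParser

# Hypothetical snippet of a human-readable web page (say, a listing of
# earthquake fault lines) that exposes no programmatic API.
PAGE = """
<table>
  <tr><td>Hayward Fault</td><td>37.67</td><td>-122.08</td></tr>
  <tr><td>San Andreas Fault</td><td>37.94</td><td>-122.59</td></tr>
</table>
"""

class RowExtractor(HTMLParser):
    """Collect the text of each <td> cell, grouped by <tr> row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_td and data.strip():
            self._row.append(data.strip())

def extract_records(html):
    parser = RowExtractor()
    parser.feed(html)
    # Each row becomes a named record: the "standard format" that
    # downstream tools can work with in an automated way.
    return [{"name": n, "lat": float(la), "lon": float(lo)}
            for n, la, lo in parser.rows]

records = extract_records(PAGE)
```

Once the data is in this shape, the "find the best property for my new factory" question becomes an ordinary programmatic query instead of a manual browsing exercise.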

Gardner: This is the type of data that had not been brought into use with applications in the past?

Andreasen: That is correct. There is a lot of information out there, both on the public Web and on the private Web, that is really meant to be human-readable. Think about something as simple as going to the U.S. Geological Survey and looking at earthquake fault lines -- there isn't any programmatic API to access this data.

This kind of data might be very important. If I am building a factory in an earthquake area, I don’t want to buy a lot that is right on the top of a fault line. So I can turn this data into a standard API, and then use that as part of my intelligence to find the best property for my new factory.

Smith: When we talk of line of business, it's not just internal information they want. It's external information, and we really are empowering these content developers now. The types of applications that people are putting together are much more like dashboards of information, both internally and externally over the Internet, that businesses use to really drive their business. Before, the access costs were high.

Now the access costs are continuing to drop very low, and people do say, "Let’s go ahead and publish this information, so it can be consumed and remixed by business partners and others,” rather than thinking about just a set of APIs at a low level, like we did in the past with Java.

Gardner: How do we bring these differing orbits into alignment? We've got people who are focused on content and the human knowledge dimension -- recognizing that more and more great information is being made available openly through the Web.

At the same time, we have this group that is API-minded. I guess we need to find a way of bringing an API to those folks who need that sort of interface to work with this data, but we also need for these people to take this data and make it available in such a way that a developer might agree with it or use it.

How does Kapow work between these constituencies and make one amenable to the other? We're looking for a way to bind together traditional IT development with some of these “mashupable” services, be it internal content or data or external services off of the Web.

I wonder what Kapow brings to the table in terms of helping these two different types of data and content to come together -- APIs versus content?

Andreasen: If you want to have automatic access to data or content, you need to be able to access it in a standard way. What is happening now with Web Oriented Architecture (WOA) is that we're focusing on a few standard formats like RESTful services and on feeds like RSS and Atom.

So first you need to be able to access your data that way. This is exactly what we do. Our customers turn data they work with in an application into these standard APIs and feeds, so they can work with them in an automated way.
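As a rough illustration of that last step, here is a minimal sketch of wrapping harvested records in an RSS 2.0 envelope, using only Python's standard library. The feed title, field names, and product entry are hypothetical, not Kapow's actual output format:

```python
import xml.etree.ElementTree as ET

def records_to_rss(title, records):
    """Wrap a list of {'title', 'link', 'description'} dicts in a
    minimal RSS 2.0 envelope so feed readers and mashup tools can
    consume them."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for rec in records:
        item = ET.SubElement(channel, "item")
        for field in ("title", "link", "description"):
            ET.SubElement(item, field).text = rec[field]
    return ET.tostring(rss, encoding="unicode")

feed = records_to_rss("Camera prices", [
    {"title": "Nikon D40", "link": "http://example.com/d40",
     "description": "Harvested price: $499"},
])
```

The point is that once data is published this way, any feed-aware tool -- a portal, a dashboard builder, a mashup editor -- can pick it up without custom integration work.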

It hadn’t been so much of a problem earlier, maybe because there wasn’t so much data, and people could basically cut and paste the data. But with the explosion of information out there, there's a realization that having the right data at the right time is getting more and more important. There is a huge need for getting access in an automated way.

How do line-of-business people work with the data? Well, they work with the data in the application interface. What if the application interface today is your browser?

Kapow allows the line-of-business people to automatically access data the way they worked with it in their Web browser.

That’s a very powerful way of accessing data, because you don't have to have an extra level of IT personnel. You don't have to first explain, "Well, this is the data I need. Go find it for me." And then, maybe you get the wrong data. Now, you are actually getting the data that you see the way you want.

Gardner: Another aspect to this is the popularity of social networking and what's known as the "wisdom of crowds" and wikis. A lot of contributions can be brought into play with this sort of gray area between content and publishing, different content feeds, and exposure and access and the traditional IT function.

Wikis have come into play, and they have quite a bit of exposure. Maybe you have a sense of how these worlds can be bridged, using some of what's been known as social networking?

Smith: Software development now is much more of a social networking activity than an engineering activity. At IBM, we have Blog and Wiki Central, where people use wikis to get their thoughts down and collectively bring an idea about.

Also at IBM, we have Innovation Jam, which we hold every year, and which brings in hundreds of thousands of people now. It used to be just IBM, but we’ve opened it up this last year to everyone, friends and family alike, to come up with ideas.

That part is great on the front end. You then can have a much better idea of what the expectations are, and what a user group wants. They're usually very motivated to stay in the loop to give you feedback as you do development.

The big part here is when it comes to doing mashups. It's the idea that you can produce something relatively quickly. With IBM’s QEDWiki, we like the idea that someone could assemble an application, wire it together in the browser, and it has the wiki characteristics. That is, it's stored on the server, it’s versioned as to enterprise characteristics, and it’s sharable.

It’s a key aspect that it has to be immediately deployable and immediately accessible by the folks that you are networking with.

That relates to what Stefan was saying and what you were asking about on how to bridge the two worlds of APIs and content. We're seeing now that as you think about the social networking side, people want the apps built into dashboards.

The more forward-thinking people in IT departments realize that the faster they can put together publishable data content, the deeper the understanding they can gain, in a very short time, about what their customers want.

They can then go back and decide the best way to open up that data. Is it through syndication feeds, XML, or programmatic API? Before, IT had to guess usage and how many folks might be touching it, and then build it once and make it scalable.

We’re doing things much more Agile-wise and building it that way, and then, as a flip, building the app that’s probably 80 percent there. Then IT can figure out how they could open up the right interfaces and content to make it available broadly.

Gardner: Stefan, could you give us some examples of user scenarios, where Kapow has been brought in and has helped mitigate some of the issues of access to content and then made it available to traditional development? Is there a way for those folks who are perhaps SOA-minded, to become a bit more open to what some people refer to as Web-Oriented Architecture?

Andreasen: One example was mentioned in The Wall Street Journal recently, in an article on mashups: Audi in Germany. They are using our product to allow line of business to repurpose existing intranets.

Let’s say that a group of people want to take what’s already there, but tweak it, combine it, and maybe expose it as a mobile application. With our tool, they can now do that in a self-service way, and then, of course, they can share that. What’s important is that they published this mini-mashup into their WebSphere portal and shared it with other people.

Some of them might just be for individual use. One important thing about a mashup is that an individual often creates it. Then it either stops there, because only that individual needs it – or it can also grow into company-wide use and eventually be taken over by central IT, as a great new way to improve performance in the entire company. So that shows one of the benefits.

Other examples have a lot to do with external data -- for example, in pricing comparisons. Let’s say I'm an online retailer and suddenly Amazon enters the market and starts taking a lot of market share, and I really don’t understand why. You can use our product to go out and harvest, let’s say, all the data from digital cameras from Amazon and from your own website.

You can quickly find out that whenever I have the lowest price, my product is out of stock -- and whenever I have a price that's too high, I don’t sell anything. Being able to constantly monitor that and optimize my prices is another example.
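A minimal sketch of that comparison logic might look like the following. The product names, prices, and stock flags are invented for illustration; in practice both sides would come from harvested listings rather than hard-coded dictionaries:

```python
# Hypothetical harvested listings: our catalog vs. a competitor's,
# keyed by product name.
ours = {
    "Nikon D40": {"price": 499.00, "in_stock": False},
    "Canon G9":  {"price": 439.00, "in_stock": True},
}
theirs = {
    "Nikon D40": {"price": 519.00, "in_stock": True},
    "Canon G9":  {"price": 415.00, "in_stock": True},
}

def pricing_report(ours, theirs):
    """Flag the two failure modes described above: we have the lowest
    price but no stock, or we are priced above the competitor."""
    report = {}
    for product, mine in ours.items():
        rival = theirs.get(product)
        if rival is None:
            continue
        if mine["price"] < rival["price"] and not mine["in_stock"]:
            report[product] = "lowest price but out of stock"
        elif mine["price"] > rival["price"]:
            report[product] = "overpriced vs. competitor"
    return report

report = pricing_report(ours, theirs)
```

Run continuously against fresh harvested data, a report like this is what turns raw web content into the "intelligence" Andreasen describes.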

Another very interesting piece of information you can get is vendor pricing. You can know your own profit margin. Maybe it’s very low on Nikon cameras. You see that eBay is offering the Nikon cameras below even your cost as the vendor. You know for sure that buyers are getting a better deal with Nikon than you can offer. I call this using data to create intelligence and improve your business.

Gardner: All this real-time, updated content and data on the Web can be brought into many aspects of what enterprises do -- business processes, purchasing, evaluation, and research.

I suppose a small amount of effort from a mashup could end up saving a significant amount of money, because you’re bringing real-time information to those people making decisions.

How about you on your side, Rod? Any examples of how these two worlds -- the peanut butter and chocolate, if you will -- come together for a little better snack?

Smith: I’ll give you a good one. It’s an interesting one we did as a technology preview with Reuters and AccuWeather. Think about this again from the business perspective, where two business folks met at a conference and chit-chatted a bit.

AccuWeather was talking about how they offer different types of services, and the Reuters CTO said, "You know, we have this commodity-shipping dashboard, and folks can watch the cargo go from one place to another. It’s odd that we don’t have any weather information in there.” And the question came up very quickly: "I wonder how hard it would be to mash in some weather information."

We took one of their folks, one of mine, and the person from AccuWeather. They sat down over about three or four hours, figured out the scenario that Reuters was interested in and where the data came from, and they put it together. It took them about two weeks, but altogether 17 hours -- and that’s over a beer.

So it was chocolate and nuts and beer. I was in pretty good shape at that point. The interesting thing came after that. When we showed it to Reuters, they were very thrilled with the idea that you have that remixability of content. They said that weather probably would be interesting, but piracy is a lot more interesting. "And, by the way" -- and this is from the line of business person -- "I know where to get that information."

Gardner: Now when you say "piracy," you mean the high seas and the Jolly Roger flying up on the mast -- that kind of thing?

Smith: That’s it. I didn’t even know it existed anymore. In 2006, there were 6,000 piracy events.

Gardner: Hijackings at sea?

Smith: Yes.

Gardner: Wow!

Smith: I had no idea. It turned out that the information was a syndication feed. So we pulled it in and could put it on a map, so you could look at the different events.

It took about two hours, but that’s that kind of dynamic now. The line-of-business person says, "Boy, if that only took you that much time, then I have a lot of ideas, which I’ve really not talked about before. I always knew that if I mentioned one more feature or function, IT would tell me, it takes six more months to do."
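The core of a two-hour job like that can be sketched in a few lines: parse the syndication feed and pull out coordinates ready to drop onto a map. The feed below is a fabricated stand-in (the real piracy feed's schema isn't given here); the `geo:lat`/`geo:long` tags follow the common W3C geo-namespace convention:

```python
import xml.etree.ElementTree as ET

# Hypothetical syndication feed of piracy incidents.
FEED = """<rss version="2.0"
  xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
<channel>
  <item><title>Attempted boarding</title>
    <geo:lat>6.30</geo:lat><geo:long>3.40</geo:long></item>
  <item><title>Hijacking</title>
    <geo:lat>1.26</geo:lat><geo:long>103.82</geo:long></item>
</channel>
</rss>"""

GEO = "{http://www.w3.org/2003/01/geo/wgs84_pos#}"

def feed_to_markers(xml_text):
    """Extract (title, lat, lon) tuples ready to plot on a map."""
    root = ET.fromstring(xml_text)
    markers = []
    for item in root.iter("item"):
        markers.append((
            item.findtext("title"),
            float(item.findtext(GEO + "lat")),
            float(item.findtext(GEO + "long")),
        ))
    return markers

markers = feed_to_markers(FEED)
```

Each tuple then becomes one pin on the dashboard map, which is essentially all the "mash in" step amounts to once the feed exists.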

We've seen a huge flip now. Work is commensurate with some results that come quickly. Now we will see more collaboration coming from IT on information and partnerships.

Gardner: This networking-collaboration, or social interaction, is really what’s crafting the new level of requirements. Instead of getting in line behind 18 six-month projects, people who are perhaps on the periphery of IT can devote 12 to 20 hours.

They're still under the auspices of what IT condones, and they make these mashups happen. So it's the users close to the issues -- close to where the creativity can begin -- who create a requirement and bind these two worlds together.

Smith: That’s correct, and what is interesting about it is, if you think about what I just described -- where we mashed in some data with AccuWeather -- if that had been an old SOA project of nine or 18 months, that would have been a significant investment for us, and would have been hard to justify.

Now, if that takes a couple of weeks and hours to do -- even if it fails or doesn’t hit the right spot -- it was a great tool for learning what the other requirements were, and other things that we try as a business.

That’s what a lot of this Web 2.0 and mashups are about -- new avenues for communication, where you can be engaged and you can look at information and how you can put things together. And it has the right costs associated with it -- inexpensive.

If I were going to sum up a lot of Web 2.0 and mashups, the magnitude of drop in “customization cost” is phenomenal.

Gardner: And that spells high return on value, right?

Smith: That’s right.

Gardner: How do you see this panning out in the future? Let’s look in our crystal ball. How do you see this ability of taking intelligence, as you call it, around the content, and then the line-of-business people coming in and making decisions about requirements, and how they want to tune or see the freshness of the content?

What’s going to happen in two or three years, now that we are bringing these things together?

Andreasen: There will be a lot more of what Rod just described. What Rod just mentioned is an early move, and a lot of companies aren't even thinking along these lines yet. Over the next one or two years, more people will realize the opportunity and the possibility here, and start doing it more. Eventually, it’s going to explode.

People will realize that getting the right data and the right content at the right time, and using that to create more intelligence is one thing. The other thing they’ll realize is that by networking with peers and colleagues, they'll get ideas and references to new data. All of these aspects -- the social aspects, the data aspect and the mashup aspect -- will be much more realized. I think it’s going to explode in usage.

Gardner: Any last thoughts, Rod, from where you see these things going?

Smith: Well, as we see in other technologies moving through from an SOA perspective, this is a great deal about cultural change within companies, and the technology barriers are coming down dramatically.

You don’t have to be a Java expert or a C# expert. I'm scary enough to be able to put together or find solutions for my own needs. It’s creating a way that line-of-business people are empowered and they can see business results quickly.

That also helps IT, because if the line of business is happy, then IT can justify the necessary middleware. That’s a fundamental shift. It's no longer an IT world, where they can promise a solution to the line of business 12 to 18 months down the road.

It’s much more of, "Show me something quickly. When I’ve got the results in my hand -- the dashboard -- then you can explain what I need to do for IT investments and other things."

It’s more collaboration at that point, and makes a lot of sense on governance, security, and other things. I can see the value of my app, and I can actually start using that to bring value to my company.

Gardner: I suppose another important aspect culturally is that part of SOA’s value is around reuse. These mashups and using this content in association with other different activities, in a sense promotes the notion of reuse.

You're thinking about, "How can I reuse this mashup? How can I extend this content, either off the Web or internally, into new activities?" That, in a sense, greases the skids toward more SOA, which I think is probably where IT is going to be heading anyway.

Smith: Well, what’s fun about this, and I think Stefan will agree, is that when I go to a customer, I don’t take PowerPoint charts anymore. I look on their website and I see if they have some syndication feeds or some REST interfaces or something.

Then I look around and I see if I can create a mashup of their material with other material that hadn’t been built with before. That’s compelling.

People look and they start to get excited because, as you just said, they see business patterns in that. "If you could do that, could you grab this other information from so-and-so?"

It’s almost like a jam session at that point, where people come up with ideas. That’s where we will see more of these examples. Actually, a lot of our stuff is on YouTube, where we had a retail store that wanted to see their stores on Google Maps and then see the weather, because weather is an important factor in terms of their businesses.

In fact, it’s one of the most important factors. What we didn’t realize is that this very simple pattern -- from a technology standpoint it didn’t take much -- held up over and over again. If it wasn’t a store, it was a banking location. If it wasn’t banking locations, it was ships. There were combinations in here that you could talk to your businessperson about.

Then you could say to the technologist or a developer, "What do I have to do to help them achieve that?" They don’t have to learn XML, Web objects, or anything else, because you have these SOA interfaces. It helps IT expand that whole nature of SOA into their enterprise.

Andreasen: One thing that's going to happen is that line-of-business people are getting a lot of great ideas. If I am working with business problems, I constantly get ideas about how to solve things. Usually, I just brush it away and say, "Well, it will be cool to have this, but it’s impossible."

They just don’t understand that the time from idea to implementation is dramatically going to go down. When they start realizing this, there is hidden potential out on the edge of the business that will now be cut loose and create a lot of value. It’s going to be extremely interesting to see.

Smith: One of the insights we have from customers is that mashups and this type of technology help them to visualize their SOA investments. You can’t see middleware. Your IT shop tells you what’s good, they tell you they get flexibility, but they want to be shown results -- and mashups help do that.

The second part is people say it completes the "last mile" for SOA. It starts to make a lot of sense for your IT shop to be able to show how the middleware can be used in ways it wasn’t necessarily planned for.

The big comment we hear is, "I want my content to be mashable or re-mixable." We figured out that it’s very much a SOA value. They want things to be used in ways they weren't planned for originally. Show me that aggressive new business opportunity, and you make me a very happy person.

Andreasen: Probably one thing we will see in companies is some resistance from the technologists, from central IT, because they are afraid they will lose control. They are afraid of the security issues, etc. But it will probably be like what we've seen with company wikis.

Wikis came in through the back door in the lines of business, and eventually those companies bought a company-wide wiki. I think we'll see the same thing with mashups. It will start out in line of business, and eventually the whole company understands, "Well, we have to have infrastructure that solves this problem in a controlled way."

Some companies have very strict policy today. They don’t even allow their line-of-business pros to write macros in Excel. Those companies are probably the ones that will be the last ones discovering the huge potential in mashups.

I really hope they also start opening their eyes to the fact that there are other roles for IT, beyond just the big, central systems that run the business.

Gardner: Well, great -- thanks very much for your insights. This has really helped me understand better how these things relate and really what the payoff is. It sounds compelling from the examples that you provided.

To help us understand how enterprises are using Web applications, mashups, and lightweight data presentation, we’ve been chatting today with Rod Smith, Vice President of Internet Emerging Technologies at IBM. I really appreciate your time, Rod.

Smith: Thank you.

Gardner: And Stefan Andreasen, the Founder and CTO of Kapow Technologies. Thanks for joining, Stefan.

Andreasen: It’s been a pleasure, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions, and you've been listening to a BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast here. Sponsor: Kapow Technologies.

Transcript of BriefingsDirect podcast on data mashups with IBM and Kapow. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Wednesday, December 19, 2007

Holiday Peak Season Hits for Retailers Alibris and QVC -- A Logistics and Shipping Carol

Transcript of BriefingsDirect podcast on peak season shipping efficiencies and UPS retail solutions with Alibris and QVC.

Listen to the podcast here. Sponsor: UPS.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions and you're listening to BriefingsDirect.

Today, a sponsored podcast discussion about the peak holiday season for retail shopping -- online and via television -- and the impact that this large bump in the road has logistically and technically for some major retailers.

We’re going to discuss how Alibris, an online media and bookseller, as well as QVC, a global multimedia shopping network, handles this peak demand issue. The peak is culminating for such shippers as UPS this week, right around Dec. 19, 2007.

We’re going to talk about how the end-user in this era of higher expectations is now accustomed to making a phone call or going online to tap in a few keystrokes, and then -- like Santa himself -- having a package show up within a day or two. It's instant gratification, if you will, from the logistics point-of-view.

Helping us understand how this modern miracle can be accomplished at such high scale and with such a huge amount of additional capacity required during the November and December shopping period, we’re joined by two guests. We’re going to be talking with Mark Nason, vice president of operations at Alibris, and also Andy Quay, vice president of outbound transportation at QVC. I want to welcome you both to the show.

Mark Nason: Thank you, Dana.

Gardner: Tell us a little bit about what’s different now for Alibris, given the peak season demands, over just a few years ago. Have the expectations of the end-user really evolved, and how do you maintain that sort of instant gratification despite the level of complexity required?

Nason: What we strive for is a consistent customer experience. Through the online order process, shoppers have come to expect a routine that is reliable, accurate, timely, and customer-centric. For us to do that internally it means that we prepare for this season throughout the year. The same challenges that we have are just intensified during this holiday time-period.

Gardner: For those who might not be familiar, tell us a little about Alibris. You sell books, used books, out-of-print books, rare media and other media -- and not just directly, but through an online network of independent booksellers and retailers. Tell us more about how that works.

Nason: Alibris has books you thought you would never find. These are books, music, movies, things in the secondary market with much more variety, and that aren’t necessarily found in your local new bookseller or local media store.

We aggregate -- through the use of technology -- the selection of thousands of sellers worldwide. That allows sellers to list things and standardize what they have in their store through the use of a central catalogue, and allows customers to find what they're looking for when it comes to a book or title on some subject that isn’t readily available through their local new books store or media seller.

Gardner: Now, this is a very substantial undertaking. We're talking about something on the order of 70 million books from a network of some 10,000 booksellers in 65 or more countries. Is that right?

Nason: Roughly, that’s correct. Going in and out of the network at any given time, we've got thousands of sellers with literally millions of book and other media titles. These need to be updated, not only when they are sold or added, but also when they are priced. Prices are constantly changing. It’s a very dynamic market.

Gardner: What is the difference in terms of the volume that you manage from your slowest time of the year compared to this peak holiday period, from mid-November through December?

Nason: It’s roughly 100 percent.

Gardner: Wow!

Nason: In this industry there are actually two peak time periods. We experience this during the back-to-school season that occurs both in January and the latter-half of August and into September.

Gardner: So at the end of the calendar year you deal with the holidays, but also for those college students who are entering into their second semester?

Nason: Exactly. Our peak season associated with the holidays in December extends well into January and even the first week of February.

Gardner: Given this network and the scale and volume and the number of different players, how do you manage a consistent response to your customers, even with a 100 percent increase at the peak season?

Nason: Well, you hit on the term we use a lot -- and that is "managing" the complexity of the arrangement. We have to be sure there is bandwidth available. It’s not just staffing and workstations per se. The technology behind it has to handle the workload on the website, and through to our service partners, which we call our B2B partners. Their volume increases as well.

So all the file sizes, if you will, during the transfer processes are larger, and there is just more for everybody to do. That bandwidth has to be available, and it has to be fully functional at the smaller size, in order for it to function in its larger form.

Gardner: I assume this isn’t something you can do entirely on your own, that you depend on partners, some of those B2B folks you mentioned. Tell us a little bit about some of the major ones, and how they help you ramp up.

Nason: In the area of fulfillment, we rely heavily on our third-party logistics partners, which include carriers. At our distribution centers, typically we lease space, equipment, and the labor required to keep up with the volume.

Then with our B2B partners -- those are the folks that buy from us on a wholesale or distribution basis -- we work out with them ahead of time what their volume estimates might be and what their demands on us would be. Then we work on scheduling when those files might come through, so we can be proactive in fulfilling those orders.

Gardner: When it comes to the actual delivery of the package, tell us how that works and how you manage that complexity and/or scale.

Nason: Well, we have a benefit in that we are in locations that have scalable capacity available from the carriers. That includes lift capacity at the airport, trucking capacity for the highway, and, of course, railheads. These are all issues we are sensitive to, when it comes to informing our carriers and other suppliers that we rely on, by giving them estimates of what we expect our volume to be. It gives them the lead time they need to have capacity there for us.

Gardner: I suppose communication is essential. Is there a higher level of integration handoff between your systems and their systems? Is this entering a more automated level?

Nason: It is, year-round. For peak season it doesn’t necessarily change in that form. The process remains. However, we may have multiple pick-ups scheduled throughout the day from our primary carriers, and/or we arrange special holiday calendar scheduling with those carriers for pick-up, perhaps on a Saturday, or twice on Mondays. If they are sensitive to weather or traffic delays, for example, we know the terminals they need to go through.

Gardner: How about returns? Is that something that you work with these carriers on as well? Or is that something you handle separately?

Nason: Returns are a fundamental part of our business. In fact, we do our best to give the customer the confidence of knowing that by purchasing in the secondary market, the transaction is indemnified, and returns are a definite part of our business on a day-to-day basis.

Gardner: What can we expect in the future? Obviously this volume continues, the expectations rise, and people are doing more types of things online. I suppose college students have been brought up with this, rather than it being something they have learned. It’s something that has always been there.

Do you see any prospects in the future for a higher level of technology or collaboration need? How can we scale even further?

Nason: Improvements in technology constantly challenge the process. Managing the added complexity is what you weigh against the chance to streamline even further what we have available -- in particular, optimizing intermodal transport. For example, with fuel costs skyrocketing, and the cost of everyone's time going up, we use technology to look for opportunities on back-haul lanes, or in getting partial loads filled before they move, without sacrificing the service interval.

These are the kinds of things that technology allows when it's managed properly. Of course, another layer of technology has to be considered from the complexity standpoint before you can be successful with it.

Gardner: Is there anything in the future you would like to see from such carriers as UPS, as they try to become your top partners on all of this?

Nason: Integration is the key, and by that I mean the features of service that they provide. It’s not simply transportation; it’s the trackability, and it’s scaling -- both on the volume side and in allowing us to give the customer information about the order, when it will be there, or any exceptions. They're an extension of Alibris in terms of what the customer sees for the end-to-end transaction.

Gardner: Fine, thanks. Now we’re going to talk with Andy Quay, the vice president of outbound transportation at QVC.

QVC has been having a very busy holiday peak season this year. And QVC, of course, has long been an illustrious pioneer in retail, both through television and cable, as well as online.

Welcome, Andy. Tell us a little bit about QVC and your story. How long have you been there?

Andy Quay: Well, I am celebrating my 21st anniversary this December. So I can say I have been through every peak season.

Although peak season 20-some years ago was nothing compared to what we are dealing with now. This has been an evolutionary process as our business has grown and become accepted by consumers across the country. More recently we’ve also been able to develop our website, which really augments our live television shows.

Gardner: Give us a sense of the numbers here. After 21 years this is quite a different ball game than when you started. What sort of volumes and what sort of records, if any, are we dealing with this year?

Quay: Well, I can tell you that in our first year in business, in December, 1986 -- and I still have the actual report, believe it or not -- we shipped 14,600 some-odd packages. We are currently shipping probably 350,000 to 450,000 packages a day at this point.

We've come a long way. We actually set a record this year by taking more than 870,000 orders in a 24-hour period on Nov. 11. This led to our typical busy season through the Thanksgiving holiday to the December Christmas season. We'll be shipping right up to Friday, Dec. 21 for delivery on Christmas.

Gardner: At QVC you sell a tremendous diversity of goods. Many of them you procure yourselves, handling the supply chain directly, thereby cutting costs and offering quicker turnaround processing.

Tell us a little about the technology that goes into that, and perhaps also a little bit about what the expectations are now. People are used to clicking a button on their keyboard or making a quick phone call, and then ... wow, a day or two later, the package arrives. Their expectations are pretty high.

Quay: That’s an excellent point. We’ve been seeing customer expectations get higher every year. More people are becoming familiar with this form of ordering, whether through the web or over the telephone.

I’ll also touch on the technology very briefly. We use an automated ordering system with voice response units that enable my wife, for example, to place an order in about 35 seconds. That technology is what enables us to handle such high volumes -- it’s what allowed us to take some 870,000 orders in a day.

The planning for this allows the supply chain to be very quick. We are live television broadcasts. We literally script the show 24 hours in advance, so we can be very opportunistic. If we have a hot product, we can get it on the air very quickly without having to worry about supplying 300 brick-and-mortar stores. Our turnaround time can be blindingly quick, depending upon how fast we can get the inventory into one of our distribution centers.

We currently have five distribution centers, and they are all along the East Coast of the U.S., and they are predominantly commodity driven. For example, we have specific commodities such as jewelry in one facility, and we have apparel and accessories as categories of goods in another facility. That lends itself to a challenge when people are ordering multiple items across commodities. We end up having to ship them separately. That’s a dilemma we have been struggling with as customers do more multi-category orders.

As I mentioned, the scripting of the SKUs for the broadcast is typically 24 hours prior, with the exception of Today's Special Value (TSV) show and other specific shows. We spend a great deal of time forecasting for the phone centers and the distribution carriers to ensure that we can take the orders in volume and ship them within 48 hours.

We are constantly focused on our cycle-time and in trying to turn those orders around and get them out the door as quickly as possible. To support this effort we probably have one of the largest "zone-jumping" operations in the country.

Gardner: And what does "zone-jumping" mean?

Quay: Zone jumping allows me to contract with truckload carriers to deliver our packages into the UPS network. We go to 14 different hubs across the country, in many cases using team drivers. This enables us to speed the delivery to the customer, and we’re constantly focused on the customer.

Gardner: And this must require quite a bit of integration, or at least interoperability in communications between your systems and UPS’s systems?

Quay: Absolutely, and we carefully plan leading up to the peak season we're in now. We literally begin planning this in June for what takes place during the holidays -- right up to Christmas Day.

We work very closely with UPS and their network planners, both ground and air, to ensure cost-efficient delivery to the customer. We actually sort packages for air shipments, during critical business periods, to optimize the UPS network.

Gardner: It really sounds like a just-in-time supply chain for retail.

Quay: It's as close as you can get it. As I sometimes say, it's "just-out-of-time"! We do certainly try for a quick turnaround.

Coming back to what you said earlier, as far as the competition goes it is getting more intense. The customer expectations are getting higher and higher. And, of course, we are trying to stay ahead of the curve.

Gardner: What's the difference between your peak season now and the more regular baseline volume of business? How much increase do you have to deal with during this period, between late November and mid- to late December?

Quay: Well, it ramps up considerably. We can go from 150,000 to 200,000 orders a day to literally over 400,000 to 500,000 orders a day.

Gardner: So double, maybe triple, the volume?

Quay: Right. The other challenge I mentioned, the commodity-basis distribution that we operate on -- along with the volatility of our orders -- this all tends to focus on a single distribution center. We spend an inordinate amount of time trying to forecast volume, both for staffing and also planning with our carriers like UPS.

We want to know which buys are going to be shipping, at what distribution center, on what day. And that only compresses even more around the holiday period. We have specific cutoff times that the distribution center operations must hit in order to meet the customers' delivery date. We work very closely on when we dispatch trucks ... all of this leading up to our holiday cutoff sequence this week.

We try to maximize ground service versus the more expensive airfreight. I think we have done a very good job at penetrating UPS’s network to maximize ground delivery, all in an effort to keep the shipping and handling cost to the customers as low as possible.

Gardner: How about the future? Is this trend of that past 21 years sustainable? How far can we go?

Quay: I believe it is sustainable. Our web business is booming, with very high growth every year, and that really augments the television broadcast. Honestly, we have a fair amount of audience penetration, and we can still gain more.

Our cable broadcast reaches 90 million-plus homes that actually receive our signal, but a relatively small portion of them purchase. So that’s my point. We have a long way to go to further penetrate and earn more customers. We have to get people to try us.

Gardner: And, of course, people are now also finding goods via Web search. For example, when they go to search for a piece of apparel, or a retail item, or some kind or a gift -- they might just go to, say, Google or Yahoo! or MSN, and type something in and end up on your web site. That gives you a whole new level of potential volume.

Quay: Well, it does, and we also make the website very well known. I am looking at our television show right now, and we have our www.qvc.com site advertised right on it. That provides an extended search capability. People are trying to do more shopping on the web, in addition to watching the television.

Gardner: We have synergies on the distribution side, synergies in customer acquisition, and synergies in using information and engaging with partners. And so the technology is really in the middle of it all. And you also expect a tremendous amount of growth still to come.

Quay: Yes, absolutely. And it’s amazing, the synergies among the different functions within QVC and how we work together internally. That goes from our merchandising to where we are sourcing product.

You mentioned supply chains, and the visibility of getting into the distribution center. Our merchants and programmers watch that like a hawk so they can script new items on the air. We have pre-scripted hours that we’re definitely looking to get certain products on.

The planning for the television broadcast is something that drives the back end of the supply chain. The coordination with our distribution centers -- as far as getting the operation forecast, staffed and fulfilled through shipping to our customers -- is outstanding.

Gardner: Well, it’s very impressive, given what you’ve done and all of these different plates that you need to keep spinning in the air -- while also keeping them coordinated. I really appreciate the daunting task, and that you have been able to reach this high level of efficiency.

Quay: Oh, we are not perfect yet. We are still working very hard to improve our service. It never slows down.

Gardner: Great. Thanks very much for your input. I have learned a bit more about this whole peak season, what really goes on behind the scenes at both QVC and Alibris. It seems like quite an accomplishment what you all are able to do at both organizations.

Nason: Well, thank you, Dana. Thanks for taking the time to hear about the Alibris story.

Gardner: Sure. This is Dana Gardner, principal analyst at Interarbor Solutions. We have been talking with Mark Nason, the vice president of operations at Alibris, about managing the peak season demand, and the logistics and technology required for a seamless customer experience.

We’ve also been joined by Andy Quay, vice president of outbound transportation, at the QVC shopping network.

Thanks to our listeners for joining on this BriefingsDirect sponsored podcast. Come back and listen again next time.

Listen to the podcast here. Sponsor: UPS.

Transcript of BriefingsDirect podcast on peak season shipping efficiencies and UPS retail solutions. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.