Edited transcript of weekly BriefingsDirect[TM] SOA Insights Edition podcast, recorded
Listen to the podcast here. If you'd like to learn more about BriefingsDirect B2B informational podcasts, or to become a sponsor of this or other B2B podcasts, contact Interarbor Solutions at 603-528-2435.
Dana Gardner: Hello, and welcome to the latest BriefingsDirect SOA Insights Edition, Volume 21, a weekly discussion and dissection of Service-Oriented Architecture (SOA)-related news and events with a panel of industry analysts and guests.
Tony Baer: Hey, Dana, how are you doing?
Jim Kobielus: Good morning, one and all.
Brad Shimmin: Thanks for having me back, Dana.
Todd Biske: Thanks, Dana. Glad to be here.
Jim Ricotta: Glad to be here.
Ricotta: That’s about right, Dana. We’ve been part of IBM for about 18 months. We were acquired toward the end of 2005. Before that, I was the CEO of DataPower for the previous three years starting in 2003. Prior to that, I ran the content networking division of Cisco Systems. So, I went from Layer 4 through Layer 7 of networking to this middleware appliance concept, and now I find myself on the other end of the fence in the world’s biggest middleware business, which is IBM.
Ricotta: IBM acquired DataPower for the current products, but really more for the potential. IBM sees a lot of potential to take appropriate functions, "appliance-ize" them, and deliver a lot more value to clients that way.
I know we're going to talk about this in the discussion, but the basic concept of an appliance is to allow customers to get their projects going more quickly, experience lower total cost of ownership (TCO), etc. My role is the general manager and VP of appliances, not just WebSphere DataPower SOA appliances. We have a broader remit and we are looking at a number of different appliance efforts for different parts of the IBM product set.
Ricotta: One of the reasons that SOA has been very fertile ground for appliances is the standards -- the idea of standards and the idea of a layered architectural approach. Thinking of my background, if you look at networking products, what really made routers and other types of networking such big horizontal businesses was that there were standards. The first routers were software products that ran on Unix boxes.
But as you got standard protocols and the ISO stack took hold, it became possible to build a device that you didn’t have to program or patch. You just turned it on, configured it, it did its function, and that allowed that business to really grow.
SOA has its own version of an ISO stack with the WS-* standards and their layers, from things like BPEL all the way down to XML and the basics. That's what enabled this approach of putting together a device that supports a bunch of these standards and can fit right into anybody's SOA architecture, no matter what they are doing with SOA.
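That layering shows up concretely in any standards-based SOA message. The fragment below is an illustrative sketch only -- a minimal SOAP message in which each nested layer follows a published standard, which is what lets a device process it through configuration rather than custom code. The endpoint URLs and payload element names here are hypothetical:

```xml
<!-- Illustrative sketch: the standard layers an appliance can act on without
     custom programming. Endpoint URLs and payload names are hypothetical. -->
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <soap:Header>
    <!-- WS-Addressing layer: routing metadata a gateway device can act on -->
    <wsa:To>http://example.com/services/OrderService</wsa:To>
    <wsa:Action>http://example.com/services/OrderService/submit</wsa:Action>
  </soap:Header>
  <soap:Body>
    <!-- Plain XML payload at the bottom of the stack -->
    <submitOrder xmlns="http://example.com/orders">
      <orderId>12345</orderId>
    </submitOrder>
  </soap:Body>
</soap:Envelope>
```

Because every layer is standardized, a device can parse, validate, route, or secure the message for any vendor's SOA deployment -- the same property that made routers a horizontal business.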
Ricotta: At IBM, we see the ESB as a key part of any SOA architecture and deployment. If you do it properly -- and we can talk later about what it means to do an appliance well -- you tend to get a high-performance solution. You've done optimization. You've pruned back all the potential functions.
So, the ones that you have, you tend to have good performance from, as well as the other benefits I pointed to, easy deployment and low TCO. So, given that ESB is the core of SOA, in many ways having an appliance alternative is important.
Kobielus: Okay. Thank you, Jim, and thank you, Dana. The notion of appliances in the industry has been expanded and stretched almost to the breaking point over the last few years.
I agree with Jim on what he's saying in terms of some of the core features of any so-called appliance -- quick deployment, low TCO, a basic function-limited component of some sort that is fairly easy to slot into your existing architecture and be deployed because it incorporates open standards and all that. But, the notion of an appliance comes out of the hardware world.
That’s no problem for IBM/DataPower, because from the get-go your appliances have been hardware based -- circuit boards and other devices that could be merged into racks and so forth. In recent years, the term "appliances" has been stretched to the point where now there is something called a "software appliance," or a concept of a software appliance, that many vendors are starting to tout in their products -- and not just individual vendors, but in collaborations.
In fact, just a couple of weeks ago, I ran across a couple of additional new mentions of software appliances, as when Sybase and Red Hat announced that they're working together on a so-called software appliance that's just a bundling and integration of two software products: Sybase's database and business intelligence (BI) products and the Red Hat Linux operating system. Ingres announced about five months ago that it has a software-appliance product family called Icebreaker.
Some BI vendors, like JasperSoft, have been saying, “Hey, we're going to integrate our product with that so-called software appliance and voila! Here's something that you can install quickly at low TCO, etc.”
What I'm getting at is that what they are now calling a software appliance is no different from what has traditionally been called simply a solution, or a software package, that integrates two or more disparate components into a single package with a single install.
I'm trying in my small way to beat the drum that the industry needs to scale the definition of an appliance back to its traditional scope. It's a hardware-centric, performance-oriented component, because, at some point, if everything is a software appliance, then the very term "appliance" loses its meaning.
Ricotta: We've got to be careful, Dana. When I talked about routers, what I meant to say was that they were software products and then they became appliances. When they became appliances, they ceased to be software plus hardware. They were one thing. We see that in our industry all the time. It’s good to be at the beginning of a trend, but then, if your trend becomes too popular, everyone wants to jump on the bandwagon and the message can get diluted.
In fact, some of you who we talked to years ago, when we were DataPower, might recall that, for a while, we stopped using "appliance." We started using the term "network device," because everyone saw what we were doing and jumped in. Even though all they had was a Dell server with preinstalled Linux and their app on a CD, they would put a badge on the front and say it was an appliance.
I agree. You've got to be careful, because, again, there's usually a performance value, although not always. Think about your TiVo or your iPod. That's not a high-performance value proposition, but you always have to have a consumability value proposition and a low cost-of-ownership value proposition.
Our customers say, “Geez. We could do what your box does with software running on a server, but the operations folks tell us it would be two times or four times more expensive to maintain, because we have to patch all the different things that are on there. It’s not the same everywhere in the world in our infrastructure. Whereas with your box, we configure it; we load a firmware image, and it’s always the same wherever it exists.” Again, from my experience, that’s the way people treat routers.
So, our view is an appliance is three things that the customer buys at the same time: They buy hardware, software, and support, and it’s all together. That’s really what we think is the core value proposition. It’s cool to make a VMware image with your stuff that someone can easily deploy, but that’s something different. That's a solution, an application, a bundle, or something.
Kobielus: I think the three core criteria for an appliance should be these. First, it's tangible -- something that you can actually throw against the wall if it screws up. Next, it's simple. Now, Dana, a "warehousing appliance" is not an appliance. It's like saying that my Toyota Camry is an appliance. It's an assemblage of many components, each of which can individually screw up. Third, it should be pain-free -- no setup, and no administration or very little.
Ricotta: No, we don’t. Again, you have to put that on a server somewhere and it doesn’t have the properties that an appliance has.
Ricotta: Yeah. I read a good article about the history of the networking business, and it talked about this transition I just described, where routing software moved into these boxes and then became very, very popular. The article noted that some of the early networking companies -- Cisco, Nortel, and others -- found that if you took software, locked it down, and put it in a box that had a fan and got warm, people had an affinity for it. IT people have an affinity for things that you plug in, that have a fan, get warm, and do something useful.
Biske: No. Actually, I've got a lot of background in working with appliances. When I was an enterprise architect at a Fortune company, we had this natural convergence that was always occurring between the group responsible for our middleware, or software infrastructure, and the network engineering team. You can look at something like an HTTP proxy: you've got Apache as a software-based solution, but there is also a whole variety of appliances that can do the same thing.
So, there is always this natural tension of smart network devices versus some of the software products that were involved. The key thing for me that hasn’t been mentioned yet is that it does have to be more than just commodity hardware with some preconfigured software put on it.
Marketers at companies leveraging VMware images and other preconfigured software are looking for a term for this. "Appliance" does fit, because it gives you the right conceptual model.
A manager I worked for had the term "Dial-tone Infrastructure." You want to plug it in, pick it up, and it works. That’s the model that everybody is trying to get to with their solutions. But, when you're dealing with an appliance, you have to have that level of integration between the hardware and the software, so that you're getting the absolute best you can out of the underlying physical infrastructure that you have it on.
Any software-based approach on commodity hardware is not going to be optimized to the extent that it could be. You look at where you can leverage hardware appropriately and tune this thing to get every last ounce of performance out of it that you can.
Biske: Absolutely. You always have to look at where you want to leverage it. Another example where the technology could be applied would be in the use of blade servers. The biggest knock that I see from software guys on appliances is that it’s this gateway model. You’ve got to figure out the appropriate choke point at which to have it. If you adopt a blade server architecture, now you’ve got this backplane that's the perfect gateway for a lot of these hardware-based capabilities.
The ability to leverage some of these appliance technologies and hardware-optimized solutions in a blade center approach has a lot of potential as well. Then, you’ve naturally got that choke point, and you don’t have to figure out, "Well, because I’ve got datacenters all over the place, I really need hundreds of these appliances, rather than just two or three, because of how I’ve designed my middleware distribution."
Ricotta: That’s a great point, Todd. I'm not here to introduce products on this call, lest I run afoul of all of IBM’s attorneys, but we are looking at different form factors, like blades, as a good way to expand the appliance portfolio.
Shimmin: Absolutely. When I look at this, I see two camps. You've got the hardware manufacturers and then the software manufacturers in the SOA space, both seeing the benefits we've all been talking about thus far in terms of TCO, ease of use, and simplicity. Back to what Todd was saying, the key differentiator we've been talking about is performance: the speed at which these things run, and their abilities based on that.
When you look historically at appliances like SSL accelerators, the reason they're not sitting on servers today is because servers can't keep up with the wire speed you need. If I look at something like Layer 7 Technologies, they have their XML accelerators, and I see that as a perfect way to utilize a piece of hardware to run something that needs to go fast.
I see the two sides converging, but at the same time, I see there being something very valid about a piece of software that acts like an appliance. Layer 7, for example, released what they called a "virtual soft-appliance." If it quacks like a duck, and walks like a duck, it is a duck, right?
But the difference is, it's just not going to go as fast as it would on the Layer 7 device. If your enterprise can get all the advantages it needs from a piece of software running on a single piece of server hardware, then you don't need that extra performance. I don't see that as a problem or something we should try to shun.
Ricotta: The idea with an appliance is that the clients don't care what's inside. They care about the functions that the device performs. The way we have architected our product, we do have lots of choices. We can pick the right processors and, even before we became part of IBM, we had used some ASICs to speed up certain parts of the XML processing pipeline.
Now, we are doing much more of that, and we've got some new projects kicked off, because IBM has a lot of state-of-the-art custom silicon and ASIC technology. So, yes, we will continue to leverage whatever hardware constructs give us the qualities we need in performance, cost, and reliability, and we will continue to shield the IT users from that, because they don't really want to see it.
Ricotta: We'll be active in both. You'll hear from us, later this year, as well as next year.
Kobielus: They dovetail, because the very concept of an appliance is something that’s loosely coupled. It’s a basic, discrete component of functionality that is loosely coupled from other components. You can swap it out independently from other components in your architecture, and independently scale it up or scale it out, as your traffic volume grows, and as your needs grow. So, once again, an appliance is a tangible service.
Shimmin: I see it similarly, in that an appliance can act as an enabler for other pieces of software, providing the level of performance and scalability that those pieces can't achieve on their own, as we're seeing with ESBs and other areas. Those pieces of software desperately need some piece of hardware somewhere that can get them the information they need in a timely manner.
Gardner: Do you think there is a parallel here between what we've seen on the World Wide Web -- in terms of content delivery networks and application management and acceleration -- and what enterprises are going to want to do internally? And not just enterprises, but also service providers, those who are going to be doing software-as-a-service (SaaS) and co-location activities, similar to what we've seen from Amazon and others?
I'll throw this back to Jim Ricotta. Is there a bit more than what we are discussing in terms of the role here?
Ricotta: We definitely see some parallels to what went on with the Web and CDNs. We have some discussions underway with network providers that have big corporate clients who are now launching their first B2B Web services, and they are basically utilizing SOA-type functions between organizations across Wide Area Networks. These carriers are looking at how to provide a value-added service, a value-added network to this growing volume of XML, SOA-type traffic. We see that as a trend in the next couple of years.
Baer: I have a very short observation, which is that history tends to go in cycles, and I imagine or recall similar discussions with the CAD/CAM vendors back in the 1980s with all their turnkey systems.
Baer: Exactly. So, appliances are not new in this space. There’s always been a need to do optimized processing. We've just taken a detour during the era of open systems, but now we can start the approach again without the religious wars that we fought about 10 or 15 years ago.
Ricotta: Let me make one comment also. I've heard a lot about performance in appliances, and I want to implore you all to think more broadly, and maybe talk to someone like Todd, who has done the ROI, the evaluations, and all that kind of thing. It's really much more. In fact, when I talk to our customers, it's about TCO first, then "time to solution" and "time to deployment," and then performance.
Ricotta: I can give you some data points that I collected. I’ve heard big global IT organizations, when they do their TCO calculation, say a router is $100 a month to support, a server is $500, and a DataPower SOA appliance is maybe $200 to $250. Those are the kind of ranges I hear.
Biske: Something that hasn’t been brought up, and I think it’s something organizations have to consider, when they look at appliances versus software-based solutions, is the operational model. A lot of this space in the middle in SOA is all about what I would call a "configure-not-code" approach. Appliances, by definition, are something you configure, not something that you are going to be developing code for. So, it’s really tuned for an operational model, and not for a developer having to go in and tinker around with it.
A lot of the vendors claiming to produce software appliances are now trying to move closer to that. There's still a big conceptual difference there, and that's really where a lot of the savings in total cost of ownership can come in: how much work you have to go through to actually make a change to the policies being enforced by this software appliance or device. There are big differences between the products out there.
Biske: Absolutely, but the key to it all that Jim mentioned earlier on is standards. You don’t have much of a market for devices in the space, unless you’ve got the standards.
On BPEL4People and WS-HumanTask ...
Baer: It's interesting that they issued the two spec proposals separately. But it's not any type of surprise. IBM and SAP have been talking about this for about 18 months to two years, if I recall. What was a little interesting was that Oracle originally dissented from this, and now Oracle is part of that team.
Essentially, what the hubbub is all about is that all the SOA folks have looked at BPEL and found something interesting. It does well with machine-to-machine interactions, or at least with automated processes designed to trigger other automated processes based on various conditions and scenarios, and to do it dynamically. But the one piece that was missing is that most processes are not 100 percent automated. There's going to be some human input somewhere. It was pointed out that this is a major shortcoming of the BPEL spec.
So, IBM, SAP, Oracle, BEA, Adobe, and Active Endpoints have put together a proposal to patch this gap, saying in effect, "We're going to submit it to OASIS, and we're going to do it with two pieces. One we're going to call BPEL4People. We're going to add a stopping point that says, put a human task here." That's essentially BPEL4People. It's a little more than that, but it essentially boils down to that.
In terms of the actual description of the task -- the semantics of the task -- this could be a whole separate standard called WS-HumanTask. Where I tend to see the value in this is that invoking a human task as a service does not necessarily require a relationship with orchestration. You don't necessarily have to orchestrate in order to invoke a human task.
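The split between the two proposals can be sketched roughly as follows. This is an illustrative approximation only: at the time of this discussion both specs were vendor proposals, so the element names and namespaces below are hypothetical stand-ins, not the definitive syntax.

```xml
<!-- Illustrative sketch only: element and namespace names approximate the
     BPEL4People / WS-HumanTask proposals and are not the definitive syntax. -->
<bpel:process name="OrderApproval"
    xmlns:bpel="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
    xmlns:b4p="urn:example:bpel4people"
    xmlns:htd="urn:example:ws-humantask">

  <bpel:sequence>
    <bpel:receive partnerLink="client" operation="submitOrder"
                  variable="order" createInstance="yes"/>

    <!-- BPEL4People's contribution: the "stopping point" where a human
         task is invoked inside an otherwise automated orchestration. -->
    <bpel:extensionActivity>
      <b4p:peopleActivity name="approveOrder"
                          inputVariable="order"
                          outputVariable="approval">
        <!-- The task itself is described separately, in WS-HumanTask
             terms: who may own it and what interface it exposes. -->
        <htd:task name="ApproveOrder">
          <htd:interface portType="ApprovalPT" operation="approve"/>
          <htd:peopleAssignments>
            <htd:potentialOwners>
              <htd:from logicalPeopleGroup="regionalManagers"/>
            </htd:potentialOwners>
          </htd:peopleAssignments>
        </htd:task>
      </b4p:peopleActivity>
    </bpel:extensionActivity>

    <bpel:reply partnerLink="client" operation="submitOrder"
                variable="approval"/>
  </bpel:sequence>
</bpel:process>
```

The point Tony makes is visible in the shape of the sketch: the people activity is BPEL4People's hook into the process flow, while the task description inside it could stand alone and be invoked as a service with no orchestration around it at all.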
What makes this a little more interesting than a normal spec announcement is that it's pretty controversial. It draws a lot of heated opinion; you don't sit on the fence on something like this. The BPM folks, who tend not to be IT folks but rather process analysts, said, "Heck, BPEL has never been robust enough for our needs. It's too simple. It's too much of a lowest common denominator. It doesn't represent the subtleties of complex processes."
So, you have this "tug-of-war." The announcement of BPEL4People and WS-HumanTask has simply not settled this. It's brought the issue back even louder. It just makes life kind of interesting here.
Biske: I think that we definitely need this. There's a constant tension in trying to take a business-process approach within IT when developing solutions. If you look at the products that are out there, you have one class of products, typically called "workflow products," that deal with human task management, and then you have these BPM products, or ESBs with orchestration in them, that deal with the automated processes. Neither one, on its own, gives you the full view of the business process.
As a result, there’s always this awkward hand-off that has to occur between what the business user is defining as the business process and what IT has to turn around and actually cobble together as a solution around that. Finally getting to a point where we’re saying, "Okay, let’s come up with something that actually describes the true business process in the business definition of it," is really important. The challenge, though, is that it does potentially involve a fundamental change to the architecture of the solution.
It’s very different to develop a middleware product that can handle human workflow, because now you’ve got to have that state management. Previously, in an orchestration product, you didn't really have to worry about the state. The initial process gets kicked off, it automates that all the way through to the end, and you’re done. Then, you can release all of those resources for processing.
Now, you have to sit and go into this "wait" cycle for humans to do what they need to do, and you have to have a fundamentally different architecture for the solutions that provide that. It will be interesting to see when we actually see products that are claiming to support BPEL4People, what it changes to the landscape these vendors provide, and whether they have to take two products they previously had and combine them into one.
Any response to that, Jim Kobielus or Brad Shimmin?
Kobielus: That’s right, Dana, because if you look at the whole notion of orchestration, it implies a rule-driven flow of context and control throughout a distributed process. It’s very much the machine assembly-line metaphor, but if you look at actual business processes, they’re very unstructured or semi‑structured and dynamically self-redefining. In other words, most real-world workflows are a coordination or collaboration process and not really amenable to strict rule definition or strict flow definitions upfront. Everything is very ad hoc.
I am not very sanguine about the prospects for BPEL4People to take off in terms of actual adoption in the real world, in real human-driven workflows. It's just too messy for standards.
Kobielus: There is a need for modeling tools that can help organizations define roles, rules, and routes among human beings within workflows. I see the human workflow industry and the BPM market as two distinct markets that don't really benefit from a common standards framework. I'm the jury that's still out on this whole issue.
Kobielus: They definitely are orthogonal.
Shimmin: I'm glad we’re doing this, although I also feel it’s weird that we’re pushing it out as two different standards, one with a really sad name, and the fact that if it takes the same course that we took with BPEL, it’s going to take another two years at least for this to become truly actionable.
Shimmin: No, the iPhone line was not to be confused with the BPEL4People line. This can be useful, if other standards come along for the ride, like BPMN. If we can pull those along together, then this is going to make a big difference for people. As Jim was saying, the idea of creating business processes that involve humans is nothing new, but it's something that's very fleeting and hard to nail down. The folks who have been doing B2B integration for years have been looking at this problem and trying to solve it, because most of their tasks, like order-to-cash, have some sort of human aspect to them, no matter what.
Ricotta: It does seem like you’re not going to be able to realize the vision of SOA, unless you can work in the human aspect. I haven’t spent a lot of time with things like BPEL and the top levels of the SOA stack, so I can’t really comment about how workable it is, but it seems like it certainly has to be addressed somehow.
Baer: I'm not very sanguine about BPEL4People. WS-HumanTask probably has some potentially interesting applications, if you have a very simple task -- a real commodity task that's done often and that you want to be able to reuse.
The fact that it's divorced from the BPEL4People stack is probably a good thing, because there's some use for this outside of that. I'm very leery about BPEL4People, and I think even the BPEL4People folks are not exactly sure of themselves either. The other thing I'll throw in -- and I'm not trying to imply any sort of ultimate solution -- is that there are other approaches being attempted to solve the problem and get around the bottleneck.
The analysts, the process folks, do take to modeling tools, because they provide a high-level picture of their processes. I don’t know what’s going to come of this, but you’re starting to see some efforts to make models executable. Now, it’s not going to boil the ocean or anything like that, but it’s an interesting approach and might have some niche uses.
Biske: Following up on Tony's and Brad's comments, from the enterprise perspective, I have much more interest in things like BPMN than in BPEL4People or even the original BPEL. With some of the Web services specifications, even when the tools tried to hide a lot of that, developers had to deal with WSDL. There was no getting around it.
A business process developer doesn’t have to deal with BPEL. They’re dealing with some graphical interface that the BPM product has provided, and behind the scenes, that may be turned into BPEL. I may want it for portability, if I decide to change my business process engine, but the average developer shouldn’t even have to see that.
They do need to work on things like the modeling tools. So, the efforts around BPMN are much more important for enterprise developers. The BPEL space is probably of interest just to the vendors in the space, so they can promote some level of portability for these solutions across products -- or, if you've got a heterogeneous environment, so you can make sure solutions work across that environment. The average developer shouldn't have to deal with it.
On iPhone Day and GPL v.3 Day ...
I suppose one of the things that's caught my attention about this is that, since the Microsoft-Novell covenant on patent issues and protection for Novell users of the SUSE Linux distribution, the people who were drafting this new version of the license decided that there was a loophole that needed closing.
Apparently, new terms were designed to prevent a repeat of the Microsoft patent covenant with Novell, and also to extend any patent protection to anyone using similar products under GPL v3. This license is a little bit murky as it comes out. Apparently, some people will be moving to it by default, without even knowing it. Other people have already put in statements saying that they're still going to adhere to GPL v2 and, therefore, not going to version 3. I think it's going to be a challenge for those using, deploying, and managing open source to sort that out.
We're also addressing some possible issues around Sun Microsystems' OpenSolaris kernel and operating system, and there might be an opportunity for them to come together in such a way that you could get OpenSolaris under a GPL v3 license. Sun has as much as said that it's interested, but hasn't committed.
There's also an interesting aspect in that GPL v3 is going to be closer to the Apache software license, with more agreement between the two, so that developers who have the ability can merge these two code bases without running afoul of, or being in violation of, either license. So, there are some potentially large impacts from the arrival of this new GPL v3.
Let's go around the table and see what the impressions are. Tony Baer, do you think this is a big deal, or does it cast more confusion? And do you think it's really politics more than technology?
Baer: My sense is it's going to cast more confusion. Even Linus Torvalds has come out against GPL v3, saying that it puts on too much of a straitjacket. I just think it's adding yet another new variant. If there were 50 open-source licenses before -- I'm just picking that number arbitrarily -- today there are now 51.
Baer: Right, but the thing is whether it's under GPL v2 or under GPL v3. I haven't followed this really closely, but I would presume that if you've licensed your code, or you're licensing code, under GPL v2, it's not automatically advanced to v3 -- but correct me if I'm wrong.
Baer: Okay, that would make sense. It's going to create a lot of confusion, because obviously the Microsoft-Novell deal was very controversial -- the idea that an open-source vendor would even concede that a non-open-source vendor might have some intellectual property rights here, after the joke of the SCO lawsuit. Novell was trying to get a halo effect, saying, "Hey, we'll protect all you SUSE Linux users," and instead it just incurred a lot of ire throughout the community, which said, "You just admitted that we might have some transgressions here."
Baer: Then Microsoft said, "No, no, no, that was our statement." So, I just don't see this solving anything. I think it's just adding a lot more turbulence to the waters.
Shimmin: The open-source community is playing right into the hands of the detractors of the open-source community with this GPL v3. So, I can understand why Linus Torvalds is against it, from that perspective alone. Even if it closes a loophole, it doesn't matter what it does if it fractures an already shattered -- if I can say something bold -- landscape in terms of licensing.
The people or the companies I worry about most with this are the ISVs who are utilizing open-source software for their wares. Over the last five or six years, we've seen a huge upswing, and companies are making good money utilizing open source in their foundations. If you don't have to build a J2EE server, great. You can build something on top of that and make good money on it. But now they're going to be dissuaded a bit, and they're going to have to look over their shoulders a lot more than they did in the past.
Shimmin: Well, go all the way to the MIT license, if you really want to take away any limitations and restrictions. You're going to see a lot of vendors try to use software under those more open licenses. If they can utilize something that's not going to come back and bite them two years later, they'll do it. Wouldn't you?
Biske: It’s probably going to make things more complicated. Again, I have been involved with enterprises where, as they figure out how they were going to leverage open source, they had to get the legal department involved. The more complexities that are introduced into that environment, the longer it’s going to take and the more painful it’s going to be for developers who want to leverage some of these solutions.
While we haven't seen any significant legal activity around the use of open source -- with the exception of the IBM and SCO efforts and some of the other things out there -- you haven't really seen any end users targeted. Should we get to that point, enterprises are really going to be running in terror from anything open source. I hope that doesn't happen, because that's really against everything the open-source community is trying to achieve. I tend to be pragmatic on these things, and any time I see someone taking an extreme position, it gives me concern.
Ricotta: One thing I've observed, being part of IBM and product development, is that we devote a lot of resources -- time, engineers, lawyers, and other people -- to being very, very certain that, if any open-source code is used, we know its origin and we can clear the legal hurdles. What this means is we're going to have to spend even more resources doing that. I don't know if there's a solution in sight, but that's our view of it.
Ricotta: Yeah, and companies, ISVs, or commercial producers of software will still utilize open source, but it’s going to be more work to do it, more expensive, and the clients, the enterprise architects and others who select the vendors, are going to have to ask more tough questions.
Ricotta: I don’t know about that, but it ends up raising the cost of development of software. That’s for sure.
We’ve been joined by Tony Baer, principal at onStrategies. Thanks, Tony.
Baer: Thanks, Dana.
Kobielus: It’s been a pleasure.
Shimmin: Thanks, Dana.
Biske: Thanks, Dana. Good luck with your iPhone.
Ricotta: Thank you for asking me, and a very good discussion. Glad to be part of it.
Listen to the podcast here. Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media content production. If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts, or to become a sponsor of this or other B2B podcasts, please feel free to contact Interarbor Solutions at 603-528-2435.
Transcript of Dana Gardner’s BriefingsDirect SOA Insights Edition, Vol. 21. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.