Thursday, May 09, 2013

Thomas Duryea's Journey to Cloud Part 2: Helping Leading Adopters Successfully Solve Cloud Risks

Transcript of a BriefingsDirect discussion on how a stepped approach helps an Australian IT service provider smooth the way to cloud benefits at lower risk for its customers.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Our latest podcast discussion centers on how a leading Australian IT services provider, Thomas Duryea Consulting, has made a successful journey to cloud computing.

We'll learn how a cloud-of-clouds approach provides new IT services for Thomas Duryea's many Asia-Pacific region customers. Our discussion today continues a three-part series on how Thomas Duryea, or TD, designed, built and commercialized an adaptive cloud infrastructure.

The first part of our series addressed the rationale and business opportunity for TD's cloud-services portfolio, which is built on VMware software. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

This second installment focuses on how a variety of risks associated with cloud adoption and cloud use have been identified and managed by actual users of cloud services.

Learn more about how adopters of cloud computing have effectively reduced the risks of implementing cloud models. Here to share the story on this journey, we're joined once again by Adam Beavis, General Manager of Cloud Services at Thomas Duryea in Melbourne, Australia.

Welcome back, Adam.

Adam Beavis: Thank you, Dana. Pleasure to be here.

Gardner: Adam, we've been talking about cloud computing for years now, and I think it's pretty well established that we can do cloud computing quite well technically. The question that many organizations keep coming back with is whether they should do cloud computing. If there are certain risks, how do they know which risks are important? How do they get through that? What are you learning so far at TD about risk and how your customers face it?

Beavis: People are becoming more comfortable with the cloud concept as cloud becomes more mainstream, but we're seeing two sides to the risks. One is the technical risk: how the applications actually run in the cloud.

Moving off-site

What we're also seeing -- more at a business level -- are concerns like privacy, security, and maintaining service levels. We're seeing that pop up more and more, where the technical validation of the solution gets signed off by the technical team, but then the concerns begin to move up to the board level.

We're seeing intense interest in the availability of the data. How do they control that, now that it's been handed off to a service provider? We're starting to see some of those risks coming more and more from the business side.

Gardner: I've categorized some of these risks over the past few years, and I've put them into four basic buckets. One is the legal side, where there are licenses and service-level agreements (SLAs), issues of ownership, and permissions.

The second would be longevity. That is to say, will the service provider be there for the long term? Will they be a fly-by-the-seat-of-the-pants organization? Are they going to get bought and maybe merged into something else? Those concerns.

The third bucket I put them in is complexity, and that has to do with the actual software, the technology, and the infrastructure. Is it mature? If it's open source, is there a risk of forking? Is there a risk around who owns that software, and is it stable?

And then last, the long-term concern, which always comes back, is portability. You mentioned that about the data and the applications. We're thinking now, as we move toward more software-defined data centers, that portability would become less of an issue, but it's still top of mind for many of the people I speak with.

So let's go through these, Adam. Let's start with that legal concern. Do you have any organizations that you can reflect on and say, here is how they did it, here is how they have figured out how to manage these license and control of the IP risks?

Beavis: The legal one is interesting. As a case study, there's a not-for-profit organization for which we were doing some initial assessment work, where we validated the technical risk and evaluated how we were going to access the data once the information was in a cloud. We went through that process, and that went fine, but obviously it then went up to the legal team.

One of the big things that the legal team was concerned about was what the service-level agreement was going to be, and how they could capture that in a contract. Obviously, we have standard SLAs, and being a smaller provider, we're flexible with some of those service levels to meet their needs.

But the one that they really started to get concerned about was data availability ... if something were to go wrong with the organization. It probably jumps into longevity a little bit there. What if something went wrong and the organization vanished overnight? What would happen with their data?

Escrow clause

That's where we see legal teams getting involved and starting to put in things like the escrow clause, similar to what we've had with software as a service (SaaS) for a long time. We're starting to see organizations' law firms focus on these clauses, not just for SaaS, but for infrastructure as a service (IaaS) as well. It provides a way for user organizations to access their data if a provider organization like TD were to go down.

So that's one that we're seeing at the legal level. Around the terms and conditions, once again being a small service provider, we have a little more flexibility in what we can provide to the organizations on those.

Once our legal team sits down and agrees on what they're looking for and what we can do for them, we're able to make changes. With larger organizations, where SLAs are often set in stone, there's no flexibility about making modifications to those contracts to suit the customer.

Gardner: Let's pause here for a second and learn more about TD for those listeners who might be new to our series. Tell us about your organization, how big you are, and who your customers are, and then we'll get back into some of these risks issues and how they have been managed.

Beavis: Traditionally, we came from a system-integrator background, based on the east coast of Australia -- Melbourne and Sydney. The organization has been around for 12 years and had a huge amount of success in that infrastructure services arena, initially with VMware.

The company then expanded heavily into the enterprise information systems area. We still have a large focus on infrastructure and, more recently, cloud. We've had a lot of success with the cloud, mainly because we can combine it with managed services.

We go to market with cloud. It's not just a platform where people come and dump data or an application. A lot of the customers that come into our cloud have some sort of managed service on top of that, and that's where we're starting to have a lot of success.

As we spoke about in part one, our customers drove us to start building a cloud platform. They could see the benefits of cloud, but they also wanted to ensure that the cloud they were moving to was backed by an organization that could support them beyond the infrastructure.

That might be looking after their operating systems, looking after some of the applications we specialize in, such as Citrix, or looking after their Microsoft Exchange servers once they move to the cloud, and then attaching those applications. That's where we are with the cloud at the moment.

Gardner: Just quickly revisiting those legal issues, are you finding that this requires collaboration and flexibility from both parties, finding a path that assuages risk for one party but protects the other? Is this a back-and-forth activity? It surely requires some agility, but also some openness. Tell me about the culture at TD that allows you to do that well.

Personality types

Beavis: It does, because we're dealing with different personality types. The technical teams understand cloud and some love it and push for it. But once you get up to that corporate board level, the business level, some of the people up there may not understand cloud -- and might perceive it as more of a risk.

Once again, that's where that flexibility of a company like TD comes in. Our culture has always been "customers first," and we build the business around the longevity of their licenses. That's one of the core, underlying values of TD.

We make sure that we work with customers, so they are comfortable. If someone in the business at that level isn't happy, and we think it might have been the contract, we'll work with them. Our legal team will work with them to make sure we can iron that out, so that when they move across to cloud, everybody is comfortable with what the terms and conditions are.

Gardner: Moving toward this issue of longevity -- I suppose stability is another way to look at it -- is there something about the platform and the industry-standard decisions that you've made that helps your customers feel more comfortable? Do they see less risk because, even though your organization is one organization, the infrastructure is broader, and there's some stability that comes with that?

Beavis: Definitely. Partnering with VMware was one of our core decisions, because our platform is standard VMware end to end. It really gives us an advantage when addressing that risk, when organizations ask what happens if our company were to fail or if they're not happy with the service.

The great thing is that within our environment -- and it's one part of VMware's vision -- you can pick up those applications and move them to another VMware cloud provider. Thank heaven, we haven't had that happen, and we intend it not to happen. But organizations understand that, if something were to go wrong, they could move to another service provider without having to re-architect those applications or make any major changes. This is one area where we're getting ahead of that longevity risk discussion.

Gardner: Any examples come to mind of organizations that have come to you with that sort of a question? Is there any sort of an example we can provide for how they were reducing the risk in their own minds, once they understood that extensibility of the standard platform?

Beavis: Once again, it was a not-for-profit organization recently where that happened. We documented the platform. We then advised them on the escrow organizations, so that if something were to happen to TD, they would have an end-to-end process for getting their data back and restoring it with another cloud provider -- all running on common VMware infrastructure.

That made them more comfortable with what we were offering: the fact that there was a way out and their data would not disappear. As I said, it's something that SaaS organizations have been doing for a long time, and we're only just starting to see it more and more now when it comes to IaaS and cloud hosting.

Gardner: Now the converse of that would be some of your customers who have been dabbling in cloud infrastructure themselves, perhaps with open-source frameworks of some kind, or integrating their own components of openly available or licensed software. What have you found when it comes to their sense of risk, and how does that compare to what we just described in terms of having stability and longevity?

More comfortable

Beavis: Especially in Australia, probably 85 percent to 90 percent of organizations have some sort of VMware in their data center. They no doubt gravitate toward providers that are running familiar platforms, with teams familiar with VMware. They're more comfortable that we, as a service provider, are running a platform that they're used to.

We'll probably talk about the hybrid cloud a bit later on, but that ability for them to maintain control in a familiar environment, while running some applications across in the TD cloud, is something that is becoming quite welcome within organizations. So there's no doubt that choosing a common platform that they're used to working on is giving them the confidence to start to move to the cloud.

Gardner: Do you have any examples of organizations that may have been concerned about platforms or code forking -- or of not having control of the maturity around the platform? Are there any real-life situations where the choice had to be made, weighing the pros and cons, but then coming down on the side of the established and understood platform?

Beavis: Some providers aren't promoting what their platform is. It could be built on OpenStack or other platforms; we're not quite sure what they're running underneath.

We've had some customers say that some service providers aren't revealing exactly what their platform is, and that was a concern to them. So it's not a criticism of any particular platform, but there's no doubt that some customers still want to understand what the underlying infrastructure is, and I think that will remain the case for quite a while.

At the moment, as they are moving into cloud for the first time, people do want to know what that platform underneath is.

It also comes down to knowing where the data is going to sit as well. That's probably the big one we're seeing more and more, and it's been a bit of a surprise to me -- the concerns people have around things like data sovereignty and the Patriot Act. People are quite concerned about that, mainly because their legal teams are dictating to them where the data must reside. That can be anything from state-based to country-based restrictions, where the data cannot leave the specified region.

Gardner: I suppose this is a good segue into this notion of how to make your data, applications, and the configuration metadata portable across different organizations, based on some kind of a standard or definition. How does that work? What are the ways in which organizations are asking for and getting risk reduction around this concept of portability?

Beavis: Once again, it's about having a common way that the data can move across. The basics come back to that hybrid-cloud model initially -- how people get things out. One of the things that we see more and more is that it's not as simple as people moving legacy applications up to the cloud.

To reduce that risk, we're doing a cloud-readiness assessment, where we come in and assess what the organization has, what their environment looks like, and what's happening within the environment, running things like the vCenter Operations tools from VMware to right-size those environments to be ready for the cloud.

Old data

We're seeing a lot of that, because there's no point moving a ton of data out there and putting it on live platforms that are going to cost quite a bit of money if it's two or four years old. We're seeing a lot of solution architects out there right-sizing those environments before they move up.
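
To make that right-sizing step concrete, here is a minimal, hypothetical sketch of the idea. It is not TD's assessment tooling (which Adam says uses VMware's vCenter Operations); the VM names, utilization figures, and 30 percent headroom are illustrative assumptions.

```python
# Hypothetical right-sizing pass: flag VMs whose observed peak utilization is far
# below what has been provisioned, so they can be resized before moving to cloud.
from dataclasses import dataclass

@dataclass
class VMProfile:
    name: str
    vcpus: int
    ram_gb: int
    peak_cpu_pct: float   # observed peak CPU utilization over the assessment window
    peak_ram_pct: float   # observed peak memory utilization over the same window

def recommend(vm: VMProfile, headroom: float = 1.3) -> dict:
    """Suggest a smaller allocation that still leaves ~30% headroom over observed peaks."""
    cpu_needed = max(1, round(vm.vcpus * vm.peak_cpu_pct / 100 * headroom))
    ram_needed = max(1, round(vm.ram_gb * vm.peak_ram_pct / 100 * headroom))
    return {"vm": vm.name, "vcpus": cpu_needed, "ram_gb": ram_needed,
            "oversized": cpu_needed < vm.vcpus or ram_needed < vm.ram_gb}

inventory = [
    VMProfile("crm-app-01", vcpus=8, ram_gb=32, peak_cpu_pct=22, peak_ram_pct=35),
    VMProfile("sql-db-01", vcpus=16, ram_gb=64, peak_cpu_pct=78, peak_ram_pct=81),
]

for vm in inventory:
    print(recommend(vm))
```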

Gardner: Is there a confluence between portability and what organizations are doing with disaster recovery (DR)? Maybe they're mirroring data and/or infrastructure and applications for purposes of business continuity and then are able to say, "This reduces our risk, because not only do we have better DR and business continuity benefits, but we’re also setting the stage for us to be able to move this where we want, when we want."

They can create a hybrid model, where they can pick and choose on-premises versus a variety of other cloud providers, and even decide on those geographic or compliance issues as to where they actually physically place the data. That's a big question, but how does business continuity, as part of this movement toward lower risk, pan out?

Beavis: That's actually one of the biggest movements that we're seeing at the moment. Organizations, when they refresh their infrastructure, don't see the value in refreshing DR on-premises. The first step to cloud becomes, "Let's move DR out to the cloud, and replicate from on-premises out into our cloud."

Then, as you said, we have the advantage of being able to start doing things like IaaS testing, understanding how those applications are going to work in the cloud, tweaking them, getting the performance right, and doing that with little risk to the business. Obviously, the production machine will continue to run on-premises while we're testing snapshots.

It's a good way to get a live snapshot of that environment and see how it's going to perform in the cloud, how your users are going to access it, what the bandwidth looks like, and all the things you need to work through before standing it up. DR is still the number one use case that we're seeing people move to the cloud.

Gardner: As we go through each of these risks, and I hear you relating how your customers and TD, your own organization, have reacted to them, it seems to me that, as we move toward this software-defined data center -- where we can abstract away from the physical hardware and the physical facilities and move things around in functional blocks -- this really solves a lot of these risk issues.

You can manage your legal, your SLAs, and your licenses better when you know that you can pick and choose the location. That longevity issue is solved, when you know you can move the entire block, even if it's under escrow, or whatever. Complexity and fear about forking or immaturity of the infrastructure itself can be mitigated, when you know that you can pick and choose, and that it's highly portable.

It's a round-about way of getting to the point of this whole notion of software-defined data center. Is that really at heart a risk reduction, a future direction, that will mitigate a lot of these issues that are holding people back from adopting cloud more aggressively?

Beavis: From a service provider's perspective, it certainly does. The single-pane management view that you have now, where you can control everything -- the network, the compute, and the storage -- certainly reduces risk, compared with needing several tools to do that.

Backup integration

And the other area where the vendors are starting to work together is the integration of things like backup and, as we spoke about earlier, DR. Tools now sit natively within that VMware software-defined data center stack, written to the vSphere API, rather than us having to retrofit products to achieve file-level backups within a virtual data center, within vCloud. Pretty much every day you wake up, there's a new tool that's supported within that stack.

From a service provider's perspective, it's really reducing the risk and the time to market for new offerings, but from a customer's perspective, it's really about getting the experience they're used to. Having the same experience on-premises and in the TD cloud makes it a lot easier for them to start to adopt and consume the cloud.
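
As an illustration of what tooling "written to the vSphere API" can look like, here is a minimal sketch using the open-source pyVmomi Python bindings to request quiesced snapshots ahead of a backup pass. The host, credentials, and snapshot names are placeholders, and this shows only the general pattern, not the specific backup products Adam describes.

```python
# Illustrative only: enumerate VMs via the vSphere API and request quiesced snapshots,
# the kind of hook a backup or DR tool can build on. Placeholders, not production code.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab shortcut; verify certificates in production
si = SmartConnect(host="vcenter.example.local", user="backup-svc",
                  pwd="secret", sslContext=context)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    # Quiesced snapshots give backup tooling a consistent point-in-time image.
    vm.CreateSnapshot_Task(name="pre-backup", description="nightly pass",
                           memory=False, quiesce=True)
    print(f"Snapshot requested for {vm.name}")

view.Destroy()
Disconnect(si)
```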

Gardner: One last chance, Adam, for any examples. Are there any other companies that you would like to bring up that illustrate some of these risk-mitigation approaches that we've been discussing?

Beavis: Another one was a medical organization. It goes back to what we were saying earlier. They had to get a DR project up and running, so they moved that piece to the cloud, and they were unsure whether they would ever move any of their production data out. But after six months of running DR in the cloud, we started to provide them some capacity.

The next thing was that they had a new project, putting in a new portal for e-learning. They decided for the first time, "We've got the capacity sitting over in the cloud. Let's start to do that there." So they've started to migrate all of their test and dev environment out there, because in their minds the success they had with DR reduced the risk around uptime in the cloud. They had all the statistics and reporting back on the stability of that environment.

Then they became comfortable moving the next segment, which was the test and dev environment. If all goes well, that application will run out of the cloud and will be their first application out there.

That was a company that was very risk averse, and the DR project took a lot of effort to get across the line in the first place. We'll probably see that, in six to eight months, they're going to be running some of their core applications out of the cloud.

We'll start to see that more and more. The customers' roadmap to the cloud will move from DR to maybe some test and dev, and then new applications. Then, as the refresh cycle comes up for their on-premises infrastructure, they'll be in a situation where they have completed the testing for those applications and feel comfortable moving them out to the cloud.

Gardner: That really sounds like an approach to mitigating risk when it comes to the cloud: gradual adoption -- learn, test, and then reapply.

Beavis: It is, and one of the big advantages we have at TD is the support around a lot of those applications, as people move out -- how Citrix is going to work in the cloud, how Microsoft Exchange is going to work in the cloud, and how their other applications will work. We have the team here that can really make sure we architect or build those apps correctly as they start to move them out.

So a lot of customers are comfortable having a full-service provider, rather than just a platform to throw everything onto.

Gardner: Great. We've been discussing how a leading Australian IT service provider, Thomas Duryea Consulting, has made a successful journey to cloud computing. This sponsored second installment, on how a variety of risks associated with cloud adoption have been identified and managed, is part of a three-part series on how TD designed, built, and commercialized a vast cloud infrastructure built on VMware.

We've seen how, through a series of use-case scenarios, a list of risks has been managed. And we also developed a sense of how a risk roadmap can be balanced, starting with disaster recovery and then learning from there. I thought that was a really interesting new insight into the market.

So look for the third and final chapter in our series soon, and we'll then explore the paybacks and future benefits that a cloud ecosystem provides for businesses. We'll actually examine the economics that compel cloud adoption.

With that, I’d like to thank our guest Adam Beavis, the General Manager of Cloud Services at Thomas Duryea Consulting in Melbourne, Australia. This was great, Adam. Thanks so much.

Beavis: Absolute pleasure.

Gardner: And of course, I would like to thank you, our audience, for joining as well. This is Dana Gardner, Principal Analyst at Interarbor Solutions.

Thanks again for listening, and don't forget to come back next time for the next BriefingsDirect podcast discussion.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how a stepped approach helps an Australian IT service provider smooth the way to cloud benefits at lower risk for its customers. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.



Monday, April 22, 2013

Service Virtualization Brings Speed Benefit and Lower Costs to TTNET Applications Testing Unit

Transcript of a BriefingsDirect podcast on how Türk Telekom subsidiary TTNET has leveraged Service Virtualization to significantly improve productivity.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and transformation.

Once again we're focusing on how software improvements and advanced HP Service Virtualization (SV) solutions are enabling IT leaders to deliver better experiences and payoffs for businesses and end-users alike.

Today we're going to learn how TTNET, the largest internet service provider in Turkey, with six million subscribers, has significantly improved applications deployment, while cutting costs and time to delivery.

With that, let's join our guest, Hasan Yükselten, Test and Release Manager at TTNET, which is a subsidiary of Türk Telekom, and they're based in Istanbul. Welcome to the show, Hasan.

Hasan Yükselten: Thank you.

Gardner: Before we get into this discussion of how you’ve used SV in your testing, what was the situation there before you became more automated and before you started to use more software tools? What was the process before that?

Yükselten: Before SV, we had to use the other parties' test infrastructures in our test cases. We're the leading ISP in Turkey. We deploy more than 200 applications per year, and we have to provide better and faster services to our customers every week and every month.

We mostly had problems with issues such as accessibility, authorization, downtime, and private data when reaching the third-party infrastructures. So we needed virtualization in our test systems, and we needed automation to get faster deployment and shorter release times. And of course, we needed to reduce our costs. So we decided to solve the company's problems by implementing SV.

Gardner: What did you do to begin this process of getting closer to a faster and automated approach? Did you do away with scripts? Did you replace them? How did you move from where you were to where you wanted to be?

Yükselten: Before SV, we couldn't do automation, since the other parties were in separate locations and it was difficult to reach their systems. We could automate functional test cases, but for end-to-end test cases, it was impossible to do automation.

First, we implemented SV to virtualize the other systems, and we put SV between our infrastructure and the third-party infrastructure. We learned the requests and responses and could then use SV instead of the other parties' infrastructure.

Automation tools

After this, we could also use automation tools. We managed to use automation tools by integrating Unified Functional Testing (UFT) with the SV tools, and now we can run automated test cases and end-to-end test cases on SV.
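
To make the learn-and-replay idea concrete, here is a small, hypothetical sketch of the pattern Hasan describes: a component sits between the test environment and a third-party service, records request/response pairs in learning mode, and replays the recorded responses in simulation mode so end-to-end tests can run without the third party. It illustrates the concept only and is not how HP Service Virtualization is implemented; the endpoint shown is a placeholder.

```python
# Hypothetical record/replay virtualization of a third-party service (concept sketch only).
import json
import requests  # standard HTTP client library

class VirtualService:
    def __init__(self, real_base_url: str, store_path: str = "recordings.json"):
        self.real_base_url = real_base_url
        self.store_path = store_path
        self.recordings: dict[str, dict] = {}
        self.mode = "learn"  # "learn" = pass through and record, "simulate" = replay

    def _key(self, method: str, path: str, body: str) -> str:
        return f"{method} {path} {body}"

    def handle(self, method: str, path: str, body: str = "") -> dict:
        key = self._key(method, path, body)
        if self.mode == "learn":
            # Forward to the real third-party system and remember its answer.
            resp = requests.request(method, self.real_base_url + path, data=body, timeout=10)
            self.recordings[key] = {"status": resp.status_code, "body": resp.text}
            return self.recordings[key]
        # Simulation mode: answer from recorded data, no third party needed.
        return self.recordings.get(key, {"status": 404, "body": "no recording for this request"})

    def save(self):
        with open(self.store_path, "w") as f:
            json.dump(self.recordings, f)

# Usage sketch: record once against the real system, then run end-to-end tests offline.
# svc = VirtualService("https://identity.example.gov")  # placeholder third-party endpoint
# svc.handle("GET", "/citizen/12345")                   # learning pass
# svc.mode = "simulate"
# svc.handle("GET", "/citizen/12345")                   # replayed during automated tests
```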

Gardner: Was there anything about this that allowed you to have better collaboration between the developers and the testers? I know that in many companies this is a linear progression -- develop, then test -- and there's often not a lot of communication between the two. Was there anything about what you've done that's improved how developers and testers coordinate and collaborate?

Yükselten: We started to use SV in our test systems first. When we saw the success, we decided to implement SV for the development systems as well. But we've only just implemented SV on the development side, so I can't give results yet. We have to wait and see, for maybe one month, before I can answer that question.

Gardner: Tell me about the types of applications that you're using here as a large internet service provider. Are these internal apps for your organization? Are they customer-facing apps for billing, service procurement, and provisioning? Give me a sense of the types of applications we're talking about.

Yükselten: We are mostly working on customer relationship management (CRM) applications. We deploy more than 200 applications per year and we have more than six million customers. We have to offer new campaigns and make some transformations for new customers, etc.

We have to save all the information, and while saving it, we also interact with other systems -- for example, the National Identity System, telecom systems, and public switched telephone network (PSTN) systems.

We have to request information and make requests to those other systems. So we need to use all the other systems within our CRM systems. And we also have internet protocol television (IPTV) products, value-added services products, and the company's other products. But basically, we're using CRM systems for our development and for our systems.

Gardner: So clearly, these are mission-critical applications essential to your business, your growth, and your ability to compete in your market.

Yükselten: If there is a mistake, a big error in our system, the next day, we cannot sell anything. We cannot do anything all over Turkey.

Gardner: Let's talk a bit about the adoption of your SV. Tell me about some of the products you’re using and some of the technologies, and then we’ll get into what this has done for you. But, let's talk about what you actually have in place so far.

Yükselten: Actually, it was very easy to adopt these products into our system because, including the proof of concept (PoC), we were able to use the tool within six weeks. We spent the first two weeks on the PoC, and after four more weeks, we were using the tool.

Easy to implement

Within the first six weeks, we could run 45 percent of our end-to-end test cases on SV. In 10 weeks, 95 percent of our test cases could be run on SV. It was very easy to implement. After that, we also implemented two other SV instances in our other systems. So we're now running three SV systems: one for development, one just for the campaigns, and one for the end-to-end (E2E) tests.

Gardner: Tell me how your relationship with HP Software has been. How has it been working with HP Software to attain this so rapidly?

Yükselten: HP Software helped us very much, especially R&D. HP Turkey helped us, because we were also using application lifecycle management (ALM) tools before SV. We were using QTP, LoadRunner, QC, and so on, so we already had a good relationship with HP Software.

Since SV is a new tool, we needed a lot of customization for our needs, and HP Software was always with us. They were very quick to answer our questions and respond to our development needs. We managed to use the tool in six weeks because of HP's rapid solutions.

Gardner: Let’s talk a little bit about the scale here. My understanding is that you have something on the order of 150 services. You use 50 regularly, but you're able to then spin up and use others on a more ad-hoc basis. Why is it important for you to have that kind of flexibility and agility?

Yükselten: As you say, we virtualized more than 150 services, but we use 48 of them actively. We use this portion of the services because we virtualized our third-party infrastructures for our needs. For example, we virtualized all the other CRM systems, but we don't need all of them. Through the gateway, you can simulate all the other web services completely. So we virtualized all the web services, but we use just what we need in our test cases.

Gardner: And this must be a major basis for your savings when you only use what you need. The utilization rate goes up, but your costs can go down. Tell us a little bit about how this has been an investment that’s paid back for you.

Yükselten: We got the investment back within three months actually, maybe less than three months. It could have been two and a half months. For example, for the campaign test cases, we gained 100 percent efficiency. Before HP, we could run just seven campaigns in a month, but after HP, we managed to run 14 campaigns in a month.

We gained 100 percent efficiency and three man-months this way, because three test engineers were working on campaigns like this. For another example, last month we got the metrics and saw that we had a total blockage of seven days out of 21 working days in March. We saved 33 percent of our manpower with SV, and there are 20 test engineers working on it. So we gained 140 man-days last month.

For our basic test scenarios, we used to run all test cases in 112 hours. After SV, we managed to run them in 54 hours. So we gained 100 percent efficiency in that area, and we also managed to automate the campaign test cases. We automated 52 percent of our campaign test cases, and that meant a very big efficiency gain for us. In total, we save more than $50,000 per month.
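As a rough sanity check on those figures, the arithmetic below uses the numbers Hasan quotes, with the recovered effort expressed in engineer-days, which is what the 20-engineer, seven-day blockage figures imply.

```python
# Back-of-the-envelope check of the savings described above (figures from the interview).
engineers = 20
blockage_days = 7          # days of third-party blockage avoided in March
working_days = 21          # working days in March

blocked_share = blockage_days / working_days          # ~0.33 -> the "33 percent" figure
effort_recovered = engineers * blockage_days          # 140 engineer-days in one month
print(f"Share of capacity recovered: {blocked_share:.0%}")
print(f"Effort recovered: {effort_recovered} engineer-days")

hours_before, hours_after = 112, 54                   # full basic test run, before/after SV
speedup = hours_before / hours_after - 1              # ~1.07 -> roughly "100 percent" faster
print(f"Test-cycle speedup: {speedup:.0%}")

campaigns_before, campaigns_after = 7, 14             # campaigns tested per month
print(f"Campaign throughput gain: {campaigns_after / campaigns_before - 1:.0%}")
```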

Broader applications

Gardner: That’s very impressive and that was in a relatively short period of time. Do you expect now to be able to take this to a larger set of applications, maybe beyond your organization, more generally across Türk Telekom?

Yükselten: Yes. Türk Telekom has licensed these tools and started to use them in its test services to get this efficiency in those systems. We have a sister company called AVEA, and they also want to use this tool. After we achieved this efficiency, many companies wanted to use this virtualization. Eight companies have visited us in Turkey to learn from our experience with the tool. Many companies want to use it in their test systems.

Gardner: Do you have any advice for other organizations like those you've been describing, now that you have done this? Any recommendations on what you would advise others that might help them improve on how they do it?

Yükselten: Companies must know their needs first. For example, in our company we have three third-party systems that cause blockages, and those systems don't change every day. So it was easy to implement SV in our systems and virtualize the other systems. We don't need to update the virtualization day by day, because the other systems don't change every day.

Once a month, we check and update our systems and our web services on SV, and this is enough for us. But if the other party's systems change daily or frequently, it may be difficult to keep the virtualization current.

This is an important point. Companies should think about automation alongside virtualization. It's also a very big source of efficiency, so it must be considered while doing virtualization.

Gardner: As to where you go next, do you have any thoughts about moving further with UFT or using cloud deployment models more? Where can you go to attain more benefits and efficiencies?

Yükselten: We started to use UFT integrated with SV. As I told you, we've managed to automate 52 percent of our campaign test cases so far. So we'd like to go on and try to automate more test cases: our end-to-end test cases, the basic scenarios, and other systems.

Our first goal is doing more automation with SV and UFT, and the other is using SV on the development side. We plan to find defects earlier in development and get higher-quality products into test.

Rapid deployment

Yükselten: Of course, in this way we get rapid deployment and shorter release times, because the product will have higher quality. Using performance testing with SV also helps us on performance. We use HP LoadRunner for our performance test cases. We have three goals now, and the last one is integrating SV with LoadRunner.

Gardner: Well, it's really impressive. It sounds as if you put in place the technologies that will allow you to move very rapidly, to even a larger payback. So congratulations on that.

Well, Hasan, I'm afraid we'll have to leave it there; we've run out of time. We've learned how TTNET, the largest internet service provider in Turkey, has significantly improved mission-critical application deployment, while also cutting costs and reducing that important time to delivery.

I'd like to first thank our supporter for this series, HP Software, and remind our audience to carry on the dialogue in the Discover Performance Group on LinkedIn. Of course, I'd like to extend a huge thank you to our special guest, Hasan Yükselten. He is the Test and Release Manager at TTNET, which is a subsidiary of Türk Telekom in Istanbul. Thanks so much, Hasan.

Yükselten: You're welcome, and thank you for your time too.

Gardner: And you can gain more insights and information on the best of IT Performance Management at www.hp.com/go/discoverperformance. And you can always access this and other episodes in our HP Discover Performance Podcast Series on iTunes under BriefingsDirect.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I've been your host and moderator for this discussion, part of our ongoing series on IT innovation. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how Türk Telekom subsidiary TTNET has leveraged Service Virtualization to significantly improve productivity. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Tuesday, April 09, 2013

Agnostic Tool Chain Approach Proves Key to Fixing Broken State of Data and Information Management

Transcript of a BriefingsDirect podcast on how Dell Software is working with companies to manage internal and external data in all its forms.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Dell Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on better understanding the biggest challenges businesses need to solve when it comes to data and information management.

We'll examine how a data dichotomy has changed the face of information management. This dichotomy means that organizations, both large and small, not only need to manage all of the internal data that provides intelligence about their businesses, but they also need to manage the increasing reams of external big data that enable them to discover new customers and drive new revenue.

Lastly, our discussion will focus on bringing new levels of automation and precision to the task of solving data complexity by embracing an agnostic, end-to-end tool chain approach to overall data and information management.

Here now to share his insights on where the information management market has been and where it's going, we're joined by Matt Wolken, Executive Director and General Manager for Information Management at Dell Software. Welcome, Matt. [Disclosure: Dell Software is a sponsor of BriefingsDirect podcasts.]

Matt Wolken: Dana, thanks for having me. I appreciate it.

Gardner: From your perspective, what are the biggest challenges that businesses need to solve now when it comes to data and information management? What are the big hurdles that they're facing?

Wolken: It's an interesting question. When we look at customers today, we're noticing how their environments have significantly changed from maybe 10 or 15 years ago.

About 10 or 15 years ago, the problem was that data was sitting in individual databases around the company, either in a database on the backside of an application, the customer relationship management (CRM) application, the enterprise resource planning (ERP) application, or in data marts around the company. The challenge was how to bring all this together to create a single cohesive view of the company?

That was yesterday's problem, and the answer was technology. The technology was a single, large data warehouse. All of the data was moved into it, and you then queried that large data warehouse, where all of the data resided, for a complete answer about your company.

What we're seeing now is that there are many complexities that have been added to that situation over time. We have different vendor silos with different technologies in them. We have different data types, as the technology industry overall has learned to capture new and different types of data -- textual data, semi-structured data, and unstructured data -- all in addition to the already existing relational data. Now, you have this proliferation of other data types and therefore other databases.

The other thing that we notice is that a lot of data isn't on premise any more. It's not even owned by the company. It's at your software-as-a-service (SaaS) provider for CRM, your SaaS provider for ERP, or your travel or human resources (HR) provider. So data again becomes siloed, not only by vendor and data type, but also by location. This is the complexity of today, as we notice it.

Cohesive view

All of this data is spread about, and the challenge becomes how do you understand and otherwise consume that data or create a cohesive view of your company? Then there is still the additional social data in the form of Twitter or Facebook information that you wouldn't have had in prior years. And it's that environment, and the complexity that comes with it, that we really would like to help customers solve.

Gardner: When it comes to this so-called data dichotomy, is it oversimplified to say it's internal and external, or is there perhaps a better way to categorize these larger sets that organizations need to deal with?

Wolken: There's been a critical change in the way companies go about using data, and you brought it out a little bit in the intro. There are some people who want to use data for an outcome-based result. This is generally what I would call the line-of-business concern, where the challenge with data is how do I derive more revenue out of the data source that I am looking at?

What's the business benefit for me examining this data? Is there a new segment I can codify and therefore market to? Is there a campaign that's currently running that is not getting a good response rate, and if so, do I want to switch to another campaign or otherwise improve it midstream to drive more real value in terms of revenue to the company?

That's the more modern aspect of it. All of the prior activity inside business intelligence (BI) -- let's flip those words around and say intelligence about the business -- was really internally focused. How do I get sanctioned data off of approved systems to understand the official company point of view in terms of operations?

That second goal is not a bad goal. That's still a goal that's needed, and IT is still required to create that sanctioned data, that master data, and the approved, official sources of data. But there is this other piece of data, this other outcome that's being warranted by the line of business, which is, how do I go out and use data to derive a better outcome for my business? That's more operationally revenue-oriented, whereas the internal operations are around cost orientation and operations.

So where you get executive dashboards for internal consumption off of BI or intelligence for the business, the business units themselves are about visualization, exploration, and understanding and driving new insights.

It's a change in both focus and direction. It sometimes ends up in a conflict between the groups, but it doesn't really have to be that way. At least, we don't think it does. That's something that we try to help people through: how do you get the sanctioned data you need, but also bring in this third-party data and unstructured data and add nuance to what you are seeing about your company?

Gardner: Just as 10 or 15 years ago the problem to solve was the silos of data within the organization, is there any way in traditional technology offerings that allows this dichotomy to be joined now, or do we need a different way in which to create insights, using both that internal and external type of information?

Wolken: There are certainly ways to get to anything. But if you're still amending program after program or technology after technology, you end up with something less than the best path, and there might be new and better ways of doing things.

Agnostic tool chain

There are lots of ways to take a data warehouse forward in today's environment: manipulate other forms of data so they can enter a relational data warehouse, or go the other way and put everything into an unstructured environment. But there's also another way to approach things, and that's with an agnostic tool chain.

Tools have existed in the traditional sense for a long time. Generally, a tool is utilized to hide complexity and all of the issues underneath the tool itself. The tool has intelligence to comprehend all of the challenges below it, but it really abstracts that from the user.

We think that instead of buying three or four database types, a structured database, something that can handle text, a solution that handles semi-structured or structured, or even a high performance analytical engine for that matter, what if the tool chain abstracts much of that complexity? This means the tools that you use every day can comprehend any database type, data structure type, or any vendor changes or nuances between platforms.

That's the strategy we’re pursuing at Dell. We’re defining a set of tools, not the underlying technologies or proliferation of technologies, but the tools themselves, so that the day-to-day operations are hidden from the complexity of those underlying sources of vendor, data type, and location.

That's how we really came at it -- from a tool-chain perspective, as opposed to deploying additional technologies. We’re looking to enable customers to leverage those technologies for a smoother, more efficient, and more effective operation.

Gardner: Am I right then in understanding that this is at more of a meta level, above the underlying technologies, but that, in a sense, makes the whole greater than the sum of the parts of those technologies?

Wolken: That’s a fair way of looking at it. Let's just take data integration as a point. I can sometimes go after certain siloed data integration products. I can go after a data product that goes after cloud resources. I can get a data product that only goes after relational. I can get another data product to extract or load into Hive or Hadoop. But what if I had one that could do all of that? Rather than buying separate ones for the separate use cases, what if you just had one?

Metadata, in one way, is a descriptor language, if I use it in that sense. Can I otherwise just see and describe everything below it, or can I actually manipulate it as well? In that sense, it's a real tool to manipulate and cause effective change in the environment.
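
Here is a hypothetical sketch of that agnostic tool-chain idea: a single data-source interface that hides whether the data lives in a relational warehouse, a Hadoop/Hive cluster, or a SaaS application, so the everyday tooling is written once. The class and method names are illustrative, not Dell product APIs, and the backends are stubbed.

```python
# Illustrative agnostic data-access layer: one interface, many backends (all stubbed).
from abc import ABC, abstractmethod
from typing import Iterable

class DataSource(ABC):
    """What the tool chain sees: one contract, regardless of vendor or data type."""
    @abstractmethod
    def query(self, expression: str) -> Iterable[dict]: ...

class RelationalSource(DataSource):
    def __init__(self, dsn: str): self.dsn = dsn
    def query(self, expression: str) -> Iterable[dict]:
        # Would translate to SQL against the warehouse; stubbed for the sketch.
        return [{"source": "relational", "expr": expression}]

class HiveSource(DataSource):
    def __init__(self, cluster: str): self.cluster = cluster
    def query(self, expression: str) -> Iterable[dict]:
        # Would translate to HiveQL / a Hadoop job; stubbed for the sketch.
        return [{"source": "hive", "expr": expression}]

class SaaSSource(DataSource):
    def __init__(self, endpoint: str): self.endpoint = endpoint
    def query(self, expression: str) -> Iterable[dict]:
        # Would translate to the provider's REST API; stubbed for the sketch.
        return [{"source": "saas", "expr": expression}]

def customer_view(sources: list[DataSource]) -> list[dict]:
    """The everyday tool: asks one question and is unaware of what sits underneath."""
    rows: list[dict] = []
    for src in sources:
        rows.extend(src.query("active_customers last_30_days"))
    return rows

print(customer_view([RelationalSource("dsn://crm"),
                     HiveSource("hadoop-prod"),
                     SaaSSource("https://erp.example.com/api")]))
```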

Gardner: I'd like to go into more of the challenges, but before we do that, what are the stakes here? What do you get if you do this right -- if you can, in fact, manage across various technology types and formats, across relational and unstructured data, and across internal and external data sources and providers?

Are we talking about iterative change or a step change? Or is it something a bit larger -- are there examples of companies that, when they do this well, can really demonstrate something quite unique in terms of a new level of accomplishment?

Institutional knowledge

Wolken: There are a couple of ways we think about it, one of which is institutional knowledge. Previously, if you brought in a new tool into your environment to examine a new database type, you would probably hire a person from the outside, because you needed to find that skill set already in the market in order to make you productive on day one.

Instead of applying somebody who knows the organization, the data, the functions of the business, you would probably hire the new person from the outside. That's generally retooling your organization.

Or, if you switch vendors, that causes a shift as well. One primary vendor stack is probably a knowledge and domain of one of your employees, and if you switch to another vendor stack or require another vendor stack in your environment, you're probably going to have to retool yet again and find new resources. So that's one aspect of human knowledge and intelligence about the business.

There is a value to sharing. It's a lot harder to share across vendor environments and data environments if the tools can't bridge them. In that case, you have to have third-party ways to bridge those gaps between the tools. If you have sharing that occurs natively in the tool, then you don't have to cross that bridge, you don't have the delay, and you don't have the complexity to get there.

So there is a methodology within the way you run the environment and the way employees collaborate that is also accelerated. We also think that training is something that can benefit from this agnostic approach.

But also, generically, if you're using the same tools, then things like master data management (MDM) challenges become more comprehensible, if the tool chain understands where that MDM is coming from, and so on.

You also codify how and where resources are shared. So if you have a person who has to provision data for an analyst, and they are using one tool to reach to relational data, another to reach into another type of data, or a third-party tool to reach into properties and SaaS environments, then you have an ineffective process.

You're reaching across domains and you're not as effective as you would be if you could do that all with one tool chain.

So those are some of the high-level ideas. That's why we think there's value there. If you go back to what existed maybe 10 or 15 years ago, you had one set of staff who used one set of tools against all relational data. It was a construct that worked well then. We just think it needs to be updated to account for the variety and nuance that have come to the fore as the technology has progressed and brought about new types of data and databases.

Gardner: As for business benefits, we hear a lot about businesses being increasingly data driven and information driven, rather than relying on hunch, intuition, or gut instinct. Also, there's an ability to find new customers in much more cost-effective ways, taking advantage of social networks, for example. So when you do this well, what are typically some of the business paybacks, and do they outweigh the costs more than previous investments in data did?

Investment cycles

Wolken: It all depends on how you go about it. There are lots of stories about people who go on long investment cycles into some massive information-management strategy change without feeling that they got anything out of it, or at least that it was productive or paid back the investment.

There's a different strategy that we think can be more effective for organizations, which is to pursue smaller, bite-size chunks of objective action that you know will deliver some concrete benefit to the company. So rather than doing large schemes, start with smaller projects and pursue them one at a time incrementally -- projects that last a week and then you have 52 projects that you know derive a certain value in a given time period.

Other things we encourage organizations to do deal directly with how you can use data to increase competitiveness. For starters, can you see nuances in the data? Is there a tool that gives you the capability to see something you couldn't see before? So that's more of an analytical or discovery capability.

There's also a capability to just manage a given data type. If I can see the data, I can take advantage of it. If I can operate that way, I can take advantage of it.

Another thing to think about is what I would call a feedback mechanism, or the time or duration of observation to action. In this case, I'll talk about social sentiment for a moment. If you can create systems that can listen to how your brand is being talked about, how your product is being talked about in the environment of social commentary, then the feedback that you're getting can occur in real time, as the comments are being posted.

Now, you might think you'll get that anyway. I would have gotten a letter from a customer two weeks from now in the postal system that provided me that same feedback. That's true, but sometimes those two weeks can make a real difference.

Imagine a marketing campaign that's currently running in the East, with a companion program in the West that's slightly different. Let's say it's a two-week program. It would be nice if, during the first week, you could be listening to social media and find out that the campaign in the West is not performing as well as the one in the East, and then change your investment thesis around the program -- cancel the one that's not performing well and double down on the one that's performing well.

So there's a feedback mechanism that also benefits from handling data in a modern way, or using more modern resources to get that feedback. When I say modern resources, generally that's pointing toward unstructured data types or textual data types. Again, if you can comprehend and understand those within your overall information management strategy, you now also have a feedback mechanism that should increase your responsiveness and therefore make your business more competitive as well.
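
A minimal, hypothetical sketch of that feedback loop: score incoming social mentions for two regional campaign variants as they arrive, and surface the underperformer early enough to shift spend mid-campaign. The keyword scoring below is a toy stand-in for a real sentiment model, and the campaign names and mentions are invented.

```python
# Toy illustration of mid-campaign feedback from social mentions (not a real sentiment model).
from collections import defaultdict

POSITIVE = {"love", "great", "awesome", "recommend"}
NEGATIVE = {"hate", "broken", "slow", "cancel"}

def score(mention: str) -> int:
    words = set(mention.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def campaign_pulse(stream):
    """stream yields (campaign, mention) pairs as they are posted."""
    totals, counts = defaultdict(int), defaultdict(int)
    for campaign, mention in stream:
        totals[campaign] += score(mention)
        counts[campaign] += 1
    return {c: totals[c] / counts[c] for c in totals}

mentions = [
    ("east", "love the new plan, great price"),
    ("east", "would recommend to friends"),
    ("west", "signup flow is broken and slow"),
    ("west", "thinking about whether to cancel"),
]

pulse = campaign_pulse(mentions)
print(pulse)
if pulse["west"] < pulse["east"]:
    print("Week-one signal: shift spend from the West campaign to the East campaign.")
```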

Gardner: I think the whole concept of immediacy of feedback, applied across various aspects of business -- planning, production, marketing, go-to-market, research, and usage -- has been the Holy Grail of business for a long time. It's just been very difficult to do. Now, we seem to be getting closer to the ability to do it at scale and at reasonable cost. So these are very interesting times.

Now, given that these payoffs could be so substantial, what's preventing people from getting to this Holy Grail? What's between them and the realization?

It's the complexity

Wolken: I think it's the complexity of the environment. If you only had relational systems inside your company previously, now you have to go out and understand all of the various systems you can buy, qualify those systems, get peer feedback, have some proofs of concept (POCs) in development, come in and set all these systems up, and that just takes a bit of time. So the more complexity you invite into your environment, the more challenges you have to deal with.

After that, you have to operate and run it every day. That's the part where we think the tool chain can help. But as far as understanding the environment, having someone who can help you walk through the choices and solutions and come up with one that is best suited to your needs, that’s where we think we can come in as a vendor and add lots of value.

When we go in as a vendor, we look at the customer environment as it was, compare that to what it is today, and work to figure out where the best areas of collaboration can be, where tools can add the most value, and then figure out how and where can we add the most benefit to the user.

What systems are effective? What systems collaborate well? That's something that we've tried to emulate, at least in the tool space. How do you get to an answer? How do you drive there? Those are the questions we're focused on helping customers answer.

For example, if you've never had a data warehouse before and you're at that stage, then creating your first one is kind of daunting, both from a price perspective and from a complexity or know-how perspective. The same thing can occur with really any aspect -- textual data, unstructured data, or social sentiment.
Those are some of the major challenges -- complexity, cost, knowledge, and know-how.

Each one of those can appear daunting if you don't have the skill set, or don't have somebody walking you through the process who has done it before. Otherwise, it's trying to put your hands on every bit of data, consuming what you can, and learning through that process.

Those are some of the things that are really challenging, especially if you're a smaller firm with limited staff, and there's new demand from the line of business because they want to go off in a different direction and gain understanding they couldn't get out of existing systems.

How do you go out and attain that knowledge without duplicating the team, finding new vendor tools, adding complexity to your environment, and maybe even adding additional data sources, and therefore more data-storage requirements? Those are some of the major challenges -- complexity, cost, knowledge, and know-how.

Gardner: It's interesting that you mentioned mid-market organizations. Some of these infrastructure and data investments were perhaps completely out of their reach until new ways to approach the problems emerged -- through the tool chain, through cloud, and through other services and on-demand offerings.

What is it now about the new approach to these problems that you think allows the fruits of this to be distributed more down market? Why are mid-market organizations now more able to avail themselves of some of these values and benefits than in the past?

Mid-market skills

Wolken: As the products become well-known, there is more trained staff that understands the more common technologies. There are more codified ways of doing things that a business can take advantage of, because there's a large skill set, and most employees may already have that skill set as you bring them into the company.

There are also some advantages just in the way technologies have advanced over the years. Storage used to be very expensive, and then it got a little cheaper. Then solid-state drives (SSDs) came along, and those have gotten cheaper as well. There are some price-point advantages in the coming years, as well.

Dell overall has maintained the approach we started with when Michael Dell began building PCs in his dorm room from standard components to bring the price down. That model of making technology attainable to larger numbers of people has continued throughout Dell's history, and we're continuing it now with our information management software business.

We're constantly thinking about how we can reduce cost and complexity for our customers. One example would be what we call Quickstart Data Warehouse. It was designed to democratize data warehousing -- to bring the price and complexity down to a much lower point, so that more people can afford to have their first data warehouse.

We worked with our partner Microsoft, as well as Dell's own engineering team, and we qualified the box, the hardware, and the systems to work at peak performance. Then we scripted an upfront install mechanism that allows the process to be up and running in 45 minutes with little more than entering a couple of IP addresses. You plug the box in, and it comes up in 45 minutes, without you having to know how to stand up, integrate, and qualify hardware and software together for an outcome we call a data warehouse.
We're trying to hit all of the steps, and the associated costs -- time and/or personnel costs -- and remove them as much as we can.

Another thing we did was include Boomi, a connector that automatically goes out and connects to the data sources that you have. It's the mechanism by which you bring data into the warehouse. And lastly, we included services, in case there were any other questions or problems in setting it up.

If you have a limited staff, and you have to go out and qualify new resources and things you don't understand, and then set them up and then actually run them, that's a major challenge. We're trying to hit all of the steps, and the associated costs -- time and/or personnel costs -- and remove them as much as we can.

It's one way vendors like Dell are moving to democratize business intelligence a little further -- bringing it to a lower price point than customers are accustomed to and making it more available to firms that either didn't have the luxury of that expertise sitting around the office, or found that the price point was a little too high.

Gardner: You mentioned this concept of the tool chain several times. I'd like to hear a bit more about why that approach works, and even more detail about what I understand to be important elements of it -- being agnostic to the data type, holistic management, a complete view, and, of course, integration.

In addition to the package, it sounds from your earlier comments that you want to be able to approach these daunting issues iteratively, so that you can bite off certain chunks. What is it about the tool chain that delivers comprehensive value, but also allows it to be adopted along a fairly manageable path, rather than all at once?

Wolken: One of the things we find advantageous about entering the market at this point in time is that we're able to look at history, observe how other people have done things over time, and then invest in the market with the realization that maybe something has changed here and maybe a new approach is needed.

Different point of view

Whereas the industry has typically gone down the path where each new technology, or advancement of a technology, requires a new tool, a new product, or a new technology solution, we've been able to stand back and see the need for a different approach. We just have a different point of view, which is that an agnostic tool chain can enable organizations to do more.

So when we look at database tools, as an example, we would want a tool that works against all database types, as opposed to one that works against only a single vendor or type of data.
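
To illustrate what an agnostic tool looks like in practice, here is a minimal Python sketch using the open-source SQLAlchemy library -- not Dell's own tooling -- where the same query code runs against different database engines simply by swapping the connection URL. The URLs and the "customers" table are hypothetical.

```python
# Sketch of a database-agnostic access layer: the same query code runs
# against different engines by swapping the connection URL. Uses the
# open-source SQLAlchemy library; the URLs and the "customers" table
# are hypothetical.

from sqlalchemy import create_engine, text

def count_customers(db_url):
    """Run the same aggregate query regardless of the underlying database."""
    engine = create_engine(db_url)
    with engine.connect() as conn:
        return conn.execute(text("SELECT COUNT(*) FROM customers")).scalar()

# The same function would work against SQLite, PostgreSQL, SQL Server, etc.
# print(count_customers("sqlite:///local_copy.db"))
# print(count_customers("postgresql://analyst@warehouse-host/sales"))
# print(count_customers("mssql+pyodbc://analyst@crm-dsn"))
```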

The other thing that we look at is, if you walk into an average company today, there are already a lot of things lying around the business. A lot of investment has already been made.

We wanted to be able to snap in and work with all of the existing tools. So, each of the tools that we’ve acquired, or have created inside the company, were made to step into an existing environment, recognize that there were other products already in the environment, and recognize that they probably came from a different vendor or work on a different data type.

That’s core to our strategy. We recognize that people were already facing complexity before we even came into the picture, so we’re focused on figuring out how we snap into what they already have in place, as opposed to a rip-and-replace strategy or a platform strategy that requires all of the components to be replaced or removed in order for the new platform to take its place.
We’ve also assembled a tool chain in which the entirety of the chain delivers value as a whole.

What that means is tools should be agnostic, and they should be able to snap into an environment and work with other tools. Each one of the products in the tool chain we’ve assembled was designed from that point of view.

But beyond that, we've also assembled a tool chain in which the entirety of the chain delivers value as a whole. We think that at every point where you have agnosticism, or a tool that can abstract away a layer of complexity, you have savings.

You have a benefit, whether it’s cost savings, employee productivity, or efficiency, or the ability to keep sanctioned data and a set of tools and systems that comprehend it. The idea being that the entirety of the tool chain provides you with advantages above and beyond what the individual components bring.

Now, we're perfectly happy to help a customer at any point where they have difficulty and any point where our tools can help them, whether it's at the hardware layer, in the traditional Dell way, at the application layer, considering a data warehouse or otherwise, or at the tool layer. But we feel that as more and more of the portfolio -- the tool chain -- is consumed, more and more efficiency is enabled.

Gardner: It sounds as if, rather than looking at the ecosystem that's in place in an organization as a detriment, you're trying to make it into an asset, and then even looking further to new products that can be brought in. So I guess partnering becomes important.

Already-made investment

Wolken: Everything is an already-made investment in the company. If the premise from the get-go is to rip and replace, then you're really removing the institutional knowledge, the training of the staff, and the investment in the product, not to mention the integration work. That's not something we wanted to start out with. We wanted to recognize and leverage what was there and provide value to that existing environment.

One of the core values that we were looking at from a design point of view is how you fit into an environment and add value to it, not how you cause the replacement or destruction of an existing environment in order to provide benefit.

Gardner: We have been talking about the tool chain in terms of its value for analytics and intelligence about the business and bringing in more types of data and information from external sources.

It also sounds to me as if this sets you up for lifecycle benefits, not just on the business side, but also on the IT side, for things like better backup and recovery, a better disaster recovery strategy, and perhaps more storage efficiency. Is there an intramural benefit from the IT side to doing this in the fashion you've been describing as well?

Wolken: We looked at the strategy and said, if you manage this as a data lifecycle -- and that's really how we think about it -- then where does data first show up in a company? Most likely, that's inside of a database on the back side of an application.
Doing that, you also solve the problem of how to make sure that the data that was provisioned was sanctioned.

And where is it last used inside of a company? That would generally be just before retirement or long-term retention of the data. Then the question becomes how do you manipulate and otherwise utilize the data for the maximum benefit in the middle?

When we looked at that, one of the problems that you uncover is that there's a lot of data being replicated in a lot of places. One of the advantages that we've put together in the tool chain was to use virtualization as a capability, because you know where data came from and you know that it was sanctioned data. There's no reason to replicate that to disk in another location in the company, if you can just reach into that data source and pull that forward for a data analyst to utilize.

You can virtually represent that data to the user without creating a new repository for that person, so you're saving on storage and replication costs. So if you're looking at where there is efficiency in the lifecycle of data and how you can cut some of those costs, that's something that jumps right out.

Doing that, you also solve the problem of how to make sure that the data that was provisioned was sanctioned. By doing all of these things, by creating a virtual view, then providing that view back to the analyst, you're really solving multiple pieces of the puzzle at the same time. It really enables you to look at it from an information-management point of view.
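
The following is a toy Python sketch of that data-virtualization idea -- exposing a query over the sanctioned source rather than copying the data into a new repository. It is only an illustration of the concept, not Dell's implementation; real data-virtualization products add caching, federation, and security, and the connection URL and table here are hypothetical.

```python
# Toy sketch of the data-virtualization idea: a "virtual view" holds only
# the query and runs it against the sanctioned source on demand, so no
# replicated repository is created. The source URL and query are
# hypothetical; real products add caching, federation, and security.

from sqlalchemy import create_engine, text

class VirtualView:
    """Lazily exposes a query over a sanctioned source -- no local copy is made."""

    def __init__(self, source_url, query):
        self._engine = create_engine(source_url)
        self._query = text(query)

    def rows(self):
        # Data is fetched straight from the source at read time, so there
        # is nothing extra to store, replicate, or keep in sync.
        with self._engine.connect() as conn:
            for row in conn.execute(self._query):
                yield row

# The analyst works with the view as if it were a local dataset.
orders_by_region = VirtualView(
    "postgresql://analyst@erp-host/orders",  # hypothetical source system
    "SELECT region, SUM(amount) AS total FROM orders GROUP BY region",
)
# for region, total in orders_by_region.rows():
#     print(region, total)
```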

Gardner: That's interesting, because you can not only get better business outcome benefits and analytics benefits, but you can simplify and reduce your total cost of ownership from the IT perspective. That's kind of another Holy Grail out there, to be able to do more with less.

One of the advantages

Wolken: That's what we think one of the advantages can be. And certainly, when you're able to stand on the shoulders of people who have come before you and look at how the environment has changed, you can notice some of these changes and bring them forward. That's what we want to do with IT as partners and with the solutions that we bring forward.

Gardner: How should enterprises and mid-market firms get started? Are there some proven initiation points, methods, or cultural considerations when one wants to move from those traditional siloed platforms toward this integrated, comprehensive tool-chain approach?

Wolken: There are different ways you can think about it. Generally, most companies aren’t just out there asking how they can get a new tool chain. That's not really the strategy most people are thinking about. What they are asking is how do I get to the next stage of being an intelligent company? How do I improve my maturity in business intelligence? How would I get from Excel spreadsheets without a data warehouse to a data warehouse and centralized intelligence or sanctioned data?

Each one of these challenges comes from a point of view of, how do I improve my environment based on the goals and needs that I'm facing? How do I grow up as a company and become more of a data-based company?

Somebody else might be faced with more specific challenges, such as a line of business now asking for Twitter data, and we have no systems or comprehension to understand that. That's really the point where you ask, what's going to be my strategy as I grow and otherwise improve my business intelligence environment -- which is morphing every year for most customers?
It's about incremental improvement as well as tangible improvement for each and every step of the information management process.

That's the way most people start -- with an existing problem and an objective or a goal inside the company. Generically, over time, the approach to answering it has been to buy a new technology from a new vendor, which creates a new silo, and to build a new data mart or data warehouse. But this perpetuates the idea that technology will solve the problem. You end up with more technologies, more vendor tools, more staff, and more replicated data. We think this approach has become dated and inefficient.

But if, as an organization, you can comprehend that maybe there is some complexity that can be removed, while you're making an investment, then you free yourself to start thinking about how you can build a new architecture along the way. It's about incremental improvement as well as tangible improvement for each and every step of the information management process.

So rather than asking somebody to re-architect and rip and replace their tool chain or the way they manage the information lifecycle, I would say you sort of lean into it in a way.

If you're really after a performance metric and you feel like there is a performance issue in an environment, at Dell we have a number of resources that actually benchmark and understand the performance and where bottlenecks are in systems.

So we can look at either application performance management issues, where we understand the application layer, or we have a very deep and qualified set of systems around database and data warehouse performance to understand where bottlenecks are, either in the SQL itself or elsewhere. There are a number of tools that we have to help identify where a bottleneck or issue might be from a pure performance perspective as well.
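
As a generic illustration of that kind of bottleneck hunting -- not Dell's benchmarking tools -- the Python sketch below times a query and, assuming a PostgreSQL source, asks the database for its execution plan so you can see whether the cost sits in the SQL, the indexes, or elsewhere. The connection URL and query are hypothetical.

```python
# Generic sketch of hunting for a query bottleneck -- not Dell's tooling.
# Times a query, then (assuming a PostgreSQL source) asks the database
# for its execution plan. The connection URL and SQL are hypothetical.

import time
from sqlalchemy import create_engine, text

def profile_query(db_url, sql):
    engine = create_engine(db_url)
    with engine.connect() as conn:
        start = time.perf_counter()
        conn.execute(text(sql)).fetchall()
        elapsed = time.perf_counter() - start

        # EXPLAIN ANALYZE is PostgreSQL syntax; other engines have their
        # own equivalents (e.g. EXPLAIN PLAN FOR on Oracle).
        plan = conn.execute(text("EXPLAIN ANALYZE " + sql)).fetchall()

    print(f"Elapsed: {elapsed:.3f}s")
    for line in plan:
        print(line[0])

# profile_query(
#     "postgresql://analyst@warehouse-host/sales",
#     "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id",
# )
```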

Strategic position

Gardner: That might be a really good place to start -- just to learn where your performance issues are and then stake out your strategic position based on a payback for improving on your current infrastructure, but then setting the stage for new capabilities altogether.

Wolken: Sometimes there’s an issue occurring inside the database environment. Sometimes it's at the integration layer, because integration isn’t happening as well as you think. Sometimes it's at the data warehouse layer, because of the way the data model was set up. Whatever the case, we think there is value in understanding the earlier parts of the chain, because if they’re not performing well, the latter parts of the chain can’t perform either.

And so at each step, we've looked at how you ensure the performance of the data. How do you ensure the performance of the integration environment? How do you ensure the performance of the data warehouse as well? We think that if each component of the tool chain is working as well as it should be, that's when you enable the entirety of your solution implementation to truly deliver value.
At each step, we've looked at how you ensure the performance of the data.

Gardner: Great. I'm afraid we'll have to leave it there. We're about out of time. You've been listening to a sponsored BriefingsDirect podcast discussion on better understanding the challenges businesses need to solve when it comes to improved data and information management.

And we've seen how organizations not only need to manage all of the internal data that provides intelligence about their businesses, but also, increasingly, the reams of external data that enable them to take on whole new business activities, like discovering additional customers and driving new and additional revenue.

And we've learned more about how new levels of automation and precision can be applied to the task of solving data complexity, and how that can be done through a tool chain that is agnostic and capable.

I want to thank our guest. We have been here with Matt Wolken, Executive Director and General Manager for Information Management Software at Dell Software. Thanks so much, Matt.

Wolken: Thank you so much as well.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to our audience for joining us, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Dell Software.

Transcript of a BriefingsDirect podcast on how Dell Software is working with companies to manage internal and external data in all its forms. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.
