
Tuesday, August 28, 2012

Why Success Greets NYSE Euronext's Community Platform for Capital Markets Cloud

Transcript of a BriefingsDirect podcast from the 2012 VMworld Conference focusing on applying the cloud model to providing a range of services to the financial industry.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the 2012 VMworld Conference in San Francisco. We're here the week of August 27 to explore the latest in cloud computing and software-defined datacenter infrastructure developments.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of VMware sponsored BriefingsDirect discussions.

It has been a full year since we first spoke to NYSE Euronext at the last VMworld Conference. We heard then about their Capital Markets Community Platform of vertical industry services cloud targeting the needs of Wall Street IT leaders.

As an early adopter of innovative cloud delivery and a groundbreaking cloud business model, we decided to go back and see how things have progressed at NYSE. We will learn now, a year on, how NYSE's specialized cloud offerings have matured, how the business of the financial services industry has received them, and explore how providing cloud services as a business has evolved.

We're joined by Feargal O'Sullivan, the Global Head of Alliances at NYSE Technologies. Welcome to BriefingsDirect, Feargal. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Feargal O'Sullivan: Thank you very much, Dana. Nice to be here.

Gardner: Tell me how it's going. The Capital Markets Community Platform, as we discussed, is a set of cloud services that you're providing to other IT organizations to help them better support their companies and their customers. How have things progressed over the past year?

O'Sullivan: We've been very happy with the progress we've made over the past year. When we announced at VMworld last year, we had just gone into early access for our first clients in our data center in the New York, New Jersey, Connecticut tri-state area, where we have all of our US-based markets running the New York Stock Exchange Markets, the Arca Electronic Markets, and AMEX.

That has since gone into production, has a number of clients on it, is being received very well by the community, and is really serving as a linchpin of our strategy of building a global capital markets community.

Since the success of that, we've actually progressed further, to the point of having deployed the same environment in a second data center that we own and run just outside of London, in a town called Basildon, which is where we run all of our European markets, the Euronext side of NYSE Euronext.

We now have an equivalent VMware-based cloud environment and a range of ancillary services for the capital markets industry available in that location. Clients can now access, as a service, both infrastructure and platform capabilities in both of those facilities.

Furthermore, we've extended to two other financial centers in the world, one in Toronto and one in Tokyo. That's a slightly more stripped-down version of the community platform, but it's very useful for clients who are expanding their business and going global.

Four locations

Now, we have those four locations up and running in production with production clients, so we are very happy with that progress.

Gardner: That's very impressive growth. In order to move this set of capabilities across these different geographies, and into the data centers that you have created or acquired there, has the whole software-defined datacenter model helped? I would think that in the older days -- 10 or 15 years ago, with individually supported applications on individual stacks of hardware and storage -- that would have been a far more difficult expansion project.

So what is it about the way that we're doing things now in the modern data center that's allowed you to build out so quickly?

O'Sullivan: Clearly, the technology has advanced significantly from the old days. The capability around virtualization at the hardware server level with the VMware hypervisors, and in particular the vCloud service, gives clients their own control over their environment.

Also on the networking side, it's become much more viable for clients to deploy into a shared environment while still maintaining confidence that they're going to get both the security profile that they're looking for and the performance capability.

We use the EMC VNX array with the FAST Cache capability to give a very stable performance profile based on demand. It allows different workloads, and yet each gets very good performance and response time. So there are many components along the way. Also, management and monitoring of these types of infrastructures have improved.

Our clients have certainly seen that enhancement in the technology. The financial services industry is unique in the way it leverages technology on two aspects.

One, the security profile is absolutely critical. Security isn't just around customer data, but around application development and tools of the trade: intellectual property that firms might have, trading strategies, different analysis, analytics, and other types of components that they develop and build. These are highly proprietary in nature, and firms don't want to allow anybody to get access to them. So they place security extremely high on the list.

The other unique aspect is performance. It's a slightly different performance model from your typical three-tier web store type of environment. Financial services firms, first of all, push very high volumes of content through their applications. They need to do so in microseconds, or at least milliseconds, of response time and latency, and, most importantly, they need to do so predictably.

With a big batch job of some kind, say a genetic-folding job, you drop off a job, go away for 12 hours, and come back. A little inefficient processing time is not great, because it drags out the whole thing, but there is no critical "need it here," "need it now" requirement. So latency spikes are less of a problem.

Latency spikes

But in our industry, latency spikes are a real problem. People look for predictable latency, so we had to make sure that we applied a very tight security profile to our cloud, and a very high performance profile as well.
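
To make that point concrete, here is a minimal, purely illustrative Python sketch of how averages can hide the latency spikes O'Sullivan describes. The numbers are synthetic, not NYSE measurements: the mean and median look healthy, but the 99th percentile and maximum reveal the spikes that a trading workload cannot tolerate.

```python
# Illustrative sketch: why predictable latency matters more than average latency.
# The figures below are invented, not NYSE measurements.
import random
import statistics

random.seed(42)

# Simulate per-message processing latencies in microseconds: mostly ~200 us,
# with occasional large spikes (for example, from contention on shared resources).
latencies_us = [random.gauss(200, 20) for _ in range(10_000)]
for i in range(0, len(latencies_us), 500):      # inject rare spikes
    latencies_us[i] += random.uniform(2_000, 10_000)

latencies_us.sort()
mean = statistics.mean(latencies_us)
p50 = latencies_us[int(0.50 * len(latencies_us))]
p99 = latencies_us[int(0.99 * len(latencies_us))]
worst = latencies_us[-1]

print(f"mean={mean:.0f}us  p50={p50:.0f}us  p99={p99:.0f}us  max={worst:.0f}us")
# The mean and median look fine; the p99 and max expose the spikes that a
# trading workload cannot tolerate, hence the emphasis on predictability.
```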

Gardner: So as you've expanded across different market regions and brought this into more of your portfolio for more of your customers, have you also increased the services? Last time, we talked about some services that were very impressive, but how have you been able to build on this cloud in terms of those value-added services that you deliver specifically to a financial clientele?

O'Sullivan: That's why we built our cloud. There are many service providers who offer very valuable cloud capabilities based on core infrastructure and core computing, and they do so very well. However, we consider ourselves a vertical industry community. We're specifically focused on capital markets participants. We try to make it cheaper, more cost-effective, and more readily accessible for a wider range of participants to get access to the markets.

So in our cloud and our community, we provide a range of platforms and services that we have added. The core is "Come into our vCloud Director environment and access your compute infrastructure." By the way, we have a Compute On Demand Virtual Edition, and we also have a Compute On Demand Physical Edition for those cases where that latency issue is of the utmost importance.

Then, we provide clients with the value-added features that we know they need, because they're in the capital markets business. The key one is market data. This is something that is absolutely critical in financial services, because every trade, no matter what you are buying or selling, always starts with a quote. Even if you walk into a shop and ask how much it would be for a can of soda, they say it's $1 or $1.20, whatever it is, and then you decide if you want to buy.

So in the financial services industry, market data is the starting point, the driver of all the business. And the volumes on this, the sheer size of the content that comes down, are staggering. It's at the point now that even if you were just to subscribe to all North American equities and options, you'd need a 10-gigabit Ethernet pipe, and at points during the day, you're probably using upwards of 8 gigabits of that pipe just to get all that content.

Obviously, we can provide raw content, but we've added a range of services into our cloud and into the community. We can say, "We can offer you a nice filtered market data feed, where you just present us with the list of instruments you want, and we can add value-added calculations, do analytics, and provide that to you."

We've also developed an historical market-data access service. So if you want to go back and test your strategies against previous days of trading, back for many, many years, we have a database that's deployed in the cloud. So you can query the database, load it into your virtual environment, and analyze and back-test your strategies.
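
As a rough illustration of the kind of back-test such a service enables, the sketch below loads a toy historical tick store and tests a trivial moving-average rule against it. The database, schema, and strategy are hypothetical stand-ins, not NYSE Technologies' actual service or API.

```python
# Hypothetical sketch of a back-test against a hosted historical market-data store.
# The table, schema, and strategy are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")          # stand-in for the hosted history database
conn.execute("CREATE TABLE ticks (symbol TEXT, ts TEXT, price REAL)")
conn.executemany(
    "INSERT INTO ticks VALUES (?, ?, ?)",
    [("ABC", f"2012-08-0{d}", 100.0 + d) for d in range(1, 8)],
)

# Query a date range for one instrument, then test a trivial strategy:
# signal a buy when the price crosses above its 3-day moving average.
rows = conn.execute(
    "SELECT ts, price FROM ticks WHERE symbol = ? ORDER BY ts", ("ABC",)
).fetchall()

window, signals = [], []
for ts, price in rows:
    window.append(price)
    if len(window) > 3:
        window.pop(0)
    avg = sum(window) / len(window)
    if len(window) == 3 and price > avg:
        signals.append((ts, price))

print(signals)   # dates on which the toy strategy would have generated a buy
```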

We've added order-routing capabilities, so when you are ready to send your orders to the market, if you are a market maker yourself, you might go direct to our gateway. If you're a sponsored participant, you might go through our risk-managed gateway, which would be sponsored by a broker.

Or if you're just a regular buy-side firm, a money manager, you might use our routing network and ask us to route your orders to the different brokers or the different markets, and we can handle that. Those are the two ends of the trade.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Integration pieces

On Thursday, Aug. 30, I'm going to be presenting with VMware and EMC in one of the breakout sessions about us moving up the stack to start offering more of the integration pieces of this. We're using the Spring environment and a range of other VMware tools, GemFire, and so on, to demonstrate a full trading system deployed in the virtual environment with the integration tools -- all running hosted in our environment.

It's more of a framework that we're showing, but it provides platform as a service (PaaS), not just the market data in, which is our specialty, and the order routing out. Once you're within your environment, the range of additional tools makes it easy for you to develop and customize your own trading tools and your own trading strategies. That's something I will be talking about on Thursday.

Gardner: That's very interesting. It appears that what you've done here with your intermediary cloud is develop fit-for-purpose value for such things as data services. Then, you've applied that to other value services like order services and now even integration services.

I think it's a harbinger of what we should expect in many other industries. Rather than a fire hose of either services or data, picking and choosing and letting an intermediary like yourselves provide that with the value-add, seems to be more efficient and valuable.

Looking at this as a value proposition, how has this been going as a business? Have you been enjoying uptake? I know you can't go into too much detail, but has the reception in the market satisfied your initial hopes for this as a business, as a profit-and-loss center?

O'Sullivan: The good news is that we've definitely had great progress here. We have a number of clients in all of the locations I mentioned. We're continuing to grow. It's a tough environment, as you can imagine, both just in the general economy and in particular in the financial services industry. So we expect to continue to grow this significantly further.

We have been certainly very happy with the uptake so far. We knew that we were going out well ahead of everybody else and we were very keen to do so, because we see and understand the vision that VMware and EMC in particular have been promoting over the past few years. We agree with it fully. We feel like we're uniquely positioned within the capital markets industry as the neutral party.

Remember, we're just a place where people go to trade. We don't decide what you buy or what you sell or how much it should be. We just provide the facility, the rules, and the oversight to ensure an orderly market. We wanted to make it easier and more cost-effective for firms to get access to that environment.

So by providing all of this capability, we think we're in a fantastic position as more and more firms continue to explore virtualization and the outsourcing of non-business-critical functions, things that for a while used to run on their own servers but which are now nothing but overhead.

We see them moving more and more into the cloud. We expect that over the next two or three years this is really going to explode. We intend to be there, established, fully in production, tried and tested, and leading the industry from the front, as we think we should be with a name like the New York Stock Exchange.

Well-known brand

That’s a brand that's so well-known globally. It's the best place to trade. It's the most reliable and most secure place to trade stocks, with the best oversight, and we want to apply that model to all of the services that we offer our clients.

Gardner: Let's drill down a little into the notion of being able to add on these services, whether it's integration, orders, or data services. Is there something particular about the architecture that you've adopted that allows you to progress into these newer areas, maybe even in the future delivering feeds through a different format, satisfying needs around mobile devices, say HTML5?

I'm not focused so much on the application that you will be pursuing, but the ability to pursue more applications without necessarily a whole lot of additional infrastructure investment. How does that work?

O'Sullivan: The key for us was that we developed and built our own data center, which we operate and manage. It's a unique environment in Mahwah, New Jersey. We also built and developed our own in Basildon, just outside London. Those two facilities were built as Tier 4-grade data centers, to the highest standards of reliability and security. Every time I go there, I'm amazed at the attention to detail that our engineers put into designing them to handle all sorts of occurrences.

The reason is that there is so much content created in these facilities. Traders gravitate towards liquidity, and we're a source of liquidity. We're probably the single biggest equity and options venue in North America, so traders are attracted to be there.

Given the electronic nature of the market, forgetting about high frequency trading, everything is electronic. So rather than take applications and deploy them in Timbuktu or wherever you choose to deploy your application, somewhere away from this facility and pay the expense of wide area network connections and so on, it makes more sense to deploy your applications close to the content that you care about.

If there are 8-gigabit bursts of market data on the network, why would you try to bring that 50 miles away to your own office? Why not take the applications that process that data and deploy them there? With that sort of thought process in mind, we continue to build out a range of value-added services that we think clients will require.

We're also well aware that our main purpose in life is to be this neutral venue that creates markets and allows people to come and trade. So we're never going to be the best person, the best firm, or the best vendor at developing every possible requirement that every particular capital markets participant might need. That's where our Global Alliance Program comes in.

I've been focused on our partnerships, ensuring that, as clients deploy into the cloud, they can get market data, routing, risk management, back-office processing, and historical analysis. They also need different types of analytics, and they might need other services like email archiving and storage. They need to comply with regulation, and so they need regulatory reporting services.

Not generic

There is such a wide range of capabilities required that are very specific. They're not generic. You're not going to go to some telco provider's cloud and find all these firms that can offer you all these services there. There need to be enough potential clients before a vendor is going to want to deploy their applications in this environment.

So we're building this community. We're basically saying that we have over 2,000 firms connected to our network, and hundreds in our data centers. We have a wide range of vendors, and we're continually working to add more, so that they can offer services to those firms.

You can use our infrastructure, our cloud, and some of the integration capability that we've developed, both ourselves and through our relationships with vendors like VMware and EMC, to add on these capabilities that the firms are going to need and make a one-stop shop, a community, a place where you can go to get all the applications needed, similar to the app store model.

Gardner: You've defined what we should expect from public-cloud services. There is some thinking in the marketplace that there will be two or three public cloud providers and everyone will go there, but I really think you've defined it differently: a community close to its customers, recognizing that the architecture, the association with data, and the integration are essential. Then, that value-add for applications and services on top means an ecosystem of cloud providers, not just a handful. So I really think you've painted the picture of the true future of cloud.

O'Sullivan: Thank you. We certainly see it that way. Our clients have taken us up on it already. While we still think it's early days, we're confident that we're going in the right direction, and that this will definitely, definitely take off in a big way, and within five years we will be looking back at how quaint this conversation was.

Gardner: I really enjoyed speaking with you, Feargal. We have been talking about the success of specialized vertical industry cloud delivery models and how they are changing the IT game in such mission critical industries as financial services.

I would like to thank our guest, Feargal O'Sullivan, the Global Head of Alliances at NYSE Technologies. Thank you, sir.

O'Sullivan: Thank you very much, Dana. I really appreciate the time to speak with you.

Gardner: And I also thank our audience for joining this special podcast coming to you from the 2012 VMworld Conference in San Francisco. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of podcast discussions. Thanks again for listening and come back next time.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast from the 2012 VMworld Conference focusing on applying the cloud model to providing a range of services to the financial industry. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Friday, December 16, 2011

Stone Bond's Metadata Virtualization and Orchestration Improves Enterprise Data Integration Response Time and ROI

Transcript of a BriefingsDirect podcast on how businesses can better manage and exploit their exploding data via new technologies that provide meta-data-based data integration and management.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Stone Bond Technologies.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today we present a sponsored podcast discussion on the need to make sense of the deluge and complexity of the data and information that is swirling in and around modern enterprises. Most large organizations today are able to identify, classify, and exploit only a small portion of the total data and information within their systems and processes.

Perhaps half of those enterprises actually have a strategy for improving on this fact. But business leaders are now recognizing that managing and exploiting information is a core business competency that will increasingly determine their overall success. That means broader solutions to data distress are being called for.

We'll now look at how metadata-driven data virtualization and improved orchestration can help provide the inclusivity and scale to accomplish far better data management. Such access then leads to improved integration of all information into an approachable resource for actionable business activities.

With us now to help better understand these issues -- and the market for solutions to these problems -- is our first guest, Noel Yuhanna, Principal Analyst at Forrester Research. Welcome to BriefingsDirect, Noel.

Noel Yuhanna: Thanks.

Gardner: We're also here with Todd Brinegar, Senior Vice President for Sales and Marketing at Stone Bond Technologies. Welcome, Todd. [Disclosure: Stone Bond is a sponsor of BriefingsDirect podcasts.]

Todd Brinegar: Dana, how are you? Noel, great to hear you, too.

Gardner: Welcome to you both. Let me start with you, Noel. It's been said often, but it’s still hard to overstate, that the size and rate of growth of data and information is just overwhelming the business world. Why should we be concerned about this? It's been going on for a while. Why is it at a critical stage now to change how we're addressing these issues?

Yuhanna: Well, data has been growing significantly over the last few years because of different application deployments, different devices, such as mobile devices, and different environments, such as globalization. These are obviously creating a bigger need for integration.

We have customers who have 55,000 databases, and they plan to double this in the next three to four years. Imagine trying to manage 55,000 databases. It's a nightmare. In fact, they don't even know what the actual count is.

Then, they're dealing with unstructured data, which is more than 75 percent of the data. It’s a huge challenge trying to manage this unstructured data. Forget about the intrusions and the hackers trying to break in. You can’t even manage that data.

Then, obviously, we have challenges of heterogeneous data sources, structured, unstructured, semi-structured. Then, we have different database types, and then, data is obviously duplicated quite a lot as well. These are definitely bigger challenges than we've ever seen.

Different data sources

Gardner: We're not just dealing with an increase in data, but we have all these different data sources. We're still dealing with mainframes. We're still adding on new types of data from mobile devices and sensors. It has become overwhelming.

I hear many times people talking about big data, and that big data is one of the top trends in IT. It seems to me that you can’t just deal with big data. You have to deal with the right data. It's about picking and choosing the correct data that will bring value to the process, to the analysis, or whatever it is you're trying to accomplish.

So Noel, again, to you, what’s the difference between big data and right data?

Yuhanna: It's like GIGO, Garbage In, Garbage Out. A lot of times, organizations that deal with data don't know what data they're dealing with. They don't know what data is valuable to the organization. The big challenge is how to deal with this data.

The other thing is making business sense of this data. That's a very important point. And right data is important. I know a lot of organizations think, "Well, we have big data, but then we want to just aggregate the data and generate reports." But are these reports valuable? Fifty percent of the time they're not, and they've just burned away 1,000 CPU cycles for this big data.

That's where there's a huge opportunity for organizations that are dealing with such big data. First of all, you need to understand what this big data means, and ask whether you're going to be utilizing it. Throwing something into the big data framework is useless and pointless, unless you know the data.

Gardner: Todd, reacting to what Noel just said about this very impressive problem, it seems that the old approaches, the old architectures, the connectors and the middleware, aren't going to be up to the task. Why do we have to think differently then about a solution set when we face this deluge, and also getting to the right data rather than just all the data regardless of its value?

Brinegar: Noel is 100 percent correct, and it is all about the right data, not just a lot of data. It’s interesting. We have clients that have a multiplicity of databases. Some they don’t even know about or no longer use, but there is relevant data in there.

Dana, when you were talking about the ability to attach to mainframes and all legacy systems, as well as incorporate them into today's environments -- that's really a big challenge for a lot of integration solutions and a lot of companies.

So the ability to come in, attach, and get the right data, and make that data actionable and make it matter to a company, is really key and critical today. And being able to do that with the lowest cost of ownership in the market and the fastest time to value -- so that companies aren't creating a huge amount of tech on top of the tech they already have to get at this right data -- that's really the key, critical part.

Gardner: Noel, thinking about how to do this differently, I remember it didn’t seem that long ago when the solution to data integration was to create one big, honking database and try to put everything in there. Then that's what you'd use to crunch it and do your queries. That clearly was not going to work then, and it’s certainly not going to work now.

So what’s this notion about orchestrating, metadata, and data virtualization? Why are some of these architectural approaches being sought out, especially when we start thinking about the real-time issues?

Holistic data set

Yuhanna: You have to look at the holistic data set. Today, most organizations or business users want to look at complete data sets when making business decisions. Typically, what they're seeing is that data has always been in silos, in different repositories and different segregations. They did try to bring this all together, in a warehouse for example, to deliver this value.

But then the volumes of data, the real-time data needs are definitely a big challenge. Warehouses weren't meant to be real-time. They were able to handle data, but not in real time.

So data virtualization delivers an even better, superior framework to deliver real-time data and the right data to consumers, to processes, and to applications, whether it's structured, semi-structured, or unstructured data, all coming together from different sources -- not only on-premise but also off-premise, such as partner data and marketplace data coming together and providing that framework to different elements.

We talked about this many years ago and called it the information fabric, which is basically data virtualization that delivers this whole aggregation of data in that layer, so that it can be consumed by different applications as a service, all delivered in a real-time manner.

Now, an important point here is that it's not just read-only. You can also write back through this virtualized layer, so that changes get back to the underlying data.

Definitely, things have changed with this new framework, and there are solutions out there that offer it, not just accessing and integrating data, but also providing metadata, security, integration, and transformation.

Gardner: How about that Todd Brinegar? When we think about a fabric, when we think about trying to access data, regardless, and get it closer to real time, what are the architectural approaches that you think are working better? What are you putting in place yourselves to try to solve this issue?

Brinegar: It's a great lead in from Noel, because this is exactly the fabric and the framework that Enterprise Enabler, Stone Bond’s integration technology, is built on.

What we've done is look at it from a different approach than traditional integration. Instead of taking old technologies and modifying those technologies linearly to effect an integration and bring that data into a staging database and then do a transformation and then massage it, we've looked at it three-dimensionally.

We attach with our AppComms, which are our connectors, to the metadata layer of an application. We don't put an agent within the application. We get at the metadata, the data about the data. We separate that data from multiple sources, unlimited sources, and orchestrate it into a view that a client has. It could be Salesforce.com, SharePoint, a portal, Excel spreadsheets, or anything that they're used to consuming that data in.
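
The sketch below illustrates the general pattern Brinegar is describing, metadata-driven federation: per-source field mappings define a shared logical schema, and a combined view is assembled on the fly rather than staged in a database first. The source names, mappings, and functions are invented for illustration and are not Stone Bond's actual AppComm interfaces.

```python
# Conceptual sketch (not Stone Bond's actual API) of metadata-driven federation.
# Per-source metadata maps native field names onto a shared logical schema, and a
# view is assembled in memory instead of staging the data in a database first.

# Metadata: how each source's fields map to the logical view's fields.
source_metadata = {
    "crm": {"cust_id": "customer_id", "cust_nm": "name"},
    "erp": {"CUSTNO": "customer_id", "CREDIT_LIMIT": "credit_limit"},
}

# Stand-ins for live connections to the underlying systems.
crm_rows = [{"cust_id": 1, "cust_nm": "Acme"}, {"cust_id": 2, "cust_nm": "Globex"}]
erp_rows = [{"CUSTNO": 1, "CREDIT_LIMIT": 50000}, {"CUSTNO": 2, "CREDIT_LIMIT": 75000}]

def normalize(rows, mapping):
    """Rename native fields to the logical schema defined in metadata."""
    return [{mapping[k]: v for k, v in row.items() if k in mapping} for row in rows]

def federated_customer_view():
    """Merge normalized rows from both sources on customer_id, in memory."""
    merged = {}
    for row in (normalize(crm_rows, source_metadata["crm"]) +
                normalize(erp_rows, source_metadata["erp"])):
        merged.setdefault(row["customer_id"], {}).update(row)
    return list(merged.values())

print(federated_customer_view())
# [{'customer_id': 1, 'name': 'Acme', 'credit_limit': 50000}, ...]
```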

Actionable data

Gardner: Just to be clear, Todd, your architecture and solution approach is not only for access for analysis, for business intelligence (BI), for dashboards and insights -- but this is also for real-time running application sets. This is actionable data?

Brinegar: Absolutely. With Enterprise Enabler, we're not only a data-integration tool, we're an applications-integration tool. So we are EAI/ETL. We cover that full spectrum of integration. And as you said, it is the real-time solution, the ability to access and act on that information in real time.

Gardner: We described why this is a problem and why it's getting worse. We've looked at one approach to ameliorating these issues. But I'm interested in what you get if you do this right.

Let's go back to Noel. For some of the companies that you work with at Forrester, that you are familiar with, the enterprises that are looking to really differentiate themselves, when they get a better grasp of their data, when they can make it actionable, when they can pull it together from a variety of sources, old and new, on-premises and off-premises, how impactful is this? What sort of benefits are they able to gain?

Yuhanna: The good thing about data virtualization is that it's not just a single benefit. There are many, many benefits of data virtualization, and there are customers who are doing real-time BI with data virtualization. As I mentioned, there are drawbacks and limitations in some of the older approaches, technologies, and architectures we've used for decades.

We want real-time BI, in the sense that you can’t just wait a day for this report to show up. You need this every hour or every minute. So these are important decisions you've got to make for that.

Real-time BI is definitely one of the big drivers for data virtualization, but also having a single version of the truth. As you know, more than 30 percent of data is duplicated in an organization. That’s a very conservative number. Many people don’t know how much data is duplicated.

And you have duplication of different data -- customer data, product data, or internal data. There are many different types of data that are duplicated. Then the data has a quality issue, because you may change customer data in one application, which touches one database, but the other database is not synchronized. What you get is inconsistent data, and customers and other business users don't really value the data anymore.

A single version of the truth is a very important deliverable from these solutions, something that has never been possible before unless you have one single database, and most organizations have multiple databases.

There's also the whole dashboard use case. You want to get data from different sources and present business value to the consumers, the business users, what have you. And in other cases, like enterprise search, you're able to search data very quickly.

Simpler compliance

Imagine an auditor walks into an organization and wants to look at data for a particular event, activity, or customer, searching across a thousand resources. It could be a nightmare. Compliance initiatives become a lot simpler through data virtualization.

Then there are things like content-management applications, which need to be delivered in federation and integrate data from many sources to present more valuable information. Also, smartphones and mobile devices want data from different systems, all tied together for their consumers, the business users, effectively.

So data virtualization has quite a strong value proposition, and typically organizations get the return on investment (ROI) within six months or less.

Gardner: Todd, at Stone Bond, when you look to some of your customers, what are some of the salient paybacks that they're looking for? Is there some low-hanging fruit, for example? It sounds from what Noel said that there are going to be payoffs in areas you might not even have anticipated, but what are the drivers? What are the ones that are making people face the facts when it comes to data virtualization and get going with it?

Brinegar: With Stone Bond and our technology, Enterprise Enabler, the ability to virtualize, federate, and orchestrate, all in real time, is a huge value. The biggest thing is time to value, though. How quickly can they get the software configured and operational within their enterprise? That is really the key that is driving a lot of our clients' actions.

When we do an installation, a client can be up and operational doing their first integration transformations within the first day. That’s a huge time-to-value benefit for that client. Then, they can be fully operational with complex integration in under three weeks. That's really astounding in the marketplace.

I have one client that, on one single project, calculated $1.5 million in personnel cost savings in the first year. That's not even taking into account a technology that they may be displacing by putting in Enterprise Enabler. Those are huge components.

Gardner: How about some examples Todd, use cases? I know sometimes you can name companies and sometimes you can't, but if you do have some names that you can share about what the data virtualization value proposition is doing for them, great.

Brinegar: HP is a great example. HP runs Enterprise Enabler in their supply chain for their Enterprise Server Group. That group provides data to all the suppliers within the Enterprise Server Group on an on-time basis.

They are able to build on demand and take care of their financials in the manufacturing of the servers much more efficiently than they ever have. They were experiencing, I believe, a 10-times return on investment within the first year. That’s a huge cost benefit for that organization. It's really kept them a great client of ours.

We do quite a bit of work in the oil business and the oil-field services business, and each one of our clients has experienced a faster ROI and a lower total cost of ownership (TCO).

We just announced recently that most of our clients experienced a 300 percent ROI in the first year that they implemented Enterprise Enabler. CenterPoint Energy is a large client of Stone Bond and they use us for their strategic transformation of how they're handling their data.

How to begin

Gardner: Let's go back to Noel. When it comes to getting started, because this is such a big problem, many times it feels like trying to boil the ocean, with all the different data types and the legacy involvement. Do you have a sense of where companies that are successful at doing this have begun?

Is there a pattern or a methodology that helps them get moving toward some of the returns Todd is talking about, where data virtualization gets these assets into the hands of people who can work with them? Any thoughts about where you get started, where you begin your journey?

Yuhanna: One approach is taking an issue, like an application-specific strategy, and building blocks on that; the other is going out and looking at an enterprise-wide strategy. For the enterprise-wide strategy, I know that some of the large organizations in financial services, retail, and sales are starting to embark on looking at all of this data in a more holistic manner:

"I've got customer data that is all over the place. I need to make it more consistent. I need to make it more real-time." Those are the things that I'm dealing with, and I think those are going to be seen more in the coming years.

Obviously, you can't boil the ocean, but I think you want to start with the data that is most valuable, and this comes back to the point that you talked about as the right data. Start with the right data and look at those data points that are being shared and consumed by many business users, because that's going to be valuable for the business itself.

The important thing is also that you're building on the solution block by block. You can definitely leverage some existing technologies if you want to, but I would recommend looking at newer technologies, because they are definitely faster. They do a lot of caching and a lot of faster integration.

As Todd was mentioning, quicker ROI is important. You don't have to wait a year to integrate data. So those are critical for organizations going forward. But you also have to look at security, availability, and performance. All of these are critical when you're making decisions about what your architecture is going to look like.

Gardner: Noel, you do a lot of research at Forrester. Are there any reports, white papers, or studies that you could point to that would help people as they are starting to sort through this to decide where to start, where the right data might be?

Yuhanna: We've actually done extensive research on this topic over the last four or five years. If you look at Information Fabric, this is a reference architecture we've told customers to use when they're building data virtualization themselves. You can build data virtualization yourself, but obviously it will take a couple of years. It's a bit complex to build, and I think that's why packaged solutions are better at that.

But the Information Fabric reports are there. Also, information as a service is something that we've written about -- best practices, use cases, and also vendor solutions around this topic. So information as a service is something that customers could look at to gain an understanding.

Case studies

We have use cases or case studies that talk about the different types of deployments, whether it's real-time BI implementations, a single version of the truth, fraud detection, or other types of environments. So we definitely have case studies as well.

There are case studies, reference architectures, and even product surveys, which talk about all of these technologies and solutions.

Gardner: Todd, how about at Stone Bond? Do you have some white papers or research reports that you can point to in order to help people sort through this and perhaps get a better sense of where your technologies are relevant and what your value is?

Brinegar: We do. On our website, stonebond.com, we have our CTO Pamela Szabó's blog, which has a great perspective on data, big data, and the changing face of data usage and virtualization.

I wish everybody would explore the different opportunities and the different technologies there are for integration and really determine not just what you need today -- that's important -- but what you will need tomorrow. What's the tech that you're going to carry forward, and how much is the TCO going to be as you move forward? Really make that value decision beyond that one specific project, because you're going to live with the solution for a long time.

Gardner: Very good. We've been listening to a sponsored podcast discussion on the need to make sense of the deluge and the complexity of data and information swirling in and around modern enterprises. We've also looked at how better data access can lead to improved integration of all information into approachable resources for actionable business activities and intelligence.

I want to thank our guests, Noel Yuhanna, Principal Analyst at Forrester Research. Thanks so much, Noel.

Yuhanna: Thanks a lot.

Gardner: And also Todd Brinegar, the Senior Vice President of Sales and Marketing at Stone Bond Technologies. Thanks to you too, Todd.

Brinegar: Much appreciated. Thank you very much, Dana. Thank you very much, Noel.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Stone Bond Technologies.

Transcript of a BriefingsDirect podcast on how businesses can better manage and exploit their exploding data via new technologies that provide meta-data-based data integration and management. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Wednesday, November 30, 2011

Big Data Meets Complex Event Processing: AccelOps Delivers a Better Architecture to Attack the Data Center Monitoring and Analytics Problem

Transcript of a BriefingsDirect podcast on how enterprises can benefit from capturing and analyzing systems data to improve IT management in real-time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: AccelOps.

Connect with AccelOps: Linkedin, Twitter, Facebook, RSS.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how new data and analysis approaches are significantly improving IT operations monitoring, as well as providing stronger security. We'll examine how advances in big data analytics and complex events processing (CEP) can come together to provide deep and real-time, pattern-based insight into large-scale IT operations.

AccelOps has developed the technology to correlate events with relevant data across IT systems, so that operators can gain much better insights faster, and then learn as they go to better predict future problems before they emerge. [Disclosure: AccelOps is a sponsor of BriefingsDirect podcasts.]

With us now to explain how these new solutions can drive better IT monitoring and remediation response -- and keep those critical systems performing at their best -- is our guest, Mahesh Kumar, Vice President of Marketing at AccelOps. Welcome to BriefingsDirect, Mahesh.

Mahesh Kumar: Dana, glad to be here.

Gardner: It's always been difficult to gain and maintain comprehensive and accurate analysis of large-scale IT operations, but it seems, Mahesh, that this is getting more difficult. I think there have been some shifts in computing in general in these environments that makes getting a comprehensive view of what’s going on perhaps more difficult than ever. Is that fair in your estimation?

Kumar: Absolutely, Dana. There are several trends that are fundamentally questioning existing and traditional ways of monitoring a data center.

Gardner: Of course we're seeing lots of virtualization. People are getting into higher levels of density, and so forth. How does that impact the issue about monitoring and knowing what’s going on with your systems? How is virtualization a complexity factor?

Kumar: If you look at trends, there are on average about 10 virtual machines (VMs) to a physical server. Predictions are that this is going to increase to about 50 to 1, maybe higher, with advances in hardware and virtualization technologies. So that’s one trend, the increase in density of VMs is a complicating factor for capacity planning, capacity management, performance management, and security.

Corresponding to this is just the sheer number of VMs being added in the enterprise. Analysts estimate that just in the last few years, we have added as many VMs as there were physical machines. In a very short period of time, you have in effect seen a doubling of the size of the IT management problem. So there are a huge number of VMs to manage and that introduces complexity and a lot of data that is created.

Moreover, your workloads are constantly changing. vMotion and DRS are causing changes to happen in hours, minutes, or even seconds, whereas in the past, it would take a week or two for a new server to be introduced, or a server to be moved from one segment of the network to the other.

So change is happening much more quickly and rapidly than ever before. At the very least, you need monitoring and management that can keep pace with today’s rate of change.

Cloud computing

Cloud computing is another big trend. All analyst research and customer feedback suggests that we're moving to a hybrid model, where you have some workloads on a public cloud, some in a private cloud, and some running in a traditional data center. For this, monitoring has to work in a distributed environment, across multiple controlling parties.

Last but certainly not the least, in a hybrid environment, there is absolutely no clear perimeter that you need to defend from a security perspective. Security has to be pervasive.

Given these new realities, it's no longer possible to separate performance monitoring aspects from security monitoring aspects, because of the distributed nature of the problem. You can’t have two different sets of eyes looking at multiple points of presence, from different angles and then try to piece that together.

Those are some of the trends that are causing a fundamental rethink in how IT monitoring and management systems have to be architected.

Gardner: And even as we're seeing complexity ramp up in these data centers, many organizations are bringing these data centers together and consolidating them. At the same time, we're seeing more spread of IT into remote locations and offices. And we're seeing more use of mobile and distributed activities for data and applications. So we're not only talking about complexity, but we're talking about scale here.

Kumar: And very geographically distributed scale. To give you an example, every office with voice over IP (VoIP) phones needs some servers and network equipment in their office, and those servers and network equipment have to be secured and their up-time guaranteed.

So what was typically thought of as a remote office now has a mini data center, or at least some elements of a data center, in it. You need your monitoring and management systems to have the reach to easily and flexibly bring those under management and ensure their availability and security.

Gardner: What are some of the ways that you can think about this differently? I know it’s sort of at a vision level, but typically in the past, people thought about a system and then the management of that system. Now, we have to think about clouds and fabrics. We're just using a different vocabulary to describe IT. I suppose we need to have a different vocabulary to describe how we manage and monitor it as well.

Kumar: The basic problem you need to address is one of analysis. Why is that? As we discussed earlier, the scale of systems is really high. The pace of change is very high. The sheer number of configurations that need to be managed is very large. So there's data explosion here.

Since you have a plethora of information coming at you, the challenge is no longer collection of that information. It's how you analyze that information in a holistic manner and provide consumable and actionable data to your business, so that you're able to actually then prevent problems in the future or respond to any issues in real-time or in near real-time.

You need to nail the real-time analytics problem and this has to be the centerpiece of any monitoring or management platform going forward.

Fire hose of data

Gardner: In the past, this fire hose of data was often brought into a repository, perhaps indexed and analyzed, and then over time reports and analysis would be derived from it. That’s the way that all data was managed.

But we really can't take the time to do that, especially when we have to think about real-time management. Is there a fundamental change in how we approach the data that’s coming from IT systems in order to get a better monitoring and analysis capability?

Kumar: The data has to be analyzed in real-time. By real-time I mean in streaming mode before the data hits the disk. You need to be able to analyze it and make decisions. That's actually a very efficient way of analyzing information. Because you avoid a lot of data sync issues and duplicate data, you can react immediately in real time to remediate systems or provide very early warnings in terms of what is going wrong.

The challenges in doing this streaming-mode analysis are scale and speed. The traditional approaches with pure relational databases alone are not equipped to analyze data in this manner. You need new thinking and new approaches to tackle this analysis problem.
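
As a rough sketch of what streaming-mode analysis looks like, the following evaluates each event as it arrives, over a sliding time window, before anything is persisted. The event types, threshold, and window size are invented for illustration and are not AccelOps' actual rules.

```python
# Minimal sketch of streaming-mode analysis: events are evaluated as they arrive,
# over a sliding time window, before anything is written to disk.
# The event shape and threshold are illustrative only.
from collections import deque

WINDOW_SECONDS = 60
THRESHOLD = 5                      # e.g., failed logins per host per window

window = deque()                   # (timestamp, host) tuples inside the window

def on_event(ts, host, event_type):
    """Called for each incoming event; decides in-stream whether to alert."""
    if event_type != "login_failure":
        return None
    window.append((ts, host))
    # Slide the window: drop events older than WINDOW_SECONDS.
    while window and ts - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    count = sum(1 for _, h in window if h == host)
    if count >= THRESHOLD:
        return f"ALERT: {count} failed logins on {host} in {WINDOW_SECONDS}s"
    return None

# Simulated event stream
for t in range(0, 50, 10):
    alert = on_event(t, "db01", "login_failure")
    if alert:
        print(alert)
```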

Gardner: Also for issues of security, you don't want to find out about security weaknesses by going back and analyzing a bunch of data in a repository. You want to be able to look and find correlations about what's going on, where attacks might be originating, and how that might be affecting different aspects of your infrastructure.

People are trying different types of attacks. So this needs to be in real-time as well. It strikes me that if you want to solve security as well as monitoring, that that is also something that has to be in real-time and not something that you go back to every week or month.

Kumar: You might be familiar with advanced persistent threats (APTs). These are attacks where the attacker tries their best to be invisible. These are not the brute-force attacks that we have witnessed in the past. Attackers may hijack an account or gain access to a server, and then over time, stealthily, be able to collect or capture the information that they are after.

These kinds of threats cannot be effectively handled only by looking at data historically, because these are activities that are happening in real-time, and there are very, very weak signals that need to be interpreted, and there is a time element of what else is happening at that time. What seems like disparate sets of activity have to be brought together to be able to provide a level of defense or a defense mechanism against these APTs. This too calls for streaming-mode analysis.

If you notice, for example, someone accessing a server, a database administrator accessing a server for which they have an admin account, it gives you a certain amount of feedback around that activity. But if on the other hand, you learn that a user is accessing a database server for which they don’t have the right level of privileges, it may be a red flag.

You need to be able to connect this red flag that you identify in one instance with the same user trying to do other activity in different kinds of systems. And you need to do that over long periods of time in order to defend yourself against APTs.
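
Here is a minimal sketch of that kind of stateful, cross-system correlation: a single weak signal is only flagged, but the same account tripping weak signals on different systems within a retention window escalates to an incident. The account names, signals, and thresholds are hypothetical, not AccelOps' detection logic.

```python
# Illustrative sketch (not AccelOps' engine) of stateful cross-system correlation.
# One weak signal only raises a flag; the same account generating weak signals on
# multiple systems within a retention window escalates to an incident.
import time
from collections import defaultdict

RETENTION = 7 * 24 * 3600          # keep weak signals for a week
flags = defaultdict(list)          # account -> [(timestamp, system, signal)]

def weak_signal(ts, account, system, signal):
    """Record a weak signal and check whether it correlates with earlier ones."""
    flags[account] = [f for f in flags[account] if ts - f[0] <= RETENTION]
    flags[account].append((ts, system, signal))
    systems = {s for _, s, _ in flags[account]}
    if len(systems) >= 2:          # same account, multiple systems: escalate
        return f"INCIDENT: {account} -> {sorted(systems)}: possible APT activity"
    return f"flagged: {account} on {system} ({signal})"

now = time.time()
print(weak_signal(now, "jdoe", "db-server-7", "access without privilege"))
print(weak_signal(now + 3600, "jdoe", "file-server-2", "bulk read outside hours"))
```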

Advances in IT

Gardner: So we have the modern data center, we have issues of complexity and virtualization, we have scale, we have data as a deluge, and we need to do something fast in real-time and consistently to learn and relearn and derive correlations.

It turns out that there are some advances in IT over the past several years that have been applied to solve other problems that can be brought to bear here.

This is one of the things that really jumped out at me when I did my initial briefing with AccelOps. You've looked at what's being done with big data and in-memory architectures, and you've also looked at some of the great work that’s been done in services-oriented architecture (SOA) and CEP, and you've put these together in an interesting way.

Let's talk about what the architecture needs to be in order to start doing for IT what we have been doing with retail data, or looking at complex events in a financial environment to derive inferences about what's going on in the real world. What is the right architecture that we now need to move to for this higher level of operations and monitoring?

Kumar: Excellent point, Dana. Clearly, based on what we've discussed, there is a big-data angle to this. And, I want to clarify here that big data is not just about volume.

Doug Laney, a META and a Gartner analyst, probably put it best when he highlighted that big data is about volume, the velocity or the speed with which the data comes in and out, and the variety or the number of different data types and sources that are being indexed and managed. I would add to this a fourth V, which is verdicts, or decisions, that are made. How many decisions are actually impacted or potentially impacted by a slight change in data?

For example, in an IT management paradigm, a single configuration setting can have a security implication, a performance implication, an availability implication, and even a capacity implication in some cases. Just a small change in data has multiple decision points that are affected by it. From our angle, all these different types of criteria affect the big data problem.

When you look at all these different aspects of IT management and how it impacts what essentially presents itself as a big data challenge or a big data problem, that’s an important angle that all IT management and monitoring products need to incorporate in their thinking and in their architectures, because the problem is only going to get worse.

Gardner: Understanding that big data is the issue, and we know what's been done with managing big data in this most comprehensive definition, how can we apply that realistically and practically to IT systems?

It seems to me that you are going to have to do more with the data, cleansing it, discovering it, and making it manageable. Tell me how we can apply the concepts of big data that people have been using in retail and these other applications, and now point that at the IT operations issues and make it applicable and productive.

Couple of approaches

Kumar: I mentioned the analytics ability as central to monitoring systems – big-data analytics to be specific. There are a couple of approaches. Some companies are doing some really interesting work around big-data analysis for IT operations.

They primarily focus on gathering the data, heavily indexing it, and making it available for search, and thereby deriving analytical results. That allows you to do forensic analysis that you were not easily able to do with traditional monitoring systems.

The challenge with that approach is that it swings the pendulum all the way to the other end. Previously we had very rigid, well-defined relational data models or data structures; the index-and-search approach is much more free-form. So a pure index-and-search approach sits at the other end of the spectrum.

What you really need is something that incorporates the best of both worlds, and I can explain how that can be accomplished with a more modern architecture. To start with, we can't do away with the whole concept of a model, a relationship diagram, or an entity-relationship map. It's really critical for us to maintain that.

I'll give you an example. When you say that a server is part of a network segment and is connected to a switch in a particular way, that conveys certain meaning. Because of that meaning, you can automatically apply policies, rules, and patterns, exploiting the meaning you capture purely from that relationship. You can automate a lot of things just by knowing that.
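
As a rough sketch of why a relationship carries meaning, here is a toy Python model in which simply knowing that a server sits behind a particular switch segment is enough to attach monitoring policies automatically. The object names and policy names are invented for illustration.

```python
# Toy topology model: capturing relationships lets policies attach automatically.
class Switch:
    def __init__(self, name, segment):
        self.name = name
        self.segment = segment          # e.g. "dmz" or "internal"

class Server:
    def __init__(self, name, switch):
        self.name = name
        self.switch = switch            # the relationship carries the meaning

def policies_for(server):
    """Derive monitoring rules purely from the server's place in the topology."""
    rules = ["monitor_cpu", "monitor_disk"]
    if server.switch.segment == "dmz":
        # Anything reachable from the internet gets stricter security rules.
        rules += ["alert_on_new_listening_port", "alert_on_failed_logins_over_5"]
    return rules

edge = Switch("sw-edge-01", segment="dmz")
web = Server("web-01", switch=edge)
print(web.name, "->", policies_for(web))
```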

If you stick to a pure index-and-search approach, you basically zero out a lot of this meaning, and you lose information in the process. Then it's the operators who have to handcraft queries to reestablish meaning that is already out there. That can get very expensive very quickly.

Even at a fairly small scale, you'll find more and more people having to do that work; a pure index-and-search approach scales with people, not with technology and automation. Index and search certainly adds a positive dimension to traditional IT monitoring tools -- but on its own it is not the answer for the future.

Our approach to this big-data analytics problem is a hybrid one. You start with a flexible and extensible model as a foundation, which then allows you to apply meaning to all of the extended data you capture -- data that can be kept in flat files, indexed, and searched. You need that hybrid approach to get a handle on this problem.

Gardner: I suppose you also have to have an architecture of your own that can scale. So you're going to concepts like virtual appliances, scaling on demand via clustering, and taking advantage of in-memory and streaming capabilities to manage this. Tell me why you need to think about the architecture that supports this big-data capability for it to actually work in practical terms.

Kumar: You start with a fully virtualized architecture, because it allows you not only to scale easily but also to extend your reach: with a virtualized architecture, you're able to reach into multiple disparate environments, capture that information, analyze it, and bring it in. So a virtualized architecture is absolutely essential to start with.

Auto correlate

Maybe more important is the ability to auto-correlate and analyze data, and that analysis has to be distributed, because whenever you have a big-data problem, especially in something like IT management, you're never really sure of the scale of data you'll need to analyze, and you can never plan for it.

Let me put it another way. There is no server big enough to analyze all of that. You'll always fall short of compute capacity, because analysis requirements keep growing. Fundamentally, the architecture has to be one where the analysis is done in a distributed manner. It's easy to add compute capacity by scaling horizontally, and your architecture fits how computing models are evolving over the long run. So there are a lot of synergies to be exploited here by having a distributed analytics framework.

Think of it as applying a MapReduce type of algorithm to IT management problems, so that you can do distributed analysis, and the analysis is highly granular or specific. In IT management, the specificity with which you analyze and detect a problem makes all the difference in whether the product or solution is useful for a customer.
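
A minimal, single-process illustration of the MapReduce idea applied to monitoring data: map each raw sample to a (device, metric) key, then reduce per key. In a real deployment the map and reduce work would be spread across VMs; the in-process version below, with invented sample data, only shows the shape of the computation.

```python
from collections import defaultdict

# Raw samples as they might arrive from collectors; fields are illustrative.
samples = [
    ("web-01", "cpu", 72), ("web-01", "cpu", 91),
    ("db-01", "cpu", 55), ("db-01", "latency_ms", 240),
]

def map_phase(sample):
    device, metric, value = sample
    return ((device, metric), value)          # key -> value, ready to shard

def reduce_phase(key, values):
    return {"key": key, "max": max(values), "avg": sum(values) / len(values)}

# Shuffle: group mapped pairs by key (this is what a cluster would distribute).
grouped = defaultdict(list)
for key, value in map(map_phase, samples):
    grouped[key].append(value)

results = [reduce_phase(k, v) for k, v in grouped.items()]
for r in results:
    print(r)
```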

Gardner: In order for us to meet our requirements around scale and speed, we really have to think about the support underneath these capabilities in a new way. It seems, in a sense, that architecture is destiny when it comes to supporting and monitoring these large volumes at this velocity of data.

Let's look at the other part of this. We've talked about big data, but for the solution to work, we're also looking at CEP capabilities -- an engine that can take that data, work with it, analyze it for programmable events, and look for certain patterns.

Now that we understand the architecture and why it's important, tell me why this engine brings you to a higher level and differentiates you in the monitoring field.

Kumar: A major advantage of distributed analytics is that you're freed from the scale-versus-richness trade-off, from the limits on the types of events you can process. If I want to process more complex events, it's a lot easier to add compute capacity by simply adding VMs and scaling horizontally. That's a big part of automating deep forensic analysis of the data you're receiving.

I want to add a little bit more about the richness of CEP. It's not just about capturing data and massaging it or looking at it from different angles. When we say CEP, we mean it is advanced to the point where it starts to capture how people actually rationalize and analyze a problem.

For example, the ability, in a simple visual snapshot, to connect three different data points or events and say that they're interrelated and point to a specific problem.

The only way you can automate your monitoring systems end to end, and get more of the human element out of them, is for your CEP system to capture the nuances that people in the NOC and SOC would normally apply when they look at events. You don't just look at a stream of events; you ask further questions and then determine the remedy.
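
To show what "connecting three data points" might look like in rule form, here is a hedged sketch of a correlation rule: three otherwise routine events, seen in order within a short window, escalate to a single incident. The event types, hosts, and window are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Three individually unremarkable events that together suggest one problem.
stream = [
    {"time": datetime(2011, 11, 5, 10, 0), "type": "config_change", "host": "db-01"},
    {"time": datetime(2011, 11, 5, 10, 3), "type": "cpu_spike", "host": "db-01"},
    {"time": datetime(2011, 11, 5, 10, 6), "type": "app_timeout", "host": "app-01"},
]

PATTERN = ["config_change", "cpu_spike", "app_timeout"]   # ordered event pattern
WINDOW = timedelta(minutes=10)

def detect(stream, pattern=PATTERN, window=WINDOW):
    """Fire one incident when all pattern events occur, in order, within the window."""
    matched, start = [], None
    for event in sorted(stream, key=lambda e: e["time"]):
        if event["type"] == pattern[len(matched)]:
            start = start or event["time"]
            if event["time"] - start <= window:
                matched.append(event)
                if len(matched) == len(pattern):
                    return {"incident": "change-induced degradation", "events": matched}
    return None

print(detect(stream))
```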

No hard limits

To do this, you need a rich data set to analyze; that is, there shouldn't be any hard limits on what data can participate in the analysis, and you should have the flexibility to easily add new data sources or data types. So it's very important for the architecture to be able not only to event on data stored in traditional, well-defined relational models, but also to event against data that's typically serialized and indexed in flat-file databases.

This hybrid approach breaks the logjam in creating systems that are smart enough to substitute for people in how they think about and react to events manifested in the NOC. You're not bound to data in an inflexible, vendor-defined model; you can also bring more free-form data into the analytics domain and do deep, specific analysis with it.
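
As a small illustration of that hybrid idea, the sketch below evaluates one rule against both a structured inventory record and a free-form search over indexed log lines, which a pure relational model or a pure index-and-search tool would each cover only half of. The data, field names, and matching logic are simplified assumptions.

```python
# Structured side: a small inventory model with known attributes and relationships.
inventory = {"db-01": {"role": "database", "segment": "internal", "owner": "dba-team"}}

# Unstructured side: raw log lines that are only indexed, not modeled.
log_index = [
    "2011-11-05T10:02:11 db-01 sshd: Failed password for admin",
    "2011-11-05T10:02:14 db-01 sshd: Failed password for admin",
    "2011-11-05T10:02:20 db-01 sshd: Accepted password for admin",
]

def hybrid_rule(host):
    """Combine modeled attributes with a free-text search over indexed logs."""
    record = inventory.get(host, {})
    failed = sum(1 for line in log_index if host in line and "Failed password" in line)
    accepted = any(host in line and "Accepted password" in line for line in log_index)
    if record.get("role") == "database" and failed >= 2 and accepted:
        return f"ALERT: brute-force attempt then successful login on {record['role']} host {host}"
    return "ok"

print(hybrid_rule("db-01"))
```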

Cloud and virtualization are also making this possible. Although they've introduced more complexity through change frequency, distributed workloads, and so on, they've also introduced some structure into IT environments. An example here is the use of converged infrastructure (Cisco UCS, HP Blade Matrix) to build private-cloud environments. At least at the infrastructure level, that introduces some order and predictability.

Gardner: All right, Mahesh, we've talked about the problem in the market, we've taken a high-level look at the solution, why you need to do things differently, and why having the right architecture to support that is important, but let's get into the results.

If you do this properly -- if you leverage these newer methods in IT, like big-data analytics, CEP, virtual appliances, and clustered workloads, and apply them to the fire hose of data coming out of IT systems for a comprehensive look at IT -- what do you get? What's the payoff?

Kumar: I want to answer this question from a customer standpoint. It's no surprise that our customers don't come to us saying, "We have a big-data problem, help us solve it," or "We have a complex-event problem."

Their needs are really around managing security, performance and configurations. These are three interconnected metrics in a virtualized cloud environment. You can't separate one from the other. And customers say they are so interconnected that they want these managed on a common platform. So they're really coming at it from a business-level or outcome-focused perspective.

What AccelOps does under the covers is apply techniques such as big-data analysis and complex event processing to solve those problems for the customer. That is the key payoff -- the customer's key concerns that I just mentioned are addressed in a unified and scalable manner.

An important factor for customer productivity and adoption is the product user interface. It's not of much use if a product leverages these advanced techniques but makes the user interface complicated -- you end up with the same result as before. So we've designed a UI that's very easy to use, requiring only one or two clicks to get the information you need, with a UI-driven ability to compose rich events and event patterns. Our customers find this very valuable, as they don't need super-specialized skills to work with our product.

Gardner: What's important to think about when we mention your customers is that this value applies to more than the enterprise environment. Increasingly, the cloud, virtualization, and the importance of managing performance to very high standards are also impacting cloud providers, managed service providers (MSPs), and software-as-a-service (SaaS) providers.

Up and running

This sounds like an architecture, an approach, and a solution that's really going to benefit them, because their bread and butter is keeping all of those systems up and running and making sure that all of their service-level agreements (SLAs) and contracts are being managed and adhered to.

Just to be clear, we're talking about an approach for a fairly large cross-section of the modern computing world -- enterprises and many different stripes of what we consider service providers.

Kumar: Service providers are a very significant market segment for us and they are some of our largest customers. The reason they like the architecture that we have, very clearly, is that it's scalable. They know that the architecture scales as their business scales.

They also know that they get both the performance management and the security management aspects in a single platform. They're actually able to differentiate their customer offerings compared to other MSPs that may not have both, because security becomes really critical.

For anyone wanting to outsource to an MSP, one of the first questions they're going to ask, in addition to the SLAs, is how are you going to ensure security? So having both of those capabilities is absolutely critical.

The third piece is the fact that our architecture has been multi-tenant from day one. We're able to bring customers on board with a one-touch mechanism, where the provider can bring a customer online, automatically apply the right types of policies, whether SLA policies or security policies, and completely segment the data of one customer from another.

All of that capability was built into our products from day one. So we didn’t have to retrofit any of that. That’s something our cloud-service providers and managed service provider customers find very appealing in terms of adopting AccelOps products.

Subscription-based licensing, which we offer in addition to perpetual licensing, also fits well with the CSP/MSP business model.

Gardner: All right. Let's introduce your products in a little more detail. We understand you've created a platform, an architecture, for solving these issues in very demanding environments, for large customers, enterprises, and service providers. Tell us a little bit about your portfolio.

Key metrics

Kumar: What we've built is a platform that monitors data-center performance, security, and configurations, the three key interconnected metrics in virtualized cloud environments. Most of our customers really want that combined and integrated platform. Some of them might choose to start by addressing security, but they soon bring in the performance-management aspects as well, and vice versa.

And we take a holistic cross-domain perspective -- we span server, storage, network, virtualization and applications.

What we've really built is a common, consistent platform that addresses these problems of performance, security, and configuration in a holistic manner, and that's the main thing our customers buy from us today.

Gardner: It sounds as if we're doing business intelligence for IT. We really are getting to the point where we can have precise dashboards, and we are not just making inferences and guesses. We're not just doing Boolean searches on old or even faulty data.

We're really looking at the true data, the true picture in real-time, and therefore starting to do the analysis that I think can start driving productivity to even newer heights than we have been accustomed to. So is that the vision, business intelligence (BI) for IT?

Kumar: I guess you could say that. To break it down, from an IT management and monitoring standpoint, the goal is to continuously reduce per-capita management costs on an ongoing basis. As you add VMs or devices, you simply cannot let the management cost scale linearly. You want a continuously decreasing management cost for every new VM or device introduced.

The way you do that is obviously through automation and through a self-learning process, whereby as you continue to learn more about the behavior of your applications and infrastructure, you're able to codify more and more of those patterns and rules in the system, taking the human element out of it bit by bit.

What we have as a product and a platform is the ability for you to increase the return on investment (ROI) on the platform as you continue to use that platform day-to-day. You add more information and enrich the platform with more rules, more patterns, and complex events that you can detect and potentially take automated actions on in the future.

So we create a virtuous cycle, with the product delivering a higher and higher return on your investment over time, whereas in traditional products, scale and longevity have the opposite effect.

So that's really our vision: how do you reduce the per-capita management cost as the scale of the enterprise increases, and how do you increase automation as one of the ways of reducing the management cost within IT?

Gardner: You have given people a path to start in on this, sort of a crawl-walk-run approach. Tell me how that works. I believe you have a trial download, an opportunity for people to try this out for free.

Free trial download

Kumar: Most of our customers start off with the free trial download. It’s a very simple process. Visit www.accelops.com/download and download a virtual appliance trial that you can install in your data center within your firewall very quickly and easily.

Getting started with the AccelOps product is pretty simple. You fire up the product and enter the credentials needed to access the devices to be monitored. We do most of it agentlessly, and so you just enter the credentials, the range that you want to discover and monitor, and that’s it. You get started that way and you hit Go.

The product then uses this information to determine what's in the environment. It automatically establishes relationships between the devices it finds, applies the rules and policies that come out of the box, and sets some basic thresholds so you can start measuring results. Within a few hours of getting started, you'll have measurable results, trends, graphs, and charts to look at and benefit from.
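
To illustrate the general idea of credential-plus-range, agentless discovery, here is a hedged Python sketch. Everything in it, the range, the read-only account, and the ports probed, is a hypothetical illustration of the concept, not AccelOps' actual interface or configuration format.

```python
import ipaddress
import socket

# Hypothetical inputs: these names and ports illustrate agentless discovery in
# general; they are not the product's actual setup screen or defaults.
ip_range = ipaddress.ip_network("192.168.1.0/30")
credentials = {"username": "monitor_ro", "password": "***"}   # read-only account
MGMT_PORTS = {22: "ssh", 443: "https-api", 5985: "winrm"}     # common agentless protocols

def discover(network, ports=MGMT_PORTS, timeout=0.5):
    """Probe each address for reachable management interfaces (no agent installed)."""
    found = {}
    for ip in network.hosts():
        for port, proto in ports.items():
            try:
                with socket.create_connection((str(ip), port), timeout=timeout):
                    found.setdefault(str(ip), []).append(proto)
            except OSError:
                pass                      # unreachable or port closed; skip it
    return found

for host, protocols in discover(ip_range).items():
    # The next step would be authenticating with the supplied credentials and
    # pulling configuration, performance, and log data from each reachable host.
    print(f"{host}: reachable via {', '.join(protocols)}")
```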

That’s a very simple process, and I encourage all our listeners and readers to download our free trial software and try AccelOps.

Gardner: I also have to imagine that your point a few moments ago, about not being able to continue on the same trajectory when it comes to management, only accelerates the need to automate and to find the intelligent, rather than the laborious, way to solve this as we move to things like cloud, increased worker mobility, and distributed computing.

So the trends are really in your favor. It seems that as we move toward cloud and mobile, at some point organizations will hit the wall and look for the automation alternative.

Kumar: It's about automation and distributed analytics, and about getting very specific with the information you have, so that you can make more predictable, 99.9 percent correct decisions in an automated manner. The only way you can do that is with a platform that's rich enough and scalable enough to let you reach that ultimate goal of automating most of the management of these diverse and disparate environments.

That’s something that's sorely lacking in products today. As you said, it's all brute-force today. What we have built is a very elegant, easy-to-use way of managing your IT problems, whether it’s from a security standpoint, performance management standpoint, or configuration standpoint, in a single integrated platform. That's extremely appealing for our customers, both enterprise and cloud-service providers.

I also want to take this opportunity to encourage those of you listening to or reading this podcast to come meet our team at the 2011 Gartner Data Center Conference, Dec. 5-9, at Booth 49 and learn more. AccelOps is a silver sponsor of the conference.

Gardner: I'm afraid we'll have to leave it there. You've been listening to a sponsored BriefingsDirect podcast. We've been talking about how new data and analysis approaches from AccelOps are delivering significantly improved IT operations monitoring as well as stronger security.

I'd like to thank our guest, Mahesh Kumar, Vice President of Marketing at AccelOps. Thanks so much, Mahesh.

Kumar: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: AccelOps.

Connect with AccelOps: Linkedin, Twitter, Facebook, RSS.

Transcript of a BriefingsDirect podcast on how enterprises can benefit from capturing and analyzing systems data to improve IT management in real-time. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.
