Wednesday, March 22, 2017

Logicalis Chief Technologist Defines the New Ideology of Hybrid IT

Transcript of a discussion on how Information Technology (IT) organizations are shifting to become strategists and service providers and thereby work toward adoption of a hybrid IT environment. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer Podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation. Stay with us now to learn how agile businesses are fending off disruption in favor of innovation.

Our next thought leader interview explores how digital disruption demands that businesses develop a new ideology of hybrid IT. We'll hear how such trends as the Internet of Things (IoT), distributed IT, data sovereignty requirements, and pervasive security concerns are combining to challenge how IT operates.

We'll learn how IT organizations are shifting to become strategists and internal service providers, and how that supports adoption of hybrid IT. We will also delve into how converged and hyper-converged infrastructures (HCI) provide an on-ramp to hybrid cloud strategies and adoption. 

To help us define a new ideology for hybrid IT, we're joined by Neil Thurston, Chief Technologist for the Hybrid IT Practice at Logicalis Group in the UK. Welcome, Neil.

Neil Thurston: Hi, Dana. Thank you very much for having us. 

Gardner: Why don’t we start at this notion of a new ideology? What’s wrong with the old ideology of IT?

Thurston: Good question. What we are facing now is what we've done for an awfully long time versus what the emerging large hyper-scale providers with cloud, for example, have been developing. 

The two clashing ideologies are these: Either we continue with the technologies, skills, and processes that we've developed in-house and push those out to the cloud, or we adopt the alternative -- think of Microsoft Azure and the forthcoming Azure Stack -- and pull those technologies from the cloud into our on-premises environments. In short: Do we push out, or do we pull in?

The technologies allow us to operate in a true hybrid environment. By that we mean not having isolated islands of innovation anymore. It's not just standing things up in hybrid hyper-scale environments, or clouds, where you have specific skills, resources, teams and tools to manage those things. Moving forward, we want to have consistency in operations, security, and automation. We want to have a single toolset or control plane that we can put across all of our workloads and data, regardless of where they happen to reside.
Gardner: One of the things I encounter, Neil, when I talk to chief information officers (CIOs), is their concern that as we move to a hybrid environment, they're going to be left with the responsibility -- but without the authority to control those different elements. Is there some truth to that?

Thurston: I can certainly see where that viewpoint comes from; a lot of our own customers share it. We're seeing many organizations that have dabbled and cherry-picked from service management practices such as ITIL. Now more pragmatic IT service management (ITSM) frameworks, such as IT4IT, are coming to the fore. These are really about pushing that responsibility level up the stack.

You're right in that people are becoming more of a supply-chain manager than the actual manager of the hardware, facilities, and everything else within IT. There definitely is a shift toward that, but there are also frameworks coming into play that allow you to deal with that as well. 

Gardner: The notion of shadow IT becoming distributed IT was once a very dangerous and worrisome thing. Now, it has to be embraced and perhaps is positive. Why should we view it as positive?

Out of the shadow

Thurston: The term shadow IT is controversial. Within our organization, we prefer to say that the shadow IT users are the digital users of the business. You have traditional IT users, but you also have digital users. I don’t really think it’s a shadow IT thing; it's that they're a totally different use-case for service consumption. 

But you're right. They definitely need to be serviced by the organization, and they deserve the same level of service -- the same governance, security, and everything else -- applied to them.

Gardner: It seems that the new ideology of hybrid IT is about getting the right mix and keeping that mix of elements under some sort of control. Maybe it's simply on the basis of management, or an automation framework of some sort, but you allow that to evolve and see what happens. We don't know what this is going to be like in five years. 

Thurston: There are two pieces of the puzzle. There's the workload, the actual applications and services, and then there's the data. There is more importance placed on the data. Data is the new commodity, the new cash, in our industry. Data is the thing you want to protect. 

The actual workload and service consumption piece is the commodity piece that could be worked out. What you have to do moving forward is protect your data, but you can take more of a brokering approach to the actual workloads. If you can reach that abstraction, then you're fit-for-purpose and moving forward into the hybrid IT world.

Gardner: It’s almost like we're controlling the meta-processes over that abstraction without necessarily having full control of what goes on at those lower abstractions, but that might not be a bad thing. 

Thurston: I have a very quick use-case. A customer of ours for the last five years has been using Amazon Web Services (AWS), and they were getting the feeling they were getting tied into the platform. Their developers over the years had been using more and more of the platform services and they weren’t able to make all that code portable and take it elsewhere. 

This year, they made the transformation and they've decided to develop against Cloud Foundry, an open Platform as a Service (PaaS). They have instances of Cloud Foundry across Pivotal on AWS, also across IBM Bluemix, and across other cloud providers. So, they're now coding once -- and deploying anywhere for the compute workload side. Then, they have a separate data fabric that regulates the data underneath. There are emerging new architectures that help you to deal with this.

Gardner: It's interesting that you just described an ecosystem approach. You're no longer seeing as many organizations that are supplier “XYZ” shops, where 80 or 90 percent of everything would be one brand name. You just described a highly heterogeneous environment. 

Thurston: People have used cloud services, and hyper-scale cloud services, for specific use-cases, typically the more temporary types of workloads. Even companies born in the cloud, such as Uber and Netflix, have reached inflection points where going on-premises was far cheaper and made regulatory compliance far easier. People are slowly realizing, from what others are doing -- and from their own good or bad experiences -- that hybrid IT really is the way forward.

Gardner: And the good news is that if you do bring it back from the cloud or re-factor what you're doing on-premises, there are some fantastic new infrastructure technologies. We're talking about converged infrastructure, hyper-converged infrastructure, and the software-defined data center (SDDC). At recent HPE Discover events, we've seen more memory-driven computing, and we're seeing some interesting new powerful speeds and feeds along those lines.

So, on the economics and the price-performance equation, the public cloud is good for certain things, but there's some great attraction to some of these new technologies on-premises. Is that the mix that you are trying to help your clients factor?
Thurston: Absolutely. We're pretty much in parallel with the way HPE approaches things -- getting the right mix. In certain industries there is always going to be regulated data, and regulated data is really hard to control in a public-cloud space, where you have no real idea where things are and can't easily locate them physically.

Being on-premises provides that far easier route to regulatory compliance, and today's technologies -- hyper-converged platforms, for example -- allow us to really condense the footprint. We don't need these massive data centers anymore.

We're working with customers where we've taken 10 or 12 racks' worth of legacy equipment and, with a new hyper-converged platform, put in less than two racks' worth. The operational footprint and facilities cost are much lower, which makes a far more compelling argument for those types of use-cases than using public cloud.

Gardner: Then you can mirror that small footprint data center into a geography, if you need it for compliance requirements, or you could mirror it for reasons of business continuity and backup and recovery. So, there are lots of very interesting choices. 

Neil, tell us a little bit about Logicalis. I want to make sure all of our listeners and readers understand who you are and how you fit into helping organizations make these very large strategic decisions.

Cloud-first is not cloud-only 

Thurston: Logicalis is essentially a digital business enabler. We take technologies across multiple areas and help our customers become digital-ready. We cover a whole breadth of technologies. 

I look after the hybrid IT practice, but we also have the more digital-focused parts of our business, such as collaboration and analytics. The hybrid IT side is where we work with our customers through the pains they have and the decisions they have to make -- very often board-level decisions that mandate a "cloud-first" strategy.

It's unfortunate when that gets interpreted as "cloud-only." There is a process to go through for cloud readiness, because some applications are not going to be fit for the cloud. Some cannot be virtualized -- most can -- and there are always regulations. Certainly, in Europe at present there is a lot of fear, uncertainty, and doubt (FUD) in the market, particularly around the European Union General Data Protection Regulation (EU GDPR) and data protection overall.

There are a lot of reasons to take a more factored, measured approach to where workloads and data are best placed moving forward, and to the models that you want to operate in.

Gardner: I think HPE agrees with you. Their strategy is to put more emphasis on things like high performance computing (HPC), the workloads of which won't likely be virtualized, that won't work well in a public cloud, one-size-fits-all environment. It's also factoring in the importance of the edge, even thinking about putting the equivalent of a data center on the edge for demands around information for IoT, and analytics and data requirements there as well as the compute requirements.

What's the relationship between HPE and Logicalis? How do you operate as an alliance or as a partnership?

Thurston: We have a very strong partnership. We have a 15- or 16-year relationship with HPE in the UK. As everyone else did, we started out selling servers and storage, but we've taken the journey with HPE and with our customers. The great thing about HPE is that they've always managed to innovate and keep up with the curve, and that's enabled us to work with our customers to decide what the right technologies are. Today, this allows us to work out the right mix of on-premises and off-premises equipment for our customers.

HPE is ahead of the curve in various technologies in our area, one of which is HPE Synergy. We're now talking with a lot of our customers about the next curve that's coming with infrastructure-as-code, and about the possible benefits and outcomes of enabling that technology.

The on-ramp is that we're using hyper-converged technologies to virtualize all the workloads and make them portable, so that we can abstract them and place them either within platform services or within cloud platforms, as our security policies dictate.
Gardner: Getting back to this ideology of hybrid IT, when you have disparate workloads and you're taking advantage of these benefits of platform choice, location, model and so forth, it seems that we're still confronted with that issue of having the responsibility without the authority. Is there an approach that HPE is taking with management, perhaps thinking about HPE OneView that is anticipating that need and maybe adding some value there?

Thurston: With the HPE toolsets, we're able to set things such as policies. Today, we're really at Platform 2.5, and the inflection that takes us to the third platform is policy automation. This is one thing HPE OneView allows us to do across the board.

It's policies on our storage resources, policies on our compute resources, and policies on non-technology items, such as quotas on public cloud. It enables us to leverage the software-defined infrastructure underneath to set the policies that define the operational windows we want our infrastructure to work in, and the decisions it's allowed to make itself within those windows -- and then we just let it go. We really want to take IT from "high touch" to "low touch," which we can do today with policy, and potentially, in the future with infrastructure-as-code, to "no touch."

Gardner: As you say, we are at Platform 2.5, heading rapidly towards Platform 3. Do you have some examples you can point to, customers of yours and HPE’s, and describe how a hybrid IT environment translates into enablement and business benefits and perhaps even economic benefits? 

Time is money

Thurston: The University of Wolverhampton is one of our customers, where we've taken this journey with them with HPE, with hyper-converged platforms, and created a hybrid environment for them. 

Today, the hybrid environment means that we're wholly virtualized on HPE hyper-converged platform. We've rolled the solutions out across their campus. Where we normally would have had disparate clouds, we now have a single plane controlled by OneView that enables them to balance all the workloads across the whole campus, all of their departments. It’s bringing them new capabilities, such as agility, so they can now react a lot quicker. 

Before, a lot of the departments were coming to them with requirements, but those requirements were taking 12 to 16 weeks to fulfill. Now, we can do these things from the technology perspective within hours, and the whole process within days. We're talking about a factor-of-10 reduction in the time to deliver services.

As they say, success breeds success. Once someone sees what the other department is able to do, that generates more questions, more requests, and it becomes a self-fulfilling prophecy. 

We're working with them to enable the next phase of this project: leveraging the hyper-scale of public clouds, but again, in a more controlled environment. Today, they're used to the platform; it's all embedded in. They're reaping the benefits mainly from an agility perspective, and from an operational perspective, they're reaping the benefits of vastly reduced system administration and, more importantly, storage administration.

Storage administrators have seen an 85 percent saving in the time required to administer the storage by having it wholly virtualized, which is fantastic from their perspective. It means they can concentrate more on developing the next phase, which is taking this ideology out to the public cloud.

Gardner: Let's look to the future before we wrap this up. What would you like to see, not necessarily from HPE, but what can the vendors, the suppliers, or the public-cloud providers do to help you make that hybrid IT equation work better? 

Thurston: A lot of our mainstream customers always think that they're late into adoption, but typically, they're late into adoption because they're waiting to see what becomes either a de-facto standard that is winning in the market, or they're looking for bodies to create standards. Interoperability between platforms and standards is really the key to driving better adoption.

Today with AWS, Azure, and the rest, there's no real compatibility that we can rely on; we can only abstract things further up. This is why I think platform-as-a-service, things like Cloud Foundry and open platforms, will become the future platforms of choice for forward thinkers who want to adopt hybrid IT.

Gardner: It sounds like what you are asking for is a multi-cloud set of options that actually works and is attainable. 

Thurston: It's like networking with Ethernet: we have a standard, everyone adheres to it, and it's a commodity. Everyone says public cloud is a commodity, and it is, but unfortunately we don't have the interoperability standards that we find in networking. That's what we need to drive better adoption moving forward.

Gardner: I'm afraid we will have to leave it there. We've been exploring how digital disruption demands that businesses develop a new ideology of hybrid IT. And we've heard how such trends as IoT and distributed IT and data sovereignty requirements, as well as pervasive security concerns, are combining to challenge how IT operates. 

We’ve had the opportunity to learn how IT organizations are shifting to become strategists and service providers and work toward adoption of a true hybrid IT environment. 

Please join me in thanking our guest, Neil Thurston, Chief Technologist for the Hybrid IT Practice at the Logicalis Group in the UK. Thank you so much, Neil.

Thurston: Thank you, very much. Thanks for having me. 

Gardner: And thanks as well to our audience for joining us for this BriefingsDirect Voice of the Customer digital transformation discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored interviews. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how IT organizations are shifting to become strategists and service providers and work towards adoption of a true hybrid IT environment. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.

Tuesday, March 07, 2017

Converged IoT Systems: Bringing the Data Center to the Edge of Everything

Transcript of a discussion on the rapidly evolving architectural shift of moving advanced information technology (IT) capabilities to the edge to support Internet of Things (IoT) requirements for operational integrity benefits.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.

Our next thought leadership panel discussion explores the rapidly evolving architectural shift of moving advanced IT capabilities to the edge to support IoT requirements.

The demands of data processing, real-time analytics, and platform efficiency at the intercept of IoT and business benefits have forced new technology approaches. We'll now hear how converged systems and high-performance data analysis platforms are bringing -- in essence -- the data center to the operational technology (OT) edge.

To hear more about the latest capabilities in gaining unprecedented measurements and operational insights where they’re needed most, please join me in welcoming Phil McRell, General Manager of the IoT Consortia at PTC. Welcome, Phil.

Phil McRell: Great to be here. Thanks, Dana.

Gardner: We're also here with Gavin Hill, IoT Marketing Engineer for Northern Europe at National Instruments (NI) in London. Welcome, Gavin.

Gavin Hill: Hi, Dana. Thanks very much for inviting us.

Gardner: Good to have you with us. Olivier Frank also joins us. He is Senior Director of Worldwide Business Development and Sales for Edgeline IoT Systems at Hewlett Packard Enterprise (HPE). Welcome, Olivier.

Olivier Frank: Thank you, Dana. Great to be here with you.

Gardner: Gentlemen, let's start at a high level. What's driving this need for a different approach to computing when we think about IoT and we think about the “edge” of organizations? Why is this becoming such a hot issue?

McRell: There are several drivers, but the most interesting one is economics. In the past, the costs that would have been required to take an operational site -- a mine, a refinery, or a factory -- and do serious predictive analysis, meant you would have to spend more money than you would get back.

For very high-value assets -- assets that are millions or tens of millions of dollars -- you probably do have some systems in place in these facilities. But once you get a little bit lower in the asset class, there really isn’t a return on investment (ROI) available. What we're seeing now is that's all changing based on the type of technology available.

Gardner: So, in essence, we have this whole untapped tier of technologies that we haven't been able to get a machine-to-machine (M2M) benefit from for gathering information -- or the next stage, which is analyzing that information. How big an opportunity is this? Is this a step change, or is this a minor incremental change? Why is this economically a big deal, Olivier?
Frank: We're talking about Industry 4.0, the fourth generation of change -- after steam, after the Internet, after the cloud, and now this application of IoT to the industrial world. It’s changing at multiple levels. It’s what's happening within the factories and within this ecosystem of suppliers to the manufacturers, and the interaction with consumers of those suppliers and customers. There's connectivity to those different parties that we can then put together.

While our customers have been doing process automation for 40 years, what we're doing together is unleashing IT standardization -- taking technologies that were in the data centers and applying them to the world of process automation, opening it up.

The analogy is what happened when mainframes were challenged by mini computers and then by PCs. It's now open architecture in a world that has been closed.

Gardner: Phil mentioned ROI, Gavin. What is it about the technology price points and capabilities that have come down to the point where it makes sense now to go down to this lower tier of devices and start gathering information?


Hill: There are two pieces to that. The first is that understanding more about the IoT world is proving more valuable than we thought. The McKinsey Global Institute did a study saying that by about 2025, IoT in the factory space is going to be worth somewhere between $1.2 trillion and $3.7 trillion. That says a lot.

The second piece is that we're at a stage where we can make technology at a much lower price point. We can put that onto the assets that we have in these industrial environments quite cheaply.

Then, you deal with the real big value, the data. All three of us are quite good at getting the value from our own respective areas of expertise.

Look at someone we've worked with, Jaguar Land Rover. In their production sites, in their power-train facilities, they were creating an awful lot of data but not doing anything with it -- about 90 percent of their data wasn't being used for anything. It doesn't matter how many sensors you put on something; if you can't do anything with the data, it's completely useless.

They have been using techniques similar to what we've been doing in our collaborative efforts to gain insight from that data. Now, they're at a stage where probably 90 percent of their data is usable, and that's the big change.

Collaboration is key

Gardner: Let's learn more about your organizations and how you're working collaboratively, as you mentioned, before we get back into understanding how to go about architecting properly for IoT benefits. Phil, tell us about PTC. I understand you won an award in Barcelona recently.

McRell: That was a collaboration that our three organizations did with a pump and valve manufacturer, Flowserve. As Gavin was explaining, there was a lot of learning that had to be done upfront about what kind of sensors you need and what kind of signals you need off those sensors to come up with accurate predictions.

When we collaborate, we rely heavily on NI for their scientists and engineers to provide their expertise. We really need to consume digital data. We can't do anything with analog signals and we don't have the expertise to understand what kind of signals we need. When we obtain that, then with HPE, we can economically crunch that data, provide those predictions, and provide that optimization, because of HPE's hardware that now can live happily in those production environments.

Gardner: Tell us about PTC specifically; what does your organization do?

McRell: For IoT, we have a complete end-to-end platform that allows everything from the data acquisition gateway with NI all the way up to machine learning, augmented reality, dashboards, and mashups, any sort of interface that might be needed for people or other systems to interact.

In an operational setting, there may be one, two, or dozens of different sources of information. You may have information coming from the programmable logic controllers (PLCs) in a factory and you may have things coming from a Manufacturing Execution System (MES) or an Enterprise Resource Planning (ERP) system. There are all kinds of possible sources. We take that, orchestrate the logic, and then we make that available for human decision-making or to feed into another system.

Gardner: So the applications that PTC is developing are relying upon platforms and the extension of the data center down to the edge. Olivier, tell us about Edgeline and how that fits into this?
Frank: We came up with this idea of leveraging the enterprise computing excellence that is our DNA within HPE. As our CEO said, we want to be the IT in the IoT.

According to IDC, 40 percent of IoT computing will happen at the edge. Just to clarify, it's not an opposition between the edge and the hybrid IT we have at HPE; it's actually a continuum. You need to bring some of the workloads to the edge. It's this notion of time-to-insight and time-to-action: the closer you are to what you're measuring, the more real-time you are.

We came up with this idea: What if we could bring the depth of computing we have in the data center into this sub-second environment, where I need to read the intelligent data created by my two partners here, but also actuate on it and do things with it?

Take the example of an electrical short circuit that for some reason caught fire. You don’t want to send the data to the cloud; you want to take immediate action. This is the notion of real-time, immediate action.

We take the deep compute and we integrate the connectivity with NI. We're the first platform to integrate an industry standard called PXI, which allows NI to bring its great portfolio of sensors, data acquisition, and analog-to-digital conversion technologies into our systems.

Finally, we bring enterprise manageability. Since we have a proliferation of systems, system management at the edge becomes a problem. So we bring our award-winning Integrated Lights-Out (iLO) technology -- with millions of licenses sold in our ProLiant servers -- to the edge as well.

Gardner: We have the computing depth from HPE, we have insightful analytics and applications from PTC, what does NI bring to the table? Describe the company for us, Gavin?

Working smarter

Hill: NI is about a $1.2 billion company worldwide, and we're involved in an awful lot of industries. But in the IoT space, where we see ourselves fitting within this collaboration with PTC and HPE is in our ability to make a lot of machines smarter.

There are already some sensors on assets -- machines, pumps, whatever they may be on the factory floor -- but for older, or even some newer, devices, there are not natively all the sensors that you need to make really good decisions based on that data. To feed into the PTC and HPE systems, you need the right type of data to start with.

We have the data acquisition and control units that allow us to take that data in and then do something smart with it. Using something like our CompactRIO System, or, as you described, the PXI platform with the Edgeline products, we can add a certain level of understanding and intelligence to these potentially dumb devices. It allows us not only to take in signals, but also potentially to control the systems as well.

We not only have some great information from PTC that lets us know when something is going to fail, but we could potentially use their data and their information to allow us to, let’s say, decide to run a pump at half load for a little bit longer. That means that we could get a maintenance engineer out to an oil rig in an appropriate time to fix it before it runs to failure. We have the ability to control as well as to read in.

The other piece is that sensor data is great -- we like to be as open as possible in taking data from any sensor vendor -- but you want to be able to find the needle in the haystack. We do feature extraction to make sure that we give the important pieces of digital data back to PTC, so they can be processed by the HPE Edgeline system as well.

Frank: This is fundamental. Capturing the right data is an art and a science and that’s really what NI brings, because you don’t want to capture noise; it’s proliferation of data. That’s a unique expertise that we're very glad to integrate in the partnership.

Gardner: We certainly understand the big benefit of IoT extending what people have done with operational efficiency over the years. We now know that we have the technical capabilities to do this at an acceptable price point. But what are the obstacles, what are the challenges that organizations still have in creating a true data-driven edge, an IoT rich environment, Phil?

Economic expertise

McRell: That's why we're together in this consortium. The biggest obstacle is that, because there are so many different requirements for different types of technology and expertise, people can become overwhelmed. They'll spend months or years trying to figure this out. We come to the table with end-to-end capability -- sensors, strategy, and everything in between -- pre-integrated at an economical price point.

Speed is important. Many of these organizations are seeing the future, where they have to be fast enough to change their business model. For instance, some OEM discrete manufacturers are going to have to move pretty quickly from just offering product to offering service. If somebody is charging $50 million for capital equipment, and their competitor is charging $10 million a year and the service level is actually better because they are much smarter about what those assets are doing, the $50 million guy is going to go out of business.

We come to the table with the ability to quickly get that factory and those assets smart and connected, and to make sure the right people, parts, and processes are brought to bear at exactly the right time. That drives all the things people are looking for -- the up-time, the safety, the yield, and the performance of that facility. It comes down to this: if you don't have all the right parties together with that technology and expertise, you can very easily get stuck on something that takes a very long time to unravel.

Gardner: That’s very interesting when you move from a Capital Expenditure (CAPEX) to an Operational Expenditure (OPEX) mentality. Every little bit of that margin goes to your bottom line and therefore you're highly incentivized to look for whole new categories of ways to improve process efficiency.

Any other hurdles, Olivier, that you're trying to combat effectively with the consortium?

Frank: The biggest hurdle is the level of complexity; our customers don't know where to start. The promise of us working together is to show the value of this kind of open architecture injected into a 40-year-old process automation infrastructure -- to demonstrate, as we did yesterday with our robot powered by HPE Edgeline, that we can show immediate value to the plant manager, the quality manager, and the operations manager using the data that already resides in that factory, 70 percent or more of which is unused. That's the value.

So how do you get that quickly and simply? That’s what we're working to solve so that our customers can enjoy the benefit of the technology faster and faster.

Bridge between OT and IT

Gardner: Now, this is a technology implementation, but it’s done in a category of the organization that might not think of IT in the same way as the business side -- back office applications and data processing. Is the challenge for many organizations a cultural one, where the IT organization doesn't necessarily know and understand this operational efficiency equation and vice versa, and how are we bridging that?

Hill: I'm probably going to give you the high-level view from the operational technology (OT) side. These guys will definitely have more input from their own domains of expertise, but the fact that each of us knows our own part really well is exactly why this collaboration works.

You have situations with the IoT where a lot of people stood up and said, "Yeah, I can provide a solution. I have the answer," but without having a plan -- never mind a solution. We've done a really good job of understanding that we can do one part of this solution really well, and if we partner with the people who are really good in the other aspects, we provide real solutions to customers. I don't think anyone can compete with us at this stage, and that is exactly why we're in this situation.

Frank: Actually, the biggest hurdle is more on the OT side, not really relying on the IT of the company. For many of our customers, the factory's a silo. At HPE, we haven't been selling too much to that environment. That’s also why, when working as a consortium, it’s important to get to the right audience, which is in the factory. We also bring our IT expertise, especially in the areas of security, because at the moment, when you put an IT device in an OT environment, you potentially have problems that you didn’t have before.

We're living in a closed world, and now the value is to open up. Bringing our security expertise, our managed service, our services competencies to that problem is very important.

Speed and safety out in the open

Hill: There was a really interesting piece in the HPE Discover keynote in December, when HPE Aruba started to talk about how they had an issue when they started bringing conferencing and technology out, and then suddenly everything wanted to be wireless. They said, "Oh, there's a bit of a security issue here now, isn’t there? Everything is out there."

We can see what HPE has contributed to helping them from that side. What we're talking about here on the OT side is a similar state from the security aspect, just a little bit further along in the timeline, and we are trying to work on that as well. Again, we have HPE here and they have a lot of experience in similar transformations.

Frank: At HPE, as you know, we have our Data Center and Hybrid Cloud Group and then we have our Aruba Group. When we do OT or our Industrial IoT, we bring the combination of those skills.

For example, in security, we have HPE Aruba ClearPass technology that’s going to secure the industrial equipment back to the network and then bring in wireless, which will enable the augmented-reality use cases that we showed onstage yesterday. It’s a phased approach, but you see the power of bringing ubiquitous connectivity into the factory, which is a challenge in itself, and then securely connecting the IT systems to this OT equipment, and you understand better the kind of the phases and the challenges of bringing the technology to life for our customers.

McRell: It’s important to think about some of these operational environments. Imagine a refinery the size of a small city and having to make sure that you have the right kind of wireless signal that’s going to make it through all that piping and all those fluids, and everything is going to work properly. There's a lot of expertise, a lot of technology, that we rely on from HPE to make that possible. That’s just one slice of that stack where you can really get gummed up if you don’t have all the right capabilities at the table right from the beginning. 

Gardner: We've also put this in the context of IoT not isolated at the edge, but as part of hybrid computing, taking advantage of what the cloud can offer. It seems to me that there's also a new role here for a constituency to be brought to the table, and that's the data scientists in the organization -- a new trove of data, an elevated abstraction of analytics. How is that progressing? Are we seeing the beginnings of taking IoT data and integrating it, joining it, and analyzing it in the context of data from other parts of the company, or even external datasets?

McRell: There are a couple of levels. It’s important to understand that when we talk about the economics, one of the things that has changed quite a bit is that you can actually go in, get assets connected, and do what we call anomaly detection, pretty simplistic machine learning, but nonetheless, it’s a machine-learning capability.

In some cases, we can get that going in hours. That's a ground-zero type capability. Over time, you learn about a line with multiple assets and how they all function together, then how the entire facility functions, and then you compare that across multiple facilities. At some point, you're not going to be at the edge anymore; you're going to be doing systems-type analytics, and that's a different, combined kind of analysis.

At that point, you're talking about looking across weeks, months, years. You're going to go into a lot of your back-end and maybe some of your IT systems to do some of that analysis. There's a spectrum that goes back down to the original idea of simply looking for something to go wrong on a particular asset.

The distinction I'm making here is that, in the past, you would have to get a team of data scientists to figure out almost asset by asset how to create the models and iterate on that. That's a lengthy process in and of itself. Today, at that ground-zero level, that’s essentially automated. You don't need a data scientist to get that set up. At some point, as you go across many different systems and long spaces of time, you're going to pull in additional sources and you will get data scientists involved to do some pretty in-depth stuff, but you actually can get started fairly quickly without that work.
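As a concrete sketch of the ground-zero capability McRell describes, here is a minimal nominal-band detector in Python. It is illustrative only -- not PTC's or ThingWorx's actual algorithm: learn a baseline from an asset's early readings, then flag anything outside a three-sigma band.

```python
import statistics

class AnomalyDetector:
    """Learn an asset's nominal operating band from its first hours
    of readings, then flag departures from that band."""

    def __init__(self, n_sigma=3.0):
        self.n_sigma = n_sigma
        self.baseline = []
        self.mean = self.std = None

    def learn(self, reading):
        # Collect readings during the initial learning window
        self.baseline.append(reading)

    def finish_learning(self):
        # Fix the nominal band once enough baseline data is in
        self.mean = statistics.mean(self.baseline)
        self.std = statistics.stdev(self.baseline)

    def is_anomaly(self, reading):
        return abs(reading - self.mean) > self.n_sigma * self.std

det = AnomalyDetector()
for r in [10.0, 10.2, 9.9, 10.1, 9.8, 10.0, 10.3, 9.7]:   # e.g. pump pressure
    det.learn(r)
det.finish_learning()
```

With that baseline, a reading of 15.0 is flagged while 10.1 is not -- no data scientist required for this first level, which matches the point being made above.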

The power of partnership

Frank: To echo what Phil just said, in HPE we're talking about the tri-hybrid architecture -- the edge, so let’s say close to the things; the data center; and then the cloud, which would be a data center that you don’t know where it is. It's kind of these three dimensions.

The great thing partnering with PTC is that the ThingWorx platform, the same platform, can run in any of those three locations. That’s the beauty of our HPE Edgeline architecture. You don't need to modify anything. The same thing works, whether we're in the cloud, in the data center, or on the Edgeline.

To your point about the data scientists, it's time-to-insight. There are things you want to do immediately, and as Phil pointed out, the notion of anomaly detection that we're demonstrating on the show floor is understanding those nominal parameters after a few hours of running your thing, and simply detecting something going off normal. That doesn't require data scientists. That takes us into the ThingWorx platform.

But then, to the industrial processes, we're involving systems integration partners and using our own knowledge to bring to the mix along with our customers, because they own the intelligence of their data. That’s where it creates a very powerful solution.

Gardner: I suppose another benefit that the IT organization can bring to this is process automation and extension. If you're able to understand what's going on in the device, not only would you need to think about how to fix that device at the right time -- not too soon, not too late -- but you might want to look into the inventory of the part, or you might want to extend it to the supply chain if that inventory is missing, or you might want to analyze the correct way to get that part at the lowest price or under the RFP process. Are we starting to also see IT as a systems integrator or in a process integrator role so that the efficiency can extend deeply into the entire business process?

McRell: It's interesting to see how this stuff plays out. Once you start to understand in your facility -- or maybe it’s not your facility, maybe you are servicing someone's facility -- what kind of inventory should you have on hand, what should you have globally in a multi-tier, multi-echelon system, it opens up a lot of possibilities.

Today, PTC provides a lot of network visibility and a lot of spare-parts inventory management systems, but there's a limit to what those algorithms can do. They're the best that's possible at this point -- until you have everything connected. That feedback loop allows you to modify all your expectations in real time and get things on the move proactively, so the right person, parts, process, and kit all show up at the right time.

Then, you have augmented reality and other tools, so that maybe somebody hasn't done this service procedure before, maybe they've never seen these parts before, but they have a guided walk-through and have everything showing up all nice and neat the day of, without anybody having to actually figure that out. That's a big set of improvements that can really change the economics of how these facilities run.

Connecting the data

Gardner: Any other thoughts on process integration?

Frank: Again, the premise behind industrial IoT is indeed, as you're pointing out, connecting the consumer, the supplier, and the manufacturer. That's also why you see the emergence of low-power communication layers, like LoRa or Sigfox, that can bring these millions of connected devices together and inject them into the systems that we're creating.

Hill: Just from the conversation, I know that we’re all really passionate about this. IoT and the industrial IoT is really just a great topic for us. It's so much bigger than what we're talking about. You've talked a little bit about security, you have asked us about the cloud, you have asked us about the integration of the inventory and to the production side, and it is so much bigger than what we are talking about now.

We probably could have a conversation twice this long on any one of these topics and still never get halfway to the end of it. It's a really exciting place to be right now. And the really interesting thing that I think all of us are now realizing -- the way that we have made advancements as a partnership as well -- is that you don't know what you don't know. A lot of companies are waking up to that, and we're using our collaborations to learn what we don't know.

Frank: Which is why speed is so important. We can theorize and spend a lot of time in R&D, but the reality is, bring those systems to our customers, and we learn new use cases and new ways to make the technology advance.

Hill: The way technology has gone, no one releases a product anymore that's the finished piece and will stay there for 20 or 30 years. That's not what happens. Products and services are provided and constantly updated. How many times a week does your phone update with a new piece of firmware or an app update? You have to be able to change and use the data that you get to adjust everything that's going on. Otherwise you will not stay ahead of the market.

And that’s exactly what Phil described earlier when he was talking about whether you sell a product or a service that goes alongside a set of products. For me, one of the biggest things is that constant innovation -- where we are going. And we've changed. We were in kind of a linear motion of progression. In the last little while, we've seen a huge amount of exponential growth in these areas.

We had a video at the end of the London HPE Discover keynote, where it was one of HPE’s pieces of what the future could be. We looked at it and thought it was quite funny. There was an automated suitcase that would follow you after you left the airport. I started to laugh at that, but then I took a second and I realized that maybe that’s not as ridiculous as it sounds, because we as humans think linearly. That’s incumbent upon us. But if the technology is changing in an exponential way, that means that we physically cannot ignore some of the most ridiculous ideas that are out there, because that’s what’s going to change the industry.

And even by having that video there and by seeing what PTC is doing with the development that they have and what we ourselves are doing in trying out different industries and different applications, we see three companies that are constantly looking through what might happen next and are ready to pounce on that to take advantage of it, each with their own expertise.

Gardner: We're just about out of time, but I'd like to hear a couple of ridiculous examples -- pushing the envelope of what we can do with these sorts of technologies now. We don’t have much time, so less than a minute each, if you can each come up perhaps with one example, named or unnamed, that might have seemed ridiculous at the time, but in hindsight has proven to be quite beneficial and been productive. Phil?

McRell: You can do this as engineering with us, you can do this in service, but we've been talking a lot about manufacturing. In a manufacturing journey, the opportunity, as Gavin and Olivier are describing here, is at the level of what happened between pre- and post-electricity. How fast things will run, the quality at which they will produce products, and then therefore the business model that now you can have because of that capability. These are profound changes. You will see up-times in some of the largest factories in the world go up double digits. You will see lines run multiple times faster over time.

These are things that, if you walked in today and then walked in again in a couple of years to some of the facilities that run the hardest, it would be really hard to believe what your eyes are seeing, just as somebody who was around before factories had electricity would be astounded by what they see today.

Back to the Future

Gardner: One of the biggest issues at the most macro level in economics is the fact that productivity has plateaued for the past 10 or 15 years. People want to get back to what productivity was -- 3 or 4 percent a year. This sounds like it might be a big part of getting there. Olivier, an example?

Frank: Well, an example would be more about the impact on mankind and wealth for humanity. Think about it: with those technologies combined with 3D printing, you can have a new class of manufacturers anywhere in the world -- in Africa, for example -- using real-time engineering, some of the concepts that we're demonstrating today.

Another part of PTC is Computer-Aided Design (CAD) systems and Product Lifecycle Management (PLM), and we're showing real-time engineering on the floor again. You design those products and do quick prototyping with your 3D printing. That could be anywhere in the world. And you have your users testing the real thing, understanding whether your engineering choices were relevant and whether there are differences between the digital model and the physical model -- this digital-twin idea.

Then, you're back to the drawing board. So, a new class of manufacturers that we don't even know yet, serving customers across the world and creating wealth in areas that are not yet industrialized.

Gardner: It's interesting that if you have a 3D printer you might not need to worry about inventory or supply chain.

Hill: Just to add on that one point, the bit that really, really excites me about where we are with technology, as a whole, not even just within the collaboration, you have 3D printing, you have the availability of open software. We all provide very software-centric products, stuff that you can adjust yourself, and that is the way of the future.

That means that among the changes that we see in the manufacturing industry, the next great idea could come from someone who has been in the production plant for 20 years, or it could come from Phil who works in the bank down the road, because at a really good price point, he has the access to that technology, and that is one of the coolest things that I can think about right now.

For where we've seen this sort of development make a massive difference, look at someone like Duke Energy in the US. We worked with them before we realized the full extent of our capabilities, never mind how we could implement a great solution with PTC and HPE. Even there, based on our own technology, the people on the power-production side decided to try this sort of application on some legacy equipment -- predictive maintenance, to be able to see what's going on in their assets, which are spread across the continent.

They began this at the start of 2013, and they have seen estimated savings of $50 million up to this point. That's a number.

Gardner: That is a big number, yes. I'm afraid we'll have to leave it there. We've been exploring the rapidly evolving architectural shift of moving advanced IT capabilities to the edge to support Internet of Things requirements for operational integrity benefits.

And we've learned how converged systems and high-performance data analysis platforms are bringing in essence the data center to the operational technology edge, and we've heard about some great results from that evolution.

Please join me in thanking our panelists, Phil McRell, General Manager of IoT Consortium at PTC; Gavin Hill, IoT Marketing Engineer for Northern Europe at NI in London, and Olivier Frank, Senior Director of Worldwide Business Development and Sales for Edgeline IoT Systems at HPE.

And thanks as well to our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer Digital Transformation Discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored interviews. Thanks again for listening, and please come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on the rapidly evolving architectural shift of moving advanced IT capabilities to the edge to support Internet of Things requirements for operational integrity benefits. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.

You may also be interested in:

Thursday, February 23, 2017

IDOL-Powered Appliance Delivers Better Decisions Via Comprehensive Business Information Searches

Transcript of a discussion on how HPE's platform and data solutions have been combined by SEC 1.01 for an appliance approach to index and deliver comprehensive business information results.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation. Stay with us now to learn how agile businesses are fending off disruption in favor of innovation.

Gardner
Our next case study highlights how a Swiss engineering firm created an appliance that quickly deploys to index and deliver comprehensive business information. It works across thousands of formats and hundreds of languages and then provides, via a simple search interface, unprecedented access to trends, leads, and the makings of highly informed business decisions.

We will now explore how SEC 1.01 AG delivers a truly intelligent services solution -- one that returns new information to ongoing queries and combines internal and external information on all sorts of resources to produce a 360-degree view of end users’ areas of intense interest.

Join us as we learn how finding and using the best available information can be done in about half the usual time. We're here with our guest David Meyer, Chief Technology Officer at SEC 1.01 AG in Switzerland.
 
Welcome, David.

David Meyer: Thank you.

Meyer
Gardner: What are some of the trends that are driving the need for what you've developed? It's called the i5 appliance.

Meyer: The most important thing is that we can provide instant access to company-relevant information. This is one of today’s biggest challenges that we address with our i5 appliance.

Decisions are only as good as the information bases they are made on. The i5 provides the ability to access more complete information bases to make substantiated decisions. Also, you don’t want to search all the time; you want to be proactively informed. We do that with our agents and our automated programs that are searching for new information that you're interested in.

Gardner: As an organization, you've been around for quite a while and involved with large packaged applications -- SAP R/3, for example -- but over time, more data sources and more ability to gather information came on board, and you saw a need in the market for this appliance. Tell us a little bit about what led you to create it.

Accelerating the journey

Meyer: We started to dive into big data about the time that HPE acquired Autonomy, December 2011, and we saw that it’s very hard for companies to start to become a data-driven organization. With the i5 appliance, we would like to help companies accelerate their journey to become such a company.

Gardner: Tell us what you mean by a 360-degree view? What does that really mean in terms of getting the right information to the right people at the right time?

Meyer: In a company's information scope, you don't just talk about internal information; you also have external information like news feeds, social media feeds, or even governmental or legal information that you need and don't have time to search for every day.

So, you need to have a search appliance that can proactively inform you about things that happen outside. For example, if there's a legal issue with your customer or if you're in a contract discussion and your partner loses his signature authority to sign that contract, how would you get this information if you don't have support from your search engine?
Gardner: And search has become such a popular paradigm for acquiring information -- asking a question and getting great results. But those results are only as good as the data and content they can access. Tell us a little bit about your company, SEC 1.01 AG -- your size, your scope, and your market. Give us a little background.

Meyer: We've been an HPE partner for 26 years, and we build business-critical platforms based on HPE hardware and also the HPE operating system, HP-UX. Since the merger of Autonomy and HPE in 2011, we started to build solutions based on HPE's big-data software, particularly IDOL and Vertica.

Gardner: What was it about the environment that prevented people from doing this on their own? Why wouldn't you go and just do this yourself in your own IT shop?

Meyer: The HPE IDOL software ecosystem is really an ecosystem of different software components, and these parts need to be packaged together into something that can be installed very quickly and provide quick results. That's what we did with the i5 appliance.

We put all this good HPE IDOL software together into one simple appliance that is simple to install. We want to shorten the time needed to get started with big data, to get results from it, and to begin the analytical part of using your data and earning money from it.

Multiple formats

Gardner: As we mentioned earlier, getting the best access to the best data is essential. There are a lot of APIs and a lot of tools that come with the IDOL ecosystem, as you described it, but you were able to dive into a thousand or more file formats and support 150 languages and 400 data sources. That's very impressive. Tell us how that came about.

Meyer: When you start to work with unstructured data, you need some important functionality. For example, you need support for a lot of languages. Imagine all these social media feeds in different languages. How do you track them if you don't support sentiment analysis on these messages?

On the other hand, you also need to understand any unstructured format. For example, if you have video broadcasts or radio broadcasts and you want to search for the content inside these broadcasts, you need to have a tool to translate the speech to text. HPE IDOL brings all the functionality that is needed to work with unstructured data, and we packed that together in our i5 appliance.

Gardner: That includes digging into PDFs and using OCR. It's quite impressive how deep and comprehensive you can be in terms of all the types of content within your organization.
How do you physically do this? If it's an appliance, you're installing it on-premises, you're able to access data sources from outside your organization, if you choose to do that, but how do you actually implement this and then get at those data sources internally? How would an IT person think about deploying this?

Meyer: We've prepared installable packages. Mainly, you need connectors to connect to repositories -- to data sources. For example, if you have a Microsoft Exchange server, you have a connector that understands very well how to communicate with that Exchange server. So you have the ability to connect to that data source and get any content, including the metadata.

Take the metadata of an e-mail, for example: the "From," "To," and "Subject" fields, and so on. You have the ability to put all that content and metadata into a centralized index, and then you're able to search and refine that information. Then, you have a reference back to your original document.

When you want to enrich the information you have in your company with external information, we developed a so-called SECWebConnector that can capture any information from the Internet. For example, you just enter an RSS feed or a webpage, and then you can capture the content and the metadata that you want to be able to search for, or that is important for your company.
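A toy version of what such a web connector might do can be sketched with Python's standard XML tooling. The feed and field names below are hypothetical; the point is the split Meyer describes between searchable content and its metadata, ready to be pushed into a central index.

```python
import xml.etree.ElementTree as ET

RSS = """<rss version="2.0"><channel>
  <title>Court Rulings</title>
  <item>
    <title>Signature authority revoked</title>
    <link>https://example.com/ruling-42</link>
    <pubDate>Wed, 22 Mar 2017 09:00:00 GMT</pubDate>
  </item>
</channel></rss>"""

def capture_feed(rss_xml):
    """Split each RSS item into searchable content plus metadata,
    shaped like a record a central index could ingest."""
    records = []
    for item in ET.fromstring(rss_xml).iter("item"):
        records.append({
            "content": item.findtext("title"),
            "metadata": {
                "source": item.findtext("link"),
                "published": item.findtext("pubDate"),
            },
        })
    return records

records = capture_feed(RSS)
```

Once indexed this way, the external item is searchable alongside internal documents, with its origin and date preserved as metadata.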

Gardner: So, it’s actually quite easy to tailor this specifically to an industry focus, if you wish, to a geographic focus. It’s quite easy to develop an index that’s specific to your organization, your needs, and your people.

Informational scope

Meyer: Exactly. In the crowded informational landscape we have with the Internet and everything else, it's important that companies can choose the information that matters to them. Do I need legal information? News? Social media? Broadcast information? It's very important to build your own informational scope -- what you want to be informed about and what you want to be able to search for.

Gardner: And because of the way you structured and engineered this appliance, you're not only able to proactively go out and request things, but you can have a programmatic benefit, where you can tell it to deliver to you results when they arise or when they're discovered. Tell us a little bit how that works.

Meyer: We call them agents. You can define which topics you're interested in, and when some new documents are found by that search or by that topic, then you get informed, with an email or with a push notification on the mobile app.
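The agent behavior Meyer describes -- a saved topic that is re-run and pushes only new hits -- can be sketched like this. The search and notification interfaces here are hypothetical stand-ins, not IDOL's actual API.

```python
class SearchAgent:
    """A saved topic query that is polled periodically and
    notifies only about documents not seen before."""

    def __init__(self, topic, search_fn, notify_fn):
        self.topic = topic
        self.search = search_fn    # runs the topic query against an index
        self.notify = notify_fn    # e.g. send an email or push notification
        self.seen = set()

    def poll(self):
        new_docs = [d for d in self.search(self.topic)
                    if d["id"] not in self.seen]
        for doc in new_docs:
            self.seen.add(doc["id"])
            self.notify(self.topic, doc)
        return new_docs

index = [{"id": 1, "title": "Contract ruling"}]
alerts = []
agent = SearchAgent("legal",
                    lambda topic: index,
                    lambda topic, doc: alerts.append(doc["title"]))
agent.poll()          # first poll: one new document is reported
index.append({"id": 2, "title": "New signature authority"})
agent.poll()          # second poll: only the newly indexed document fires
```

The design choice is that state lives in the agent, not the index: the same query can run forever, and the user only hears about genuinely new results.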

Gardner: Let’s dig into a little bit of this concept of an appliance. You're using IDOL and you're using Vertica, the column-based or high-performance analytics engine, also part of HPE, but soon to be part of Micro Focus. You're also using 3PAR StoreServ and ProLiant DL380 servers. Tell us how that integration happened and why you actually call this an appliance, rather than some other name?

Meyer: Appliance means that all the software is patched together. Every component can talk to the others, talks the same language, and can be configured the same way. We preconfigure a lot, we standardize a lot, and that’s the appliance thing.

And it's not bound to that hardware, so it doesn't need to be this DL380 or whatever. It also depends on how big your environment will be. It can also be a c7000 blade chassis or whatever.

When we install an appliance, we have one or two days until it’s installed, and then it starts the initial indexing program, and this takes a while until you have all the data in the index. So, the initial load is big, but after two or three days, you're able to search for information.

You mentioned the HPE Vertica part. We use Vertica to log every action that happens on the appliance. On one hand, this is a security feature: you need to be able to prove that nobody has found the salary list, for example, and to prove that, you need to log it.

On the other hand, you can analyze what users are doing. For example, if they don’t find something and it’s always the same thing that people are searching in the company and can't find, perhaps there's some information you need to implement into the appliance.
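That second use of the log -- spotting searches that repeatedly come up empty -- might look like this in miniature. In the i5 the log lives in Vertica and would be queried with SQL; here it is a plain Python list, purely for illustration.

```python
from collections import Counter

def frequent_misses(query_log, top=3):
    """Find the queries that most often return zero hits -- a signal
    that a data source may be missing from the index."""
    misses = Counter(entry["query"] for entry in query_log
                     if entry["hits"] == 0)
    return misses.most_common(top)

log = [
    {"query": "iso 9001 audit", "hits": 0},
    {"query": "travel policy", "hits": 12},
    {"query": "iso 9001 audit", "hits": 0},
]
top_misses = frequent_misses(log)
```

Here "iso 9001 audit" surfaces as a repeated miss, which is exactly the cue Meyer mentions for adding a new source to the appliance.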

Gardner: You mentioned security and privileges. How does the IT organization allow the right people to access the right information? Are you going to use some other policy engine? How does that work?

Mapped security

Meyer: It's included. It's called mapped security. The connector captures the security information along with the document and indexes that security information within the index. So you will never be able to find a document that you don't have access to in your environment. It's important that this security is enforced by default.
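The principle of mapped security, storing each document's access-control information in the index and filtering results against the searching user's entitlements, can be illustrated with a small sketch. The field names and group model here are assumptions, not the real index schema:

```python
# Illustrative mapped-security filter: every indexed document carries the
# ACL captured by the connector, and a result is returned only when the
# user's groups intersect the document's allowed groups.

def search(index, query, user_groups):
    """Return only matching documents the user is entitled to see."""
    results = []
    for doc in index:
        if query.lower() in doc["text"].lower() and \
           user_groups & set(doc["allowed_groups"]):
            results.append(doc["id"])
    return results

index = [
    {"id": "salaries.xlsx", "text": "salary list", "allowed_groups": ["hr"]},
    {"id": "handbook.pdf",  "text": "salary bands overview",
     "allowed_groups": ["hr", "staff"]},
]
print(search(index, "salary", {"staff"}))  # ['handbook.pdf']
```

The key design point Meyer makes is that this check happens inside the engine by default, so an unauthorized document never even appears in a result list.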

Gardner: It sounds to me, David, like we're, in a sense, democratizing big data. By gathering and indexing all the unstructured data that you can possibly want to point at and connect to, you're allowing anybody in a company to get access to queries without having to go through a data scientist or a SQL query author. It seems to me that you're really opening up the power of data analysis to many more people on their terms, which are basic search queries. What does that get an organization? Do you have any examples of the ways that people are benefiting by this democratization, this larger pool of people able to use these very powerful tools?

Meyer: Everything is becoming more data-driven. The i5 appliance can give you access to all of that information. The appliance is here to simplify the beginning of becoming a data-driven organization and to discover what power is in the organization's data.
For example, we enabled a Swiss company called Smartinfo to become a proactive news provider. That means they put lots of public information (newspapers, online newspapers, TV broadcasts, radio broadcasts) into the index. Customers can then define the topics they're interested in and are proactively informed about new articles on those interests.

Gardner: In what other ways do you think this will become popular? I'm guessing that a marketing organization would really benefit from finding relationships within their internal organization, between product and service, go-to market, and research and development. The parts of a large distributed organization don't always know what the other part is doing, the unknown unknowns, if you will. Any other examples of how this is a business benefit?

Meyer: You mentioned the marketing organization. How could a marketing organization listen to what customers are saying? On social media, for example, they're communicating all the time, and when you have an engine like i5, you can capture those social media feeds, do sentiment analysis on them, and see an analyzed view of what's being said about your products, company, or competitors.

You can detect, for example, a firestorm of criticism building around your company, or around your competitor. You need an analytics platform to see that and to visualize it, and that's a big benefit.

On the other hand, there's also the proactive information you get from it. You can see that your competitor has a new campaign, and you get that information right away because you have an agent with the competitor's name. You can see that something is happening and act on that information.
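The sentiment-analysis idea Meyer describes can be sketched as a toy lexicon-based scan over captured posts. The word lists and scoring here are invented for illustration; the appliance's actual sentiment analysis via the IDOL engine is far richer than this:

```python
# Toy sentiment scan: count positive vs. negative words across social posts
# to spot a brewing backlash about a product or company.

NEGATIVE = {"broken", "terrible", "refund", "angry"}
POSITIVE = {"great", "love", "works"}

def sentiment_summary(posts):
    """Aggregate positive and negative mentions across a feed."""
    pos = neg = 0
    for post in posts:
        words = set(post.lower().split())
        pos += len(words & POSITIVE)
        neg += len(words & NEGATIVE)
    return {"positive": pos, "negative": neg}

posts = ["This product is broken and I want a refund",
         "Love the new release, it works great"]
print(sentiment_summary(posts))  # {'positive': 3, 'negative': 2}
```

A real deployment would track these counts over time, so a sudden spike in the negative series is what surfaces as the "firestorm" signal.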

Gardner: When you think about future capabilities, are there other aspects that you can add on? It seems extensible to me. What would we be talking about a year from now, for example?

Very extensible

Meyer: It's very extensible. Think about all the different verticals: you can expand it for the health sector, the transportation sector, and so on. It doesn't really matter which.

We also do network analysis. That means when you prepare to visit a company, you can get a network picture: what relationships the company has, which employees work there, who its shareholders are, and which companies it has contracts with.

This is a new way to get a holistic image of a company, a person, or anything else you want to know about. It's about how to visualize things, how to visualize information, and that's the main part we're focusing on: how can we bring new visualizations to the customer?
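The network picture Meyer describes amounts to a graph of typed relationships between entities. A minimal sketch, with entity names invented purely for illustration:

```python
# Hypothetical company-relationship network: a simple undirected graph of
# typed edges (employs, contracts_with, shareholder_of, ...).

from collections import defaultdict

def build_network(relations):
    """Map each entity to its (relationship, other-entity) links."""
    graph = defaultdict(list)
    for a, rel, b in relations:
        graph[a].append((rel, b))
        graph[b].append((rel, a))
    return graph

relations = [
    ("Example AG", "employs", "J. Doe"),
    ("Example AG", "contracts_with", "Partner GmbH"),
]
net = build_network(relations)
print(net["Example AG"])
# [('employs', 'J. Doe'), ('contracts_with', 'Partner GmbH')]
```

The visualization layer the speakers emphasize would then render this graph, so a visitor can see a company's employees, shareholders, and contract partners at a glance.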

Gardner: In the marketplace, because it's an ecosystem, we're seeing new APIs coming online all the time. Many of them are very low cost and, in many cases, open source or free. We're also seeing the ability to connect more adequately to LinkedIn and Salesforce, if you have your license for that of course. So, this really seems to me a focal point, a single pane of glass to get a single view of a customer, a market, or a competitor, and at the same time, at an affordable price.

Let's focus on that for a moment. When you have an appliance approach, what we're talking about used to be only possible at very high cost, and many people would need to be involved -- labor, resources, customization. Now, we've eliminated a lot of the labor, a lot of the customization, and the component costs have come down.
We've talked about all the great qualitative benefits, but can we talk about the cost differential between what used to be possible five years ago with data analysis, unstructured data gathering, and indexing, and what you can do now with the i5?

Meyer: You mentioned the price. We have an OEM contract, and that's something that makes us competitive in the market. Companies can build their own intelligence service. It's affordable even for small and medium businesses; it doesn't need to be a huge company with its own engineering and IT staff. It's affordable, it's automated, it's packaged together, and it's simple to install.

Companies can increase workplace performance and shorten their processes. Everybody has access to all the information they need in their daily work, and they can focus more on their core business. They don't lose time searching for information and not finding it.

Gardner: For those folks who have been listening or reading, are intrigued by this, and want to learn more, where would you point them? How can they get more information on the i5 appliance and some of the concepts we have been discussing?

Meyer: That's our company website, sec101.ch. There you can find any information you would like to have.

Gardner: And this is available now.

Meyer: This is available now.

Gardner: Well, great, I'm afraid we will have to leave it there. We have been exploring how SEC 1.01 AG delivers a true intelligence-services solution, one that returns new information from ongoing queries and combines internal and external information from all sorts of sources to produce a 360-degree view of any interests a user chooses.

We've learned how HPE's platform and data solutions have also been uniquely combined by SEC 1.01 for an appliance approach that quickly deploys to index and deliver these comprehensive business information results.

Please join me in thanking our guest, David Meyer, Chief Technology Officer at SEC 1.01 AG in Switzerland. Thank you so much, David.

Meyer: Thank you, Dana.

Gardner: And thanks to our audience as well for joining us for this Hewlett Packard Enterprise Voice of the Customer Digital Transformation discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored interviews. Thanks again for listening, and please come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how HPE's platform and data solutions have been combined by SEC 1.01 for an appliance approach to index and deliver comprehensive business information results. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.

You may also be interested in: