Thursday, September 05, 2019

How the Catalyst Program Seeds an Infrastructure Innovation Ecosystem for Next Generations of HPC, AI, and Supercomputing

https://www.hpe.com/us/en/home.html

Transcript of a discussion on how the Catalyst program in the UK is seeding the advancement of the ARM CPU architecture for HPC as well as a vibrant software ecosystem.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on high-performance computing (HPC) trends and innovations.

Our next discussion explores a program to expand a variety of CPUs that support supercomputer and artificial intelligence (AI)-intensive workloads. We will now learn how the Catalyst program in the UK is seeding the advancement of the ARM CPU architecture for HPC as well as establishing a vibrant software ecosystem around it.

Stay with us now as we hear about unlocking new choices and innovation for the next generations of supercomputing. Please join me in welcoming our guests, Dr. Eng Lim Goh, Vice President and Chief Technology Officer for HPC and AI at Hewlett Packard Enterprise (HPE). Welcome, Dr. Goh.

Eng Lim Goh: Hi, Dana. Thank you.

Gardner: We are here also with Professor Mark Parsons, Director of the Edinburgh Parallel Computing Centre (EPCC) at the University of Edinburgh. Welcome, Professor Parsons.

Mark Parsons: Hi, Dana.

Gardner: Mark, why is there a need now for more variety of choice for CPU architectures for such use cases as HPC, AI, and supercomputing?

Parsons: In some ways this discussion is a bit odd, because we have had huge variety in supercomputing processors over the years. It’s really only in the last five to eight years that we’ve ended up with the majority of supercomputers being built on the Intel x86 architecture.

It’s always good in supercomputing to be on the leading edge of technology, and getting more variety in processors is really important. It’s worth seeking out different processor designs that deliver better performance for AI or supercomputing workloads. We want the best type of processor for what we want to do today.

Gardner: What is the Catalyst program? Why did it come about? And how does it help address those issues?

Parsons: The Catalyst UK program is jointly funded by a number of large companies and three universities: The University of Bristol, the University of Leicester, and the University of Edinburgh. It is UK-focused because Arm Holdings is based in the UK, and there is a long history in the UK of exploring new processor technologies.


Through Catalyst, each of the three universities hosts a 4,000-core ARM processor-based system. We are running them as services. At my university, for example, we now have a number of my staff using this system. But we also have external academics using it, and we are gradually opening it up to other users.

Catalyst for change in processors

We want as many people as possible to understand how difficult it will be to port their code to ARM. Or, rather -- as we will explore in this podcast -- how easy it is.

You only learn by breaking stuff, right? And so, we are going to learn which bits of the software tool chain, for example, need some work. [Such porting is necessary] because ARM predominantly sat in the mobile phone world until recently. The supercomputing and AI world is a different space for the ARM processor to be operating in.

Gardner: Eng Lim, why is this program of interest to HPE? How will it help create new opportunity and performance benchmarks for such uses as AI?

Goh: Mark makes a number of very strong points. First and foremost, we are very keen as a company to broaden the reach of HPC among our customers. If you look at our customer base, a large portion of them come from commercial HPC sites -- retailers, banks, and across the financial industry. Letting them reach new types of HPC is important, and a variety of offerings makes it easier for them.

The second thing is the recent reemergence of more AI applications, which also broadens the user base. There is also a need for greater specialization in certain areas of processor capabilities. We believe that in this case the ARM processor -- given the fact that it enables different companies to build innovative variations of the processor -- will provide a rich set of new options in the area of AI.

Gardner: What is it, Mark, about the ARM architecture and specifically the Marvell ThunderX2 ARM processor that is so attractive for these types of AI workloads?

Expanding memory for the future 

Parsons: It’s absolutely the case that all numerical computing -- AI, supercomputing, and desktop technical computing -- is controlled by memory bandwidth. This is about getting data to the processor so the processor core can act on it.

What we see in the ThunderX2 now, as well as in future iterations of this processor, is strong memory bandwidth capability. What people don’t realize is that a vast amount of the time, processor cores are just waiting for data. The faster you get the data to the processor, the more compute you are going to get out of that processor. That’s one particular area where the ARM architecture is very strong.
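For readers who want to see the memory-bound effect concretely, here is a minimal sketch, assuming a Python environment with NumPy; it is not code from the Catalyst systems. A STREAM-style triad performs only two floating-point operations per element but must move at least 24 bytes, so the measured rate tracks memory bandwidth rather than the cores’ peak arithmetic capability.

```python
# A minimal sketch of what "memory bound" means in practice -- not code from the
# Catalyst systems. A STREAM-style triad does only 2 floating-point operations per
# element but must move at least 24 bytes, so its speed tracks memory bandwidth,
# not the cores' peak arithmetic rate.
import time
import numpy as np

N = 20_000_000                  # ~160 MB per float64 array, far larger than cache
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)
scalar = 3.0

start = time.perf_counter()
a[:] = b + scalar * c           # triad: a = b + scalar * c
elapsed = time.perf_counter() - start

bytes_moved = 3 * N * 8         # mandatory traffic: read b, read c, write a
                                # (NumPy temporaries add even more in practice)
print(f"Effective bandwidth (lower bound): {bytes_moved / elapsed / 1e9:.1f} GB/s")
print(f"Arithmetic rate:                   {2 * N / elapsed / 1e9:.2f} GFLOP/s")
```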

Goh: Indeed, memory bandwidth is the key. Not only in supercomputing applications, but especially in machine learning (ML) where the machine is in the early phases of learning, before it does a prediction or makes an inference.
It has to go through the process of learning, and this learning is a highly data-intensive process. The machine has to consume massive amounts of historical data and examples in order to tune itself into a model that can make good predictions. So, memory bandwidth is of utmost importance in the training phase of ML systems.

And related to this is the fact that the ARM processor’s core intellectual property is available to many companies to innovate around. More companies therefore recognize they can leverage that intellectual property and build high-memory bandwidth innovations around it. They can come up with a new processor. Such an ability to allow different companies to innovate is very valuable.

Gardner: Eng Lim, does this fit in with the larger HPE drive toward memory-intensive computing in general? Does the ARM processor fit into a larger HPE strategy?

Goh: Absolutely. The ARM processor together with the other processors provide choice and options for HPE’s strategy of being edge-centric, cloud-enabled, and data-driven.

Across that strategy, the commonality is data movement. And as such, the ARM model of allowing different companies to come in and innovate will produce processors that meet the needs of all these various sectors. We see that as highly valuable, and it supports our strategy.

Gardner: Mark, Arm Holdings controls the intellectual property, but there is a budding ecosystem both on the processor design as well as the software that can take advantage of it. Tell us about that ecosystem and why the Catalyst UK program is facilitating a more vibrant ecosystem.

The design-to-build ecosystem 

Parsons: The whole Arm story is very, very interesting. This company grew out of home computing about 30 to 40 years ago. The interesting thing is the way that they are an intellectual property company, at the end of the day. Arm Holdings itself doesn’t make processors. It designs processors and sells those designs to other people to make.

So, we’ve had this wonderful ecosystem of different companies making their own ARM processors or making them for other people. With the wide variety of different ARM processors in mobile phones, for example, there is no surprise that it’s the most common processor in the world today.

Now, people think that x86 processors rule the roost, but actually they don’t. The most common processor you will find is an ARM processor. As a result, there is a whole load of development tools that come both from ARM and also within the developer community that support people who want to develop code for the processors.

In the context of Catalyst UK, in talking to Arm, it’s quite clear that many of their tools are designed to meet their predominant market today, the mobile phone market. As they move into the higher-end computing space, it’s clear we may find things in the programs where the compiler isn’t optimized. Certain libraries may be difficult to compile, and things like that. And this is what excites me about the Catalyst program. We are getting to play with leading-edge technology and show that it is easy to use all sorts of interesting stuff with it.
Gardner: And while the ARM CPU is being purpose-focused for high-intensity workloads, we are seeing more applications being brought in, too. How does the porting process of moving apps from x86 to ARM work? How easy or difficult is it? How does the Catalyst UK program help?

Parsons: All three of the universities are porting various applications that they commonly use. At the EPCC, we run the national HPC service for the UK called ARCHER. As part of that we have run national [supercomputing] services since 1994, but as part of the ARCHER service, we decided for the first time to offer many of the common scientific applications as modules.

You can just ask for the module that you want to use. Before that, we saw users compiling their own copies of code, so we had multiple copies -- some of them identically compiled, others not compiled particularly well.

So, we have a model of offering about 40 codes on ARCHER as precompiled modules that we try to keep up to date, patch, and so on. We have 100 staff at EPCC who look after code. I have asked those staff to get an account on the Catalyst system, take that code across, and spend an afternoon trying to compile it. We already know that some codes just compile and run. Others may have problems, and it’s those that we’re passing on to Arm and HPE, saying, “Look, this is what we found out.”

The important thing is that we have found very few programs [with such problems]. Most code simply recompiles very, very smoothly.

Gardner: How does HPE support that effort, both in terms of its corporate support but also with the IT systems themselves?

ARM’s reach 

Goh: We are very keen about the work that Mark and the Catalyst program are doing. As Mark mentioned, the ARM processor came more from the edge-centric side of our strategy. In mobile phones, for example.

Now we are very keen to see how far these ARM systems can go. We have already shipped a large ARM processor-based supercomputer, called Astra, to the US Department of Energy’s Sandia National Laboratories. These efforts are ongoing in the area of HPC applications. We are very keen to see how this processor and its compilers work with various HPC applications in the UK and the US.


Gardner: And as we look to the larger addressable market, with the edge and AI being such high-growth markets, it strikes me that supercomputing -- something that has been around for decades -- is not fully mature. We are entering a whole new era of innovation.

Mark, do you see supercomputing as in its heyday, sunset years, or perhaps even in its infancy?

Parsons: I absolutely think that supercomputing is still in its infancy. There are so many bits in the world around us that we have never even considered trying to model, simulate, or understand on supercomputers. It’s strange because quite often people think that supercomputing has solved everything -- and it really hasn’t. I will give you a direct example of that.

A few years ago, a European project I was running won an award for the highest-accuracy simulation of water flowing through a piece of porous rock. It took over a day on the whole of the national service [to run the simulation]. We won a prize for this, and we only simulated 1 cubic centimeter of rock.

People think supercomputers can solve massive problems -- and they can, but the universe and the world are complex. We’ve only scratched the surface of modeling and simulation.

This is an interesting moment in time for AI and supercomputing. For a lot of data analytics, we have at our fingertips for the very first time very, very large amounts of data. It’s very rich data from multiple sources, and supercomputers are getting much better at handling these large data sources.

The reason the whole AI story is really hot now, and lots of people are involved, is not actually about the AI itself. It’s about our ability to move data around and use our data to train AI algorithms. The direct link to supercomputing is that, in our world, we are good at moving large amounts of data around. The synergy now between supercomputing and AI is not to do with supercomputing or AI -- it is to do with the data.

Gardner: Eng Lim, how do you see the evolution of supercomputing? Do you agree with Mark that we are only scratching the surface?

Top-down and bottom-up data crunching 

Goh: Yes, absolutely, and it’s an early scratch. It’s still very early. I will give you an example.

Solving games is important for developing methods and strategies for cyber defense. Take the most recent game in which machines are beating the best human players: the game of Go is much more complex than chess in terms of the number of potential combinations. The number of combinations is actually 10^171 if you comprehensively went through all the different possibilities of that game.
You know how big that number is? Well, if we took all the computers in the world together -- all the supercomputers, all of the computers in the data centers of the Internet companies -- put them all together and ran them for 100 years, all you could do is 10^30, which is still very far from 10^171. So, you can see just from this one game example alone that we are very early in that scratch.
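Taking Dr. Goh’s figures at face value (they are his estimates, not an independent count), the shortfall can be written as a single ratio:

```latex
\[
\frac{10^{171}\ \text{(possible Go combinations)}}
     {10^{30}\ \text{(combinations searchable by all the world's computers in 100 years)}}
  = 10^{141}
\]
```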

A second group of examples relates to new ways that supercomputers are being used. From ML to AI, there is now a new class of applications changing how supercomputers are used. Traditionally, most supercomputers have been used for simulation. That’s what I call top-down modeling. You create your model out of physics equations or formulas and then you run that model on a supercomputer to try and make predictions.

The new way of making predictions uses the ML approach. You do not begin with physics. You begin with a blank model and you keep feeding it data, the outcomes of history and past examples. You keep feeding data into the model, which is written in such a way that for each new piece of data that is fed in, a new prediction is made. If the accuracy is not high, you keep tuning the model. Over time -- with thousands, hundreds of thousands, and even millions of examples -- the model gets tuned to make good predictions. I call this the bottom-up approach.
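As a toy illustration of that bottom-up loop -- predict, measure the error, tune, repeat -- here is a minimal sketch in Python. It is not any specific HPE or EPCC code; the data and the model are invented for the example.

```python
# A toy illustration (not any specific HPE or EPCC code) of the bottom-up approach:
# start with a "blank" model, feed it example data, compare its predictions with
# the known outcomes, and keep tuning until the predictions become accurate.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=1000)                 # historical examples (inputs)
y = 4.2 * x + 1.5 + rng.normal(0, 0.5, x.size)    # observed outcomes

w, b = 0.0, 0.0        # the blank model: it starts knowing no physics at all
lr = 0.01              # how aggressively to tune after each pass over the data
for step in range(2000):
    pred = w * x + b                    # make predictions on the examples
    error = pred - y                    # how wrong were they?
    w -= lr * 2 * np.mean(error * x)    # nudge the model to reduce the error
    b -= lr * 2 * np.mean(error)

print(f"learned w = {w:.2f}, b = {b:.2f}")   # tuned close to the true 4.2 and 1.5
```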

Now we have people applying both approaches. Supercomputers used traditionally in a top-down simulation are also employing the bottom-up ML approach. They can work in tandem to make better and faster predictions.

Supercomputers are therefore now being employed for a new class of applications in combination with the traditional or gold-standard simulations.

Gardner: Mark, are we also seeing a democratization of supercomputing? Can we extend these applications and uses? Is what’s happening now decreasing the cost, increasing the value, and therefore opening these systems up to more types of uses and more problem-solving?

Cloud clears the way for easy access 

Parsons: Cloud computing is having a big impact on everything that we do, to be quite honest. We have all of our photos in the cloud, our music in the cloud, et cetera. That’s why EPCC last year got rid of its file server. All our data running the actual organization is in the cloud.

The cloud model is great inasmuch as it gives people who don’t want to operate and run a large system 100 percent of the time access to these technologies in ways they have never had before.

The other side of that is that there are fantastic software frameworks now that didn’t exist even five years ago for doing AI. There is so much open source for doing simulations.

It doesn’t mean that an organization like EPCC, which is a supercomputing center, will stop hosting large systems. We are still great aggregators of demand. We will still have the largest computers. But it does mean that, for the first time through the various cloud providers, any company, any small research group and university, has access to the right level of resources that they need in a cost-effective way.

Gardner: Eng Lim, do you have anything more to offer on the value and economics of HPC? Does paying based on use rather than a capital expenditure change the game?

More choices, more innovation 

Goh: Oh, great question. There are some applications and institutions with processes that work very well with a cloud, and there are some applications and processes that don’t. That’s part of the reason why you embrace both. And, in fact, we at HPE embrace the cloud and we also build on-premises solutions for our customers, like the one at the Catalyst UK program.

We also have something that is a mix of the two. We call that HPE GreenLake, which is the ability for us to acquire the system the customer needs, while the customer pays per use. This is a software-defined experience built on consumption-based economics.

These are some of the options we put together to allow choice for our customers, because there is a variation in needs and processes. Some are more CAPEX-oriented in the way they acquire resources, and others are more OPEX-oriented.

Gardner: Do you have examples of where some of the fruits of Catalyst, and some of the benefits of the ecosystem approach, have led to applications, use cases, and demonstrated innovation?

Parsons: What we are trying to do is show how easy ARM is to use. We have taken some really powerful, important codes that run every day on our big national services and simply moved them across to ARM. Users don’t really know, or need to know, that they are running on a different system. It’s that boring.

We have picked up one or two problems with code that probably exist in the x86 version too, but running on a new processor exposes them more, and we are fixing that. But in general -- and this is absolutely the wrong message for an interview -- we are proceeding in a very boring way. The reason I say that is, it’s really important that this is boring, because if we don’t show this is easy, people won’t put ARM on their next procurement list. They will think that it’s too difficult, that it’s going to be too much trouble to move codes across.

One of the aims of Catalyst, and I am joking, is definitely to be boring. And I think at this point in time we are succeeding.

More interestingly, though, another aim of Catalyst is about storage. The ARM systems around the world today still tend to do storage on x86. The storage will be running on Lustre or BeeGFS servers, all sitting on x86 boxes.

We have made a decision to do everything on ARM, if we can. At the moment, we are looking at different storage software on ARM servers. We are looking at Ceph, at Lustre, at BeeGFS, because unless you have the ecosystem running on ARM as well, people won’t think it’s as pervasive a solution as x86, or Power, or whatever.

The benefit of being boring 

Goh: Yes, in this case boring is good. Seamless movement of code across different platforms is the key. It’s very important for an ecosystem to be successful. It needs to be easy to develop code for, and it needs to be easy to port to. And those are just as important with our commercial HPC systems for the broader HPC customer base.

In addition to customers writing their own code and compiling it well and easily to ARM, we also want to make it easy for the independent software vendors (ISVs) to join and strengthen this ecosystem.

Parsons: That is one of the key things we intend to do over the next six months. We have good relationships, as does HPE, with many of the big and small ISVs. We want to get them on a new kind of system, let them compile their code, and get some help to do it. It’s really important that we end up with ISV code on ARM, all running successfully.

Gardner: If we are in a necessary, boring period, what will happen when we get to a more exciting stage? Where do you see this potentially going? What are some of the use cases using supercomputers to impact business, commerce, public services, and public health?

Goh: It’s not necessarily boring, but it is brilliantly done. There will be richer choices coming to supercomputing. That’s the key. Supercomputing and HPC need to reach a broader customer base. That’s the goal of our HPC team within HPE.

Over the years, we have increased our reach to the commercial side, such as the financial industry and retailers. Now there is a new opportunity coming with the bottom-up approach of using HPC. Instead of building models out of physics, we train the models with example data. This is a new way of using HPC. We will reach out to even more users.
So, the success of our supercomputing industry lies in getting more users, with high diversity, to come on board.

Gardner: Mark, what are some of the exciting outcomes you anticipate?

Parsons: As we get more experience with ARM it will become a serious player. If you look around the world today, in Japan, for example, they have a big new ARM-based supercomputer that’s going to be similar to the ThunderX2 when it’s launched.

I predict in the next three or four years we are going to see some very significant supercomputers up at the X2 level, built from ARM processors. Based on what I hear, the next generations of these processors will produce a really exciting time.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring a program to expand the variety of CPUs that support supercomputers and AI workloads. And we have specifically learned how the Catalyst UK program is seeding the advancement of the ARM CPU architecture for HPC, as well as helping to establish a vibrant software ecosystem.

Please join me in thanking our guests, Dr. Eng Lim Goh, Vice President and Chief Technology Officer for HPC and AI at HPE. Thank you so much, Eng Lim.

Goh: Thank you, Dana.

Gardner: We have also been joined by Professor Mark Parsons, Director of EPCC at the University of Edinburgh. Thank you, sir.

Parsons: Thank you, Dana. It’s been a pleasure.


Gardner: And a big thank you as well to our audience for joining this BriefingsDirect Voice of the Customer HPC trends and innovations discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions.

Thanks again for listening. Pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how the Catalyst program in the UK is seeding the advancement of the ARM CPU architecture for HPC as well as a vibrant software ecosystem. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.


Thursday, August 29, 2019

HPE and PTC Join Forces to Deliver Best Outcomes from the OT-IT Productivity Revolution

https://www.ptc.com/en/

A discussion on how the latest data analysis platforms bring operational technology benefits to the edge for real-time insights in manufacturing.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories.

Our next edge computing trends discussion explores the rapidly evolving confluence of operational technology (OT) and Internet of Things (IoT). New advances in data processing, real-time analytics, and platform efficiency have prompted innovative and impactful OT approaches at the edge.

We’ll now hear how such data analysis platforms bring manufacturers data-center caliber benefits for real-time insights where they are needed most.

To hear more about the latest capabilities in gaining unprecedented operational insights, please join me in welcoming Riaan Lourens, Vice President of Technology in the Office of the Chief Technology Officer at PTC. Welcome, Riaan.

Riaan Lourens: Hey, Dana. Thanks for having me.


Gardner: We are also here with Tripp Partain, Chief Technology Officer of IoT Solutions at Hewlett Packard Enterprise (HPE). Welcome, Tripp.

Tripp Partain: Hey, Dana. Thanks a lot. I appreciate the opportunity.

Gardner: Riaan, what kinds of new insights are manufacturers seeking into how their operations perform?

Lourens: We are in the midst of a Fourth Industrial Revolution, which is really an extension of the third, where we used electronics and IT to automate manufacturing. Now, the fourth is the digital revolution, a fusion of technology and capabilities that blur the lines between the physical and digital worlds.

With the influx of these technologies, both hardware and software, our customers -- and manufacturing as a whole, as well as the discrete process industries -- are finding opportunities to either save or make more money. The trend is focused on looking at technology as a business strategy, as opposed to just pure IT operations.

Digital revolution in business

There are a number of examples of how our customers have leveraged technology to drive their business strategy.

Gardner: Are we entering a golden age by combining what OT and IT have matured into over the past couple of decades? If we call this Industrial Revolution 4.0 (I4.0), there must be some major opportunities right now.

Lourens: There are a lot of initiatives out there, whether it’s I4.0, Made in China 2025, or the Smart Factory Initiative in the US. By democratizing the process of providing value -- be it with cloud capabilities, edge computing, or anything in between – we are inherently providing options for manufacturers to solve problems that they were not able to solve before.

If you look at it from a broader technology standpoint, in the past we had very large, monolith-like deployments of technology. In the ISA-95 model, at Level 3 or Level 4 -- your manufacturing execution system (MES) deployments or large-scale enterprise resource planning (ERP) -- those were very large deployments that took many years. And the return on investment (ROI) the manufacturers saw would potentially pay off over many years.

The opportunity that exists for manufacturers today, however, allows them to solve problems that they face almost immediately. There is quick time-to-value by leveraging technology that is consumable. Then they can lift and drop and so scale [those new solutions] across the enterprise. That does make this an era the likes of which nobody has seen before.

Gardner: Tripp, do you agree that we are in a golden age here? It seems to me that we are able to both accommodate a great deal of diversity and heterogeneity of the edge, across all sorts of endpoints and sensors, but also bring that into a common-platform approach. We get the best of efficiency and automation.

Partain: There is a combination of two things. One, due to the smartphone evolution over the last 10 years, the types of sensors and chips that have been created to drive that at the consumer level are now at such reasonable price points that you can apply them to industrial areas.

To Riaan’s point, the price points of these technologies have gotten really low -- but the capabilities are really high. A lot of existing equipment in a manufacturing environment that might have 20 or 30 years of life left can be retrofitted with these sensors and capabilities to give insights and compute capabilities at the edge. The capability to interact in real-time with those sensors provides platforms that didn’t exist even five years ago. That combines with the right software capabilities so that manufacturers and industrials get insights that they never had before into their processes.

Gardner: How is the partnership between PTC and HPE taking advantage of this new opportunity? It seems you are coming from different vantage points but reinforcing one another. How is the whole greater than the sum of the parts when it comes to the partnership?

Partnership for progress, flexibility

Lourens: For some context, PTC is a software vendor. Over the last 30 years we targeted our efforts at helping manufacturers either engineer software with computer-aided design (CAD) or product lifecycle management (PLM). We have evolved to our growth areas today of IoT solution platforms and augmented reality (AR) capabilities.

The challenge that manufacturers face today is not just a software problem. It requires a robust ecosystem of hardware vendors, software vendors, and solutions partners, such as regional or global systems integrators.

The reason we work very closely with HPE as an alliance partner is because HPE is a leader in the space. HPE has a strong offering of compute capabilities -- from very small gateway-level compute all the way through to hybrid technologies and converged infrastructure technologies.

Ultimately our customers need flexible options to deploy software at the right place, at the right time, and throughout any part of their network. We find that HPE is a strong partner on this front.

Gardner: Tripp, not only do we have lower cost and higher capability at the edge, we also have a continuum of hybrid IT. We can use on-premises micro-datacenters, converged infrastructure, private cloud, and public cloud options to choose from. Why is that also accelerating the benefits for manufacturers? Why is a continuum of hybrid IT – edge to cloud -- an important factor?

Partain: That flexibility is required if you look at the industrial environments where these problems are occurring for our joint customers. If you look at any given product line where manufacturing takes place -- no two regions are the same and no two factories are the same. Even within a factory, a lot of times, no two production lines are the same.


There is a wide diversity in how manufacturing takes place. You need to be able to meet those challenges with the customers and give them deployment options that fit each of those environments.

It’s interesting. Factories don’t do enterprise IT-like deployments, where every factory takes on new capabilities at the same time. It’s much more balanced in the way that products are made. You have to be able to have that same level of flexibility in how you deploy the solutions, to allow it to be absorbed the same way the factories do all of their other types of processes.

We have seen the need for different levels of IT to match up to the way they are implemented in different types of factories. That flexibility meets them where they are and allows them to get to the value much quicker -- and not wait for some huge enterprise rollout, like what Riaan described earlier with ERP systems that take multiple years.

By leveraging new, hybrid, converged, and flexible environments, we allow a single plant to deploy multiple solutions and get results much quicker. We can also still work that into an enterprise-wide deployment -- and get a better balance between time and return.

Gardner: Riaan, you earlier mentioned democratization. That jumped out at me. How are we able to take these advances in systems, software, and access and availability of deployments and make that consumable by people who are not data scientists? How are we able to take the results of what the technology does and make it actionable, even using things like AR?

Lourens: As Tripp described, every manufacturing facility is different. There are typically different line configurations, different programmable logic controller (PLC) configurations, different heterogeneous systems -- be it legacy IT systems or homegrown systems -- so the ability to leverage what is there is inherently important.

From a strategic perspective, PTC has two core platforms; one being our ThingWorx Platform that allows you to source data and information from existing systems that are there, as well as from assets directly via the PLC or by embedding software into machines.

We also have the ability to simplify and contextualize all of that information and make sense of it. We can then drive analytical insights out of the data that we now have access to. Ultimately we can orchestrate with end users in their different personas – be that the maintenance operator, supervisor, or plant manager -- enabling and engaging with these different users through AR.

Four capabilities for value 

There are four capabilities that allow you to derive value. Ultimately our strategy is to bring that up a level and to provide capabilities and solutions to our end customers across four different areas.

One, we look at it from an enterprise operational intelligence perspective; the second is intelligent asset optimization; the third, digital workforce productivity; and the fourth, scalable production management.

So across those four solution areas we can apply our technology together with that of our partners. We allow our customers to find use cases within those four solution areas that provide them a return on investment.

One example of that would be leveraging augmented work instructions. So instead of an operator going through a maintenance procedure by opening a folder of hundreds of pages of instructions, they can leverage new technology such as AR to guide the operator in process, and in situ, in terms of how to do something.

There are many use cases across those four solution areas that leverage the core capabilities across the IoT platform, ThingWorx, as well as the AR platform, Vuforia.


Gardner: Tripp, it sounds like we are taking the best of what people can do and the best of what systems and analytics can do. We also move from batch processing to real time. We have location-based services so we can tell where things and people are in new ways. And then we empower people in ways that we hadn’t done before, such as AR.

Are we at the point where we’re combining the best of cognitive human capabilities and machine capabilities?

Partain: I don’t know if we have gotten to the best yet, but probably the best of what we’ve had so far. As we continue to evolve these technologies and find new ways to look at problems with different technology -- it will continue to evolve.

We are getting to the new sweet spot, if you will, of putting the two together and being able to drive advancements forward. One of the things that’s critical has to do with where our current workforce is.

A number of manufacturers I talk to -- and I’ve heard similar from PTC’s customers and our joint customers -- say they are at a tipping point in terms of the current talent pool, with many of those currently employed getting close to retirement age.

The next generation that’s coming in is not going to have the same longevity and the same skill sets. Having these newer technologies and bringing these pieces together is not only a new matchup based on the new technology -- it’s also better suited to the type of workers carrying these activities forward. Manufacturing is not going away, but it’s going to be a very different generation of factory workers and types of technologies.

The solutions are now available to really enhance those jobs. We are starting to see all of the pieces come together. That’s where both IoT solutions -- but even especially AR solutions like PTC Vuforia -- really come into play.

Gardner: Riaan, in a large manufacturing environment, even small iterative improvements can make a big impact on the economics, the bottom line. What sort of future categorical improvements in value are we looking at? To what degree do we have an opportunity to make manufacturing more efficient, more productive, and more economically powerful?

Tech bridges skills gap, talent shortage

Lourens: If you look at it from the angle that Tripp just referred to, there are a number of increasing pressures across the board in the industrial markets, starting with the workers’ skills gap. Products are also becoming more complex. Workspaces are becoming more complex. There are increasing customer demands and expectations. Markets are just becoming more fiercely competitive.

But if you leverage capabilities such as AR -- which provides augmented 3-D work instructions, expert guidance, and remote assistance, training, and demonstrations -- that’s one area. If you combine that, to Tripp’s point, with the new IoT capabilities, then I think you can look at improvements such as reducing waste in processes and materials.

We have seen customers reducing unplanned downtime by 30 percent, which is a very common use case that we see manufacturers target. We have also seen energy consumption reduced by 3 to 7 percent at a very large ship manufacturer, a customer of PTC’s. And we’re generally looking at improving productivity by 20 to 30 percent.

By leveraging this technology in a meaningful way to get iterative improvements, you can then scale it across the enterprise very rapidly, and multiple use cases can become part of the solution. In these areas of opportunity, very rapidly you get that ROI.

Gardner: Do we have concrete examples to help illustrate how those general productivity benefits come about?

Joint solutions reduce manufacturing pains 

Lourens: A joint customer of HPE and PTC focuses on manufacturing and distributing reusable and recyclable food packaging containers. The company, CuBE Packaging Solutions, targeted predictive maintenance in manufacturing. Their goal is to have the equipment notify them when attention is needed. That allows them to service what they need when they need to and focus on reducing unplanned downtime.

In this particular example, there are a number of technologies that play across both of our two companies. The HPE Nimble Storage capability and HPE Synergy technology were leveraged, as well as a whole variety of HPE Aruba switches and wireless access points, along with PTC’s ThingWorx solution platform.


The CuBE Packaging solution ultimately was pulled together through an ecosystem partner, Callisto Integration, which we both worked with very closely. In this use case, we not only targeted the plastic molding assets that they were monitoring, but the peripheral equipment, such as cooling and air systems, that may impact their operations. The goal is to avoid anything that could pause their injection molding equipment and plants.

Gardner: Tripp, any examples of use-cases that come to your mind that illustrate the impact?

Partain: Another joint customer that comes to mind is Texmark Chemicals in Galena Park, Texas. They are using a number of HPE solutions, including HPE Edgeline, our micro-datacenter. They are also using PTC ThingWorx and a number of other solutions.

They have very large pumps critical to the operation as they move chemicals and fluids in various stages around their plant in the refining process. Being able to monitor those in real time, predict potential failures before they happen, and use a combination of live data and algorithms to predict wear and tear, allows them to determine the optimal time to make replacements and minimize downtime.
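A generic sketch of what that kind of real-time monitoring can look like is below. It is not Texmark’s or PTC’s actual pipeline; the sensor, window size, and threshold are illustrative assumptions. The idea is simply to baseline recent readings and flag anything that drifts well outside the norm so maintenance can be scheduled ahead of a failure.

```python
# A generic sketch of edge condition monitoring -- not Texmark's or PTC's actual
# pipeline. All names, numbers, and thresholds are illustrative assumptions.
# Idea: baseline a pump's recent vibration readings and flag any sample that
# falls well outside that norm, so maintenance can be planned before a failure.
import random
from collections import deque
from statistics import mean, stdev

WINDOW = 60          # samples used as the rolling baseline (assumed)
THRESHOLD_SIGMA = 3  # deviation that counts as anomalous (assumed)

def monitor(readings):
    """Yield (value, is_anomaly) for a stream of sensor readings."""
    history = deque(maxlen=WINDOW)
    for value in readings:
        if len(history) == WINDOW:
            mu, sigma = mean(history), stdev(history)
            is_anomaly = sigma > 0 and abs(value - mu) > THRESHOLD_SIGMA * sigma
        else:
            is_anomaly = False   # not enough history yet to judge
        yield value, is_anomaly
        history.append(value)

# Simulated pump: steady vibration, then a step change at sample 300.
stream = (5.0 + 0.1 * random.random() + (1.0 if t >= 300 else 0.0) for t in range(400))
for t, (value, alert) in enumerate(monitor(stream)):
    if alert:
        print(f"sample {t}: vibration {value:.2f} is outside the recent norm -- schedule an inspection")
        break
```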

Such use cases are one of the advantages when customers come and visit our IoT Lab in Houston. From an HPE standpoint, not only do they see our joint solutions in the lab, but we can actually take them out to the Texmark location, and Texmark will host them and allow them to see these technologies working in real time at their facility.

Similar to what Riaan mentioned, we started at Texmark with condition monitoring, and now the solutions have moved into additional use cases -- mechanical integrity, video as a sensor, and employee-safety-related use cases.

We started with condition monitoring, proved that out, got the technology working, then took that framework -- including best-in-class hardware and software -- and continued to build and evolve on top of that to solve expanded problems. Texmark has been a great joint customer for us.

Gardner: Riaan, when organizations hear about these technologies and the opportunity for some very significant productivity benefits, when they understand that more-and-more of their organization is going to be data-driven and real-time analysis benefits could be delivered to people in their actionable context, perhaps using such things as AR, what should they be doing now to get ready?

Start small

Lourens: Over the last eight years of working with ThingWorx, I have noticed an initial tendency to look at the technology first, rather than looking at specific use cases that provide real business value and working backward from that business value.

My recommendation is to target use cases that provide quick time-to-value. Apply the technology in a way that allows you to start small, and then iterate from there, versus trying to prove your ROI based on the core technology capabilities.

Ultimately understand the business challenges and how you can grow your top line or your bottom line. Then work backward from there, starting small by looking at a plant or operations within a plant, and then apply the technology across more people. That helps create a smart connected people strategy. Apply technology in terms of the process and then relative to actual machines within that process in a way that’s relevant to use cases -- that’s going to drive some ROI.

Gardner: Tripp, what should the IT organization be newly thinking? Now, they are tasked with maintaining systems across a continuum of cloud-to-edge. They are seeing micro-datacenters at the edge; they’re doing combinations of data-driven analytics and software that leads to new interfaces such as AR.

How should the IT organization prepare itself to take on what goes into any nook and cranny in almost any manufacturing environment?

IT has to extend its reach 

Partain: It’s about doing all of that IT in places where typically IT has had little or no involvement. In many industrial and manufacturing organizations, as we go in and start having conversations, IT has usually stopped at the datacenter back-end. Now there’s lots of technology on the manufacturing side, too, but it has not typically involved the IT department.

One of the first steps is to get educated on the new edge technologies and how they fit into the overall architecture. They need to have the existing support frameworks and models in place that are instantly usable, but also work with the business side and frame-up the problems they are trying to solve.

As Riaan mentioned, being able to say, “Hey, here are the types of technologies we in IT can apply to this that you [OT] guys haven’t necessarily looked at before. Here’s the standardization we can help bring so we don’t end up with something completely different in every factory, which runs up your overall cost to support and run.”

It’s a new world. And IT is going to have to spend much more time with the part of the business they have probably spent the least amount of time with. IT needs to get involved as early as possible in understanding what the business challenges are and getting educated on these newer IoT, AR, virtual reality (VR), and edge-based solutions. These are becoming the extension points of traditional technology and are the new ways of solving problems.

Gardner: I’m afraid we’ll have to leave it there. We have been discussing the rapidly evolving confluence of OT and the IoT. And we have learned how data processing, real-time analytics, and platform efficiency are all prompting new OT approaches at the very edge of manufacturing.

Please join me in thanking our guests, Riaan Lourens, Vice President of Technology in the office of the CTO at PTC. Thank you so much, Riaan.

Lourens: Thanks for having me, Dana. It’s been a pleasure.

Gardner: And we have also been joined by Tripp Partain, Chief Technology Officer of IoT Solutions at HPE. Thank you so much, Tripp.

Partain: Yes, I enjoyed it. Thank you very much.


Gardner: And a big thank you to our audience as well for joining this BriefingsDirect Voice of the Customer digital transformation success story discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening, please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

A discussion on how the latest data analysis platforms bring operational technology benefits to the edge for real-time insights in manufacturing. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.
