Tuesday, October 10, 2017

Inside Story on HPC’s AI Role in Bridges Strategic Reasoning Research Project at CMU

Transcript of a discussion on how Carnegie Mellon University researchers are advancing strategic reasoning and machine learning capabilities using the latest in high performance computing.  

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.

Our next high performance computing (HPC) success interview examines how strategic reasoning is becoming more common and capable -- even using imperfect information. We’ll now learn how Carnegie Mellon University and a team of researchers there are producing amazing results with strategic reasoning thanks in part to powerful new memory-intensive systems architectures.

To learn more about strategic reasoning advances, please join me in welcoming Tuomas Sandholm, Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University in Pittsburgh.

Tuomas Sandholm: Thank you very much.

Gardner: Tell us about strategic reasoning. Why is imperfect information often the reality that these systems face?

Sandholm: In strategic reasoning we take the word “strategic” very seriously. It means game theoretic, so in multi-agent settings where you have more than one player, you can't just optimize as if you were the only actor -- because the other players are going to act strategically. What you do affects how they should play, and what they do affects how you should play.

That's what game theory is about. In artificial intelligence (AI), there has been a long history of strategic reasoning. Most AI reasoning -- not all of it, but most of it until about 12 years ago -- was really about perfect information games like Othello, Checkers, Chess and Go.

And there has been tremendous progress. But these complete information, or perfect information, games don't really model real business situations very well. Most business situations are of imperfect information.

Know what you don’t know

So you don't know the other guy's resources, their goals and so on. You then need totally different algorithms for solving these games, or game-theoretic solutions that define what rational play is, or opponent exploitation techniques where you try to find out the opponent's mistakes and learn to exploit them.

So totally different techniques are needed, and this has way more applications in reality than perfect information games have.

Gardner: In business, you don't always know the rules. All the variables are dynamic, and we don't know the rationale or the reasoning behind competitors’ actions. People sometimes are playing offense, defense, or a little of both.

Before we dig into how this is being applied in business circumstances, explain your proof of concept involving poker. Is it Five-Card Draw?

Sandholm: No, we’re working on a much harder poker game called Heads-Up No-Limit Texas Hold'em as the benchmark. This has become the leading benchmark in the AI community for testing these application-independent algorithms for reasoning under imperfect information.

The algorithms have really nothing to do with poker, but we needed a common benchmark, much like the IC chip makers have their benchmarks. We compare progress year-to-year and compare progress across the different research groups around the world. Heads-Up No-Limit Texas Hold'em turned out to be a great benchmark because it is a huge game of imperfect information.

It has 10 to the 161 different situations that a player can face. That is one followed by 161 zeros. And if you think about that, it’s not only more than the number of atoms in the universe, but even if, for every atom in the universe, you have a whole other universe and count all those atoms in those universes -- it will still be more than that.

Gardner: This is as close to infinity as you can probably get, right?

Sandholm: Ha-ha, basically yes.
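That comparison checks out with simple integer arithmetic. As a hedged sanity check (the ~10^80 atoms-in-the-observable-universe figure is a common rough estimate, not a number from the interview):

```python
# Sanity-checking the game-size comparison above. The ~10**80 figure for
# atoms in the observable universe is a common rough estimate (an assumption
# here, not a number from the interview).
game_situations = 10 ** 161
atoms_in_universe = 10 ** 80

# One whole universe of atoms for every atom in our universe:
atoms_in_universes_per_atom = atoms_in_universe * atoms_in_universe  # 10**160

print(game_situations > atoms_in_universe)             # True
print(game_situations > atoms_in_universes_per_atom)   # True
```

Even under that generous "universe of universes" count, 10^161 remains ten times larger.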

Gardner: Okay, so you have this massively complex potential data set. How do you winnow that down, and how rapidly do the algorithmic process and platform learn? I imagine that reacting and creating patterns that enable better learning is an important part of it. So tell me about the learning part.

Three part harmony

Sandholm: The learning part always interests people, but it's not really the only part here -- or not even the main part. We basically have three main modules in our architecture. One computes approximations of Nash equilibrium strategies using only the rules of the game as input. In other words, game-theoretic strategies.

That doesn’t take any data as input, just the rules of the game. The second part is during play, refining that strategy. We call that subgame solving.

Then the third part is the learning part, or the self-improvement part. And there, traditionally people have done what’s called opponent modeling and opponent exploitation, where you try to model the opponent or opponents and adjust your strategies so as to take advantage of their weaknesses.

However, when we go against these absolute best human strategies, the best human players in the world, I felt that they don't have that many holes to exploit and they are experts at counter-exploiting. When you start to exploit opponents, you typically open yourself up for exploitation, and we didn't want to take that risk. In the learning part, the third part, we took a totally different approach than traditionally is taken in AI.

We said, “Okay, we are going to play according to our approximate game-theoretic strategies. However, if we see that the opponents have been able to find some mistakes in our strategy, then we will actually fill those mistakes and compute an even closer approximation to game-theoretic play in those spots.”

One way to think about that is that we are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.
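The interview doesn't name the specific algorithms, but approximate Nash equilibria for zero-sum games are commonly computed with regret-minimization methods. As a toy-scale illustration of that idea only -- rock-paper-scissors standing in for the vastly larger poker game, and plain regret matching standing in for the far more sophisticated techniques this research actually uses:

```python
# Illustrative sketch only -- NOT Sandholm's actual algorithm. Regret
# matching is a standard building block for approximating Nash equilibria
# in zero-sum games via self-play.
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1],   # PAYOFF[a][b]: row player's payoff for a vs. b
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=100_000):
    regrets = [1.0, 0.0, 0.0]        # start deliberately biased toward rock
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
        # Expected value of each pure action against the current strategy,
        # and of the current strategy against itself (self-play).
        ev = [sum(PAYOFF[a][b] * strat[b] for b in range(ACTIONS))
              for a in range(ACTIONS)]
        ev_strat = sum(strat[a] * ev[a] for a in range(ACTIONS))
        for a in range(ACTIONS):
            regrets[a] += ev[a] - ev_strat  # regret for not having played a
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # time-averaged strategy

avg = train()
# avg drifts toward the game's Nash equilibrium of (1/3, 1/3, 1/3)
```

The time-averaged strategy approaches the equilibrium even though play cycles along the way; the real poker work layers abstraction, subgame solving, and self-improvement on top of this kind of core.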


HPC from HPE
Overcomes Barriers 
To Supercomputing and Deep Learning

Gardner: Is this being used in any business settings? It certainly seems like there's potential there for a lot of use cases. Business competition and circumstances seem to have an affinity for what you're describing in the poker use case. Where are you taking this next?

Sandholm: So far this, to my knowledge, has not been used in business. One of the reasons is that we only reached the superhuman level in January 2017. And, of course, if you think about your strategic reasoning problems, many of them are very important, and you don't want to delegate them to AI just to save time or something like that.

Now that the AI is better at strategic reasoning than humans, that completely shifts things. I believe that in the next few years it will be a necessity to have what I call strategic augmentation. So you can't have just people doing business strategy, negotiation, strategic pricing, and product portfolio optimization.

You are going to have to have better strategic reasoning to support you, and so it becomes a kind of competition. So if your competitors have it, or even if they don't, you better have it because it’s a competitive advantage.

Gardner: So a lot of what we're seeing in AI and machine learning is to find the things that the machines do better and allow the humans to do what they can do even better than machines. Now that you have this new capability with strategic reasoning, where does that demarcation come in a business setting? Where do you think that humans will be still paramount, and where will the machines be a very powerful tool for them?

Human modeling, AI solving

Sandholm: At least in the foreseeable future, I see the demarcation as being modeling versus solving. I think that humans will continue to play a very important role in modeling their strategic situations, just to know everything that is pertinent and deciding what’s not pertinent in the model, and so forth. Then the AI is best at solving the model.

That's the demarcation, at least for the foreseeable future. In the very long run, maybe the AI itself actually can start to do the modeling part as well as it builds a better understanding of the world -- but that is far in the future.

Gardner: Looking back at what is enabling this, clearly the software, the algorithms, and finding the right benchmark -- in this case the poker game -- are essential. But with that large a potential data set, a probability set like you mentioned, the underlying computer systems must need to keep up. Where are you in terms of the threshold? Is it price that holds you back? Is it a performance limit, the amount of time required? What are the limits, the governors to continuing?

Sandholm: It's all of the above, and we are very fortunate that we had access to Bridges, the supercomputer at the Pittsburgh Supercomputing Center (PSC); otherwise this wouldn’t have been possible at all. We spent more than a year and needed about 25 million core hours of computing and 2.6 petabytes of data storage.

This amount is necessary to conduct serious absolute superhuman research in this field -- but it is something very hard for a professor to obtain. We were very fortunate to have that computing at our disposal.
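For a rough sense of scale, simple arithmetic on the figures above (the 10,000-core parallel figure is purely illustrative, not from the interview):

```python
core_hours = 25_000_000       # ~25 million core hours of computing
hours_per_year = 24 * 365     # 8,760 hours in a non-leap year

# A single CPU core running nonstop would need roughly this many years:
print(round(core_hours / hours_per_year))  # 2854

# Spread across, say, 10,000 cores in parallel (an illustrative figure,
# not one from the interview), that shrinks to a few months:
print(round(core_hours / 10_000 / hours_per_year * 12, 1))  # 3.4
```

In other words, nearly three millennia of single-core compute, which is why supercomputer-scale allocations were a precondition for the research.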

Gardner: Let's examine the commercialization potential of this. You're not only a professor at Carnegie Mellon, you’re a founder and CEO of a few companies. Tell us about your companies and how the research is leading to business benefits.

Superhuman business strategies

Sandholm: Let’s start with Strategic Machine, a brand-new start-up company, all of two months old. It’s already profitable, and we are applying the strategic reasoning technology, which again is application independent, along with the Libratus technology, the Lengpudashi technology, and a host of other technologies that we have exclusively licensed to Strategic Machine. We are doing research and development at Strategic Machine as well, and we are taking these to any application that wants us.


Such applications include business strategy optimization, automated negotiation, and strategic pricing. Typically when people do pricing optimization algorithmically, they assume that either their company is a monopolist or the competitors’ prices are fixed, but obviously neither is typically true.

We are looking at how you price strategically, taking into account the opponents’ strategic responses in advance. So you price into the future, instead of just pricing reactively. The same can be done for product portfolio optimization along with pricing.

Let's say you're a car manufacturer and you decide what product portfolio you will offer and at what prices. Well, what you should do depends on what your competitors do and vice versa, but you don’t know that in advance. So again, it’s an imperfect-information game.
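As a toy illustration of that point (the two-firm setup and profit numbers are hypothetical, not from the interview): a firm that optimizes against a fixed competitor price gets a very different answer than one that looks for the mutually consistent outcome, where each price is a best response to the other.

```python
from itertools import product

# Hypothetical example: two firms each set a High or Low price.
# PROFIT[(a, b)] gives (firm A's profit, firm B's profit).
PRICES = ["High", "Low"]
PROFIT = {
    ("High", "High"): (6, 6),
    ("High", "Low"):  (1, 8),
    ("Low",  "High"): (8, 1),
    ("Low",  "Low"):  (3, 3),
}

def best_response(opponent_price, me):
    """Profit-maximizing price given the rival's price (me: 0 = A, 1 = B)."""
    if me == 0:
        return max(PRICES, key=lambda p: PROFIT[(p, opponent_price)][0])
    return max(PRICES, key=lambda p: PROFIT[(opponent_price, p)][1])

def nash_equilibria():
    """All price pairs where each firm is best-responding to the other."""
    return [(a, b) for a, b in product(PRICES, PRICES)
            if best_response(b, 0) == a and best_response(a, 1) == b]

print(nash_equilibria())  # [('Low', 'Low')]
```

Pricing against a rival assumed fixed at High suggests undercutting for a profit of 8, but the only equilibrium here is (Low, Low) with profits of (3, 3): anticipating the rival's strategic response changes the answer.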

Gardner: And these are some of the most difficult problems that businesses face. They have huge billion-dollar investments that they need to line up behind for these types of decisions. Because of that pipeline, by the time they get to a dynamic environment where they can assess -- it's often too late. So having the best strategic reasoning as far in advance as possible is a huge benefit.

Sandholm: Exactly! If you think about machine learning traditionally, it's about learning from the past. But strategic reasoning is all about figuring out what's going to happen in the future. And you can marry these up, of course, where the machine learning gives the strategic reasoning technology prior beliefs, and other information to put into the model.

There are also other applications. For example, cyber security has several, such as finding zero-day vulnerabilities. You can run your custom algorithms and standard algorithms to find them, and which algorithms you should run depends on what the opposing governments run -- so it is a game.

Similarly, once you find them, how do you play them? Do you report your vulnerabilities to Microsoft? Do you attack with them, or do you stockpile them? Again, your best strategy depends on what all the opponents do, and that's also a very strategic application.

And in upstairs block trading, in finance, it’s the same thing: A few players, very big, very strategic.

Gaming your own immune system

The most radical application is something that we are working on currently in the lab where we are doing medical treatment planning using these types of sequential planning techniques. We're actually testing how well one can steer a patient's T-cell population to fight cancers, autoimmune diseases, and infections better by not just using one short treatment plan -- but through sophisticated conditional treatment plans where the adversary is actually your own immune system.

Gardner: Or cancer is your opponent, and you need to beat it?

Sandholm: Yes, that’s right. There are actually two different ways to think about that, and they lead to different algorithms. We have looked at it where the actual disease is the opponent -- but here we are actually looking at how do you steer your own T-cell population.

Gardner: Going back to the technology, we've heard quite a bit from HPE about more memory-driven and edge-driven computing, where the analysis can happen closer to where the data is gathered. Are these advances of any use to you in better strategic reasoning algorithmic processing?

Algorithms at the edge

Sandholm: Yes, absolutely! We actually started running at the PSC on an earlier supercomputer, maybe 10 years ago, which was a shared-memory architecture. And then with Bridges, which is mostly a distributed system, we used distributed algorithms. As we go into the future with shared memory, we could get a lot of speedups.

We have both types of algorithms, so we know that we can run on both architectures. But obviously, the shared-memory, if it can fit our models and the dynamic state of the algorithms, is much faster.

Gardner: So the HPE Machine must be of interest to you: HPE’s advanced concept demonstration model, with a memory-driven architecture, photonics for internal communications, and so forth. Is that a technology you're keeping a keen eye on?



Sandholm: Yes. That would definitely be a desirable thing for us, but what we really focus on is the algorithms and the AI research. We have been very fortunate in that the PSC and HPE have been able to take care of the hardware side.

We really don’t get involved in the hardware side that much, and I'm looking at it from the outside. I'm trusting that they will continue to build the best hardware and maintain it in the best way -- so that we can focus on the AI research.

Gardner: Of course, you could help supplement the cost of the hardware by playing superhuman poker in places like Las Vegas, and perhaps doing quite well. 

Sandholm: Actually here in the live game in Las Vegas they don't allow that type of computational support. On the Internet, AI has become a big problem on gaming sites, and it will become an increasing problem. We don't put our AI in there; it’s against their site rules. Also, I think it's unethical to pretend to be a human when you are not. The business opportunities, the monetary opportunities in the business applications, are much bigger than what you could hope to make in poker anyway.

Gardner: I’m afraid we’ll have to leave it there. We have been learning how Carnegie Mellon University researchers are using strategic reasoning advances, applying them to poker as a benchmark -- but clearly with a lot more runway in terms of other business and strategic reasoning benefits.

So a big thank you to our guest, Tuomas Sandholm, Professor at Carnegie Mellon University as well as Director of the Electronic Marketplaces Lab there.

Sandholm: Thank you, my pleasure.

Gardner: And a big thank you to our audience as well for joining this BriefingsDirect Voice of the Customer digital transformation success story discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.


Transcript of a discussion on how Carnegie Mellon University researchers are advancing strategic reasoning and machine learning capabilities using high performance computing. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.



Thursday, October 05, 2017

Philips Teams with HPE on Ecosystem Approach to Improve Healthcare Informatics Outcomes

Transcript of a discussion on how an ecosystem approach brings improved healthcare informatics outcomes thanks to using advanced big data and analytics.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Please stay with us as we learn how agile businesses are fending off disruption -- in favor of innovation.

Our next business transformation use-case discussion focuses on how an ecosystem approach brings about improved healthcare informatics outcomes. We will now learn how a Philips Healthcare Informatics and Hewlett Packard Enterprise (HPE) partnership creates new solutions for the global healthcare market and provides better health outcomes for patients.

Here to explain how companies tackle the complexity of solutions delivery in healthcare by using advanced big data and analytics is Martijn Heemskerk, Healthcare Informatics Ecosystem Director for Philips, based in Eindhoven, the Netherlands. Welcome, Martijn.

Martijn Heemskerk: Thank you for having me.

Gardner: Why are partnerships so important in healthcare informatics? Is it because there are clinical considerations combined with big data technology? Why are these types of solutions particularly dependent upon an ecosystem approach?

Partner up

Heemskerk: It’s exactly as you say, Dana. At Philips we are very strong at developing clinical solutions for our customers. But nowadays those solutions also require an IT infrastructure layer underneath to solve the total equation. As such, we are looking for partners in the ecosystem because we at Philips recognize that we cannot do everything alone. We need partners in the ecosystem that can help address the total solution -- or the total value proposition -- for our customers.

Gardner: I'm sure it varies from region to region, but is there a cultural barrier in some regard to bringing cutting-edge IT in particular into healthcare organizations? Or have things progressed to where technology and healthcare converge?

Heemskerk: Of course, there are some countries that are more mature than others. Therefore the level of healthcare and the type of solutions that you offer to different countries may vary. But in principle, many of the challenges that hospitals everywhere are going through are similar.

Some of the not-so-mature markets are also trying to leapfrog so that they can deliver different solutions that are up to par with the mature markets.

Gardner: Because we are hearing a lot about big data and edge computing these days, we are seeing the need for analytics at a distributed architecture scale. Please explain how big data changes healthcare.

Big data value add

Heemskerk: What is very interesting is what happens when you combine big data with value-based care. For example, nowadays a hospital is not reimbursed for every procedure it performs -- the reimbursement is based more on the total outcome of how a patient recovers.

This means that more analytics need to be gathered across different elements of the process chain before reimbursement will take place. In that sense, analytics become very important for hospitals in measuring how efficiently things are being done and determining whether the costs are in line.

Gardner: The same data that can be used to improve efficiency can also be used for better healthcare outcomes -- for understanding the path of a disease, or the efficacy of procedures, and so on. A great deal can be gained when data is gathered and used properly.

Heemskerk: That is correct. And you see, indeed, that there is much more data nowadays, and you can utilize it for all kind of different things.

Learn About HPE
Solutions
That Drive Healthcare and Life Sciences

Gardner: Please help us understand the relationship between your organization and HPE. Where does your part of the value begin and end, and how does HPE fill their role on the technology side?

Healthy hardware relationships 

Heemskerk: HPE has been a highly valued supplier of Philips for quite a long time. We use their technologies for all kinds of different clinical solutions. For example, all of the hardware that we use for our back-end solutions or for advanced visualization is sourced by HPE. I am focusing very much on the commercial side of the game, so to speak, where we are really looking at how can we jointly go to market.

As I said, customers are really looking for one-stop shopping, a complete value proposition, for the challenges that they are facing. That’s why we partner with HPE on a holistic level.

Gardner: Does that involve bringing HPE into certain accounts and vice versa, and then going in to provide larger solutions together?

Heemskerk: Yes, that is exactly the case, indeed. We recognized that we shouldn't focus only on problems related to the clinical applications, nor only on the problems that HPE addresses -- the IT infrastructure and connectivity side of the value chain. Instead, we are really looking at the problems that the C-suite-level healthcare executives are facing.

You can think about healthcare industry consolidation, for example, as a big topic. Many hospitals are now moving into a cluster or into a network and that creates all kinds of challenges, both on the clinical application layer, but also on the IT infrastructure. How do you harmonize all of this? How do you standardize all of your different applications? How do you make sure that hospitals are going to be connected? How do you align all of your processes so that there is a more optimized process flow within the hospitals?

By addressing these kinds of questions and jointly going to our customers with HPE, we can improve the user experience for the customers, create better services, optimize the solutions, and deliver a lot of time savings for the hospitals as well.


Gardner: We have certainly seen in other industries that if you try IT modernization without including the larger organization -- the people, the process, and the culture -- the results just aren’t as good. It is important to go at modernization and transformation, consolidation of data centers, for example, with that full range of inputs and getting full buy-in.

Who else makes up the ecosystem? It takes more than two players to make an ecosystem.

Heemskerk: Yes, that's very true, indeed. In this, system integrators also have a very important role. They can have an independent view on what would be the best solution to fit a specific hospital.

Of course, we think that the Philips healthcare solutions are quite often the best, jointly focused with the solutions from HPE, but from time to time you can be partnering with different vendors.

Besides that, we don't have all of the clinical applications ourselves. By partnering with other vendors in the ecosystem, we can sometimes enhance the solutions that we offer -- 3D solutions and 3D printing solutions, for example.

Gardner: When you do this all correctly, when you leverage and exploit an ecosystem approach, when you cover the bases of technology, finance, culture, and clinical considerations, how much of an impressive improvement can we typically see?

Saving time, money, and people

Heemskerk: We try to look at it customer by customer, but generically what we see is that there are really a lot of savings.

First of all, addressing standardization across the clinical application layer means that a customer doesn't have to spend a lot of money on training all of its hospital employees on different kinds of solutions. So that's already a big savings.

Secondly, by harmonizing and making better effective use of the clinical applications, you can drive the total cost of ownership down.

Thirdly, it means that on the clinical applications layer, there are a lot of efficiency benefits possible. For example, advanced analytics make it possible to reduce the time that clinicians or radiologists are spending on analyzing different kinds of elements, which also creates time savings.

Gardner: Looking more to the future, as technologies improve, as costs go down, as they typically do, as hybrid IT models are utilized and understood better -- where do you see things going next for the healthcare sector when it comes to utilizing technology, utilizing informatics, and improving their overall process and outcomes?


Heemskerk: What for me would be very interesting to see is whether we can create some kind of patient-centric data file for each patient. You see that consumers are increasingly engaged in their own health, with all the different devices like Fitbit, Jawbone, Apple Watch, etc. coming up. This is creating a massive amount of data. But there is much more data that you can put into such a patient-centric file -- chronic disease information, for example, now that people are being monitored much more, and much more often.

If you can have a chronological view of all of the different touch points that the patient has in the hospital, combined with the drugs that the patient is using etc., and you have that all in this patient-centric file -- it will be very interesting. And everything, of course, needs to be interconnected. Therefore, Internet of Things (IoT) technologies will become more important. And as the data is growing, you will have smarter algorithms that can also interpret that data – and so artificial intelligence (AI) will become much more important.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring how an ecosystem approach brings improved healthcare information benefits. And we have also learned how a Philips Healthcare Informatics and Hewlett Packard Enterprise partnership combines forces to create new solutions in the global healthcare field.

Please join me in thanking our guest, Martijn Heemskerk, Healthcare Informatics Ecosystem Director for Philips, based in Eindhoven, the Netherlands. Thank you, sir.

Heemskerk: Thank you, very much, Dana, for having me.

Gardner: And a big thank you as well to our audience for joining this latest BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews. Thanks again for listening, and do please come back next time.


Transcript of a discussion on how an ecosystem approach brings improved healthcare informatics outcomes thanks to using advanced big data and analytics. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.




Tuesday, September 26, 2017

Inside Story: How Ormuco Abstracts the Concepts of Private and Public Cloud Across the Globe

Transcript of a discussion on how a Canadian software provider has crafted a standards-based hybrid cloud platform to target global markets using Cloud28+.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.

Our next thought leadership interview explores how a Canadian software provider delivers a hybrid cloud platform for enterprises and service providers alike. We will now learn how Ormuco has identified underserved regions and has crafted a standards-based hybrid cloud platform to allow its users to attain world-class cloud services.

Here to help us explore how new breeds of hybrid cloud are coming to more providers around the globe thanks to the Cloud28+ consortium, we welcome Orlando Bayter, CEO and Founder of Ormuco in Montréal. Welcome.

Orlando Bayter: Thank you for having us.

Gardner: We are also here with Xavier Poisson Gouyou Beauchamps, Vice President of Worldwide Indirect Digital Services at Hewlett Packard Enterprise (HPE), based in Paris. Welcome, Xavier.

Xavier Poisson Gouyou Beauchamps: Good morning.

Gardner: Let’s begin with this notion of underserved regions. Orlando, why is it that many people think that public cloud is everywhere for everyone when there are many places around the world where it is still immature? What is the opportunity to serve those markets?

Bayter: There are many countries underserved by the hyperscale cloud providers. If you look at Russia, the United Arab Emirates (UAE), and other places around the world, they want to comply with regulations on security and data sovereignty, and they need to have the clouds locally to comply.

Ormuco targets those countries that are underserved by the hyperscale providers and enables service providers and enterprises to consume cloud locally, in ways they can’t do today.

Gardner: Are you allowing them to have a private cloud on-premises as an enterprise? Or do local cloud providers offer a common platform, like yours, so that they get the best of both the private and public hybrid environment?

Best of both clouds

Bayter: That is an excellent question. There are many workloads that cannot leave the firewall of an enterprise. So you now need to deliver the economies, ease of use, flexibility, and orchestration of a public cloud experience inside the enterprise. At Ormuco, we deliver a platform that provides the best of the two worlds. You can still extend beyond your data center, and you don't need to worry about whether a workload is on-premises or off-premises.

It’s a single pane of glass. You can move the workloads in that global network via established providers throughout the ecosystem of cloud services.

Gardner: What are the attributes of this platform that both your enterprise and service provider customers are looking for? What’s most important to them in this hybrid cloud platform?

Bayter: As I said, there are some workloads that cannot leave the data center. In the past, you couldn’t get the public cloud inside your data center. You could have built a private cloud, but you couldn’t get an Amazon Web Services (AWS)-like solution or a Microsoft Azure-like solution on-premises.

We have been running this now for two years, and what we have noticed is that enterprises want to have the ease of use, self-service, and orchestration on-premises. Now, they can connect to a public cloud based on the same platform, and they don’t have to worry about how to connect it or how it will work. They just decide where to place things.

They have security, can comply with regulations, and gain control -- plus 40 percent savings compared with VMware, and up to 50 to 60 percent compared with AWS.

Gardner: I’m also interested in the openness of the platform. Do they have certain requirements as to the cloud model, such as OpenStack?  What is it that enables this to be classified as a standard cloud?

Bayter: At Ormuco, we went out and checked what are the best solutions and the best platform that we can bring together to build this experience on-premises and off-premises.

We saw OpenStack, we saw Docker, and then we looked at how to take, for example, OpenStack and make it like a public cloud solution. The way I see OpenStack, it is the concrete, or foundation. If you want to build a house or a condo on it, you need everything above that foundation, up to the attic. Ormuco builds that software to deliver that cloud look and feel, that self-service, all with open tools, with the same APIs on both private and public clouds.


Gardner: What is it about the HPE platform beneath that that supports you? How has HPE been instrumental in allowing that platform to be built?

Community collaboration

Bayter: HPE has been a great partner. Through Cloud28+ we are able to go to markets in places that HPE has a presence. They basically generate that through marketing, through sales. They were able to bring deals to us and help us grow our business.

From a technology perspective, we are using HPE Synergy. With Synergy, we can provide composability, and we can combine storage and compute into a single platform. Now we go together into a market, we win deals, and we solve the enterprise challenges around security and data sovereignty.

Gardner: Xavier, how is Cloud28+ coming to market, for those who are not familiar with it? Tell us a bit about Cloud28+ and how an organization like Ormuco is a good example of how it works.

Poisson Gouyou Beauchamps
Poisson: Cloud28+ is a community of IT players -- service providers, technology partners, independent software vendors (ISVs), value added resellers, and universities -- that have decided to join forces to enable digital transformation through cloud computing. To do that, we pull our resources together to have a single platform. We are allowing the enterprise to discover and consume cloud services from the different members of Cloud28+.

We launched Cloud28+ officially to the market on December 15, 2016. Today, we have more than 570 members from across the world inside Cloud28+. Roughly 18,000 distributed services may be consumed and we also have system integrators that support the platform. We cover more than 300 data centers from our partners, so we can provide choice.

In fact, we believe our customers need to have that choice. They need to know what is available for them. As an analogy, if you have your smartphone, you can have an app store and do what you want as a consumer. We wanted to do the same and provide the same ease for an enterprise globally anywhere on the planet. We respect diversity and what is happening in every single region.

Ormuco has been one of the first technology partners. Docker is another one. And Intel is another. They have been working together with HPE to really understand the needs of the customer and how we can deliver very quickly a cloud infrastructure to a service provider and to an enterprise in record time. At the same time, they can leverage all the partners from the catalog of content and services, propelled by Cloud28+, from the ISVs.

Global ecosystem, by choice 

Because we are bringing together a global ecosystem, including the resellers, if a service provider builds a project through Cloud28+, with a technology partner like Ormuco, then all the ISVs are included. They can push their services onto the platform, and all the resellers that are part of the ecosystem can convey onto the market what the service providers have been building.

We have a lot of collaboration with Ormuco to help them to design their solutions. Ormuco has been helping us to design what Cloud28+ should be, because it's a continuous improvement approach on Cloud28+ and it’s via collaboration.

As I like to say, “If you want to join Cloud28+ to take, don't come. If you want to give, and take a lot afterward, yes, please come, because we all receive a lot.”

Gardner: Orlando, when this all works well, what do your end-users gain in terms of business benefits? You mentioned reduction in costs, that's very important, of course. But is there more about your platform from a development perspective and an operational perspective that we can share to encourage people to explore it?

Bayter: So imagine yourself with an ecosystem like Cloud28+: more than 500 members, multiple countries, many data centers.

Now imagine that you can have the Ormuco solution on-premises in an enterprise and then be able to burst to a global network of service providers, across all those regions. You get the same performance, you get the same security, and you get the same compliance across all of that.

For an end-customer, you no longer need to think about where you’re going to put your applications. They can go to the public cloud or to the private cloud; it is agnostic. You basically place them where you want them to go and decide the economies you want to get. You can compare with the hyperscale providers.

That is the key, you get one platform throughout our ecosystem of partners that can deliver to you that same functionality and experience locally. With a community such as Cloud28+, we can accomplish something that was not possible before.

Gardner: So, just to delineate between development and operations in production: are you offering the developer an opportunity to develop there and seamlessly deploy, or are you more focused on deployment after the applications are developed, or both?

Development to deployment 

Bayter: With our solution, much as AWS or Azure allows, a developer can build their app via automated APIs; use a database of choice, whether MySQL or Oracle; and use load balancing and the other features we have in the cloud, whether it’s Kubernetes or Docker. You build all of that -- and then, when the application is ready, you decide in which region you want to deploy it.

So you go from development, to the deployment technology of your choice, whether it’s Docker or Kubernetes, and then you can deploy to the global network that we’re building on Cloud28+. You can go to any region, and you don’t have to worry about how to get a service provider contract in Russia or Brazil, or who is going to provide you with the service. Now you can get that service locally through a reseller or a distributor, or have an ISV deploy the software worldwide.
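The placement decision Bayter describes -- sensitive workloads stay behind the firewall, everything else goes wherever residency rules and economics allow -- can be sketched as a toy policy function. To be clear, everything below (the `Workload` fields, the region codes, and the return values) is invented for illustration and does not reflect Ormuco's or Cloud28+'s actual APIs.

```python
# Illustrative sketch only: a toy workload-placement rule of the kind
# described above. All names and fields here are hypothetical, not
# Ormuco's or Cloud28+'s actual API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Workload:
    name: str
    data_residency: Optional[str]  # country code the data must stay in, if any
    sensitive: bool                # must it stay behind the enterprise firewall?


def place(workload: Workload, private_regions: set, public_regions: set) -> str:
    """Pick a deployment target: keep sensitive workloads on-premises,
    honor data-residency constraints, otherwise use any public region."""
    if workload.sensitive:
        return "on-premises"
    if workload.data_residency:
        if workload.data_residency in public_regions:
            return "public:" + workload.data_residency
        if workload.data_residency in private_regions:
            return "private:" + workload.data_residency
        return "on-premises"  # no compliant region available; keep it home
    # No constraints: pick the first public region in sorted order, for determinism
    return "public:" + sorted(public_regions)[0]


print(place(Workload("billing", "ru", False), {"ca"}, {"br", "ru", "ae"}))
# prints "public:ru"
```

The point of the sketch is only that the placement logic lives in one policy, while the platform makes the chosen target look the same either way.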

Gardner: Xavier, what other sorts of organizations should be aware of the Cloud28+ network?


Poisson: We have the technology partners like Ormuco, and we are thankful for what they have brought to the community. We have service providers, of course, and software vendors, because you can publish your software in Cloud28+ and provision it on-premises or off-premises. We accelerate go-to-market for startups; they gain immediate global reach with Cloud28+. So to all the ISVs, I say, “Come on, come on guys, we will help you reach out to the market.”

System integrators also, because we see an opportunity with large enterprises and governments, with a lot of multi-cloud projects taking shape and with requirements for security. And you know what is happening with security today; it's a hot topic. So people are thinking about how they can have a multi-cloud strategy. System integrators are now turning to Cloud28+ because they find here a reservoir of all the capabilities to find the right solution and answer the right question.

Universities are another kind of member we are working with. Just to explain, we know that many technologies are created first at the university and then evolve, and many startups begin at the university level. So we have very good partnerships with universities in several regions -- in Portugal, Germany, France, and the United States. These universities are designing new projects with members of Cloud28+, to answer questions of governments, for example, or they are using Cloud28+ to propel their startups into the market.

Ormuco is also helping to change the business model of distribution. So distributors now also are joining Cloud28+. Why? Because a distributor has to make a choice for its consumers. In the past, a distributor had software inventory that they were pushing to the resellers. Now they need to have an inventory of cloud services.

There is more choice. They can purchase hyperscale services, resell, or maybe source to the different members of Cloud28+, according to the country they want to deliver to. Or they can own the platform using the technology of Ormuco, for example, and put that in a white-label model for the reseller to propel it into the market. This is what Azure is doing in Europe, typically. So new kinds of members and models are coming in.

Digital transformation

Lastly, an enterprise can use Cloud28+ to make their digital transformation. If they have services and software, they can become a supplier inside of Cloud28+. They source cloud services inside a platform, do digital transformation, and find a new go-to-market through the ecosystem to propel their offerings onto the global market.

Gardner: Orlando, do you have any examples that you could share with us of a service provider, ISV or enterprise that has white-labeled your software and your capabilities as Xavier has alluded to? That’s a really interesting model.

Bayter: We have been able to go-to-market to countries where Cloud28+ was a tremendous help. If you look at Western Europe, Xavier was just speaking about Microsoft Azure. They chose our platform and we are deploying it in Europe, making it available to the resellers to help them transform their consumption models.

If you look at the Europe, Middle East and Africa (EMEA) region, we have one of the largest managed service providers. They provide public cloud and they serve many markets. They provide a community cloud for governments and they provide private clouds for enterprises -- all from a single platform.

We also have several of the largest telecoms in Latin America (LATAM) and EMEA. We have a US presence, where we have Managed.com as a provider. So things are going very well and it is largely thanks to what Cloud28+ has done for us.

Gardner: While this consortium is already very powerful, we are also seeing new technologies coming to the market that should further support the model. Such things as HPE New Stack, which is still in the works, HPE Synergy’s composability and auto-bursting, along with security now driven into the firmware and the silicon -- it’s almost as if HPE’s technology roadmap is designed for this very model, or very much in alignment. Tell us how new technology and the Cloud28+ model come together.

Bayter: So HPE New Stack is becoming the control point of multi-cloud. Now what happens when you want to have that same experience off-premises and on-premises? New Stack could connect to Ormuco as a resource provider, even as it connects to other multi-clouds.

With an ecosystem like Cloud28+ all working together, we can connect those hybrid models with service providers to deliver that experience to enterprises across the world.


Gardner: Xavier, anything more in terms of how HPE New Stack and Cloud28+ fit? 

Partnership is top priority

Poisson: It’s a real collaboration. I am very happy with that because I have been working a long time at HPE, and New Stack is a project that has been driven by thinking about the go-to-market at the same time as the technology. It’s a big reward to all the Cloud28+ partners because they are now de facto considered as resource providers for our end-user customers – same as the hyperscale providers, maybe.

At HPE, we say we put partnership first -- with our partners, our ecosystem, our channel. I believe that what we are doing with Cloud28+, New Stack, and all the other projects that we are describing -- this will be the reality around the world. We deliver on-premises for the channel partners.

Gardner: I’m afraid we will have to leave it there. We have been exploring how a Canadian software provider delivers a hybrid cloud platform for enterprises and service providers alike. We have also learned how Cloud28+ offers an ecosystem and network for global distribution of providers like Ormuco. And we certainly heard about the runway to the future for such multi-cloud management capabilities as HPE New Stack.

Please join me in thanking our guests, Xavier Poisson Gouyou Beauchamps, Vice President of Worldwide Indirect Digital Services at HPE, based in Paris. Thank you so much, Xavier.

Poisson: Thank you.

Gardner: We have also been here with Orlando Bayter, CEO and Founder at Ormuco in Montréal. Thank you.

Bayter: Thank you.

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect Voice of the Customer digital transformation success story discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how a Canadian software provider has crafted a standards-based hybrid cloud platform to target global markets using Cloud28+. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.
