
Tuesday, February 24, 2015

Columbia Sportswear Sets Torrid Pace for Reaping Global Business Benefits From Software-Defined Data Center

Transcript of a BriefingsDirect discussion on how a major sportswear company has leveraged virtualization, SDDC and hybrid cloud to reap substantial business benefits.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to the special BriefingsDirect podcast series coming to you directly from the recent VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT Strategy Discussions.

We’re here in San Francisco to explore the latest developments in hybrid cloud computing, end-user computing, and software-defined data center (SDDC).

Our next innovator case study interview focuses on Columbia Sportswear in Portland, Oregon. We're joined by a group from Columbia Sportswear, and we'll learn more about how they've made the journey to SDDC. We'll see how they’ve made great strides in improving their business results through IT, and where they expect to go next with their software-defined efforts.

To learn more, please join me in welcoming our guests, Suzan Pickett, Manager of Global Infrastructure Services at Columbia Sportswear; Tim Melvin, Director of Global Technology Infrastructure at Columbia, and Carlos Tronco, Lead Systems Engineer at Columbia Sportswear. Welcome.

Gardner: People are familiar with your brand, but they might not be familiar with your global breadth. Tell us a little bit about the company, so we appreciate the task ahead of you as IT practitioners.

Pickett: Columbia Sportswear is in its 75th year. We're a leader in global manufacturing of apparel, outdoor accessories, and equipment. We're distributed worldwide and we have infrastructure in 46 locations around the world that we manage today. We're very happy to say that we're 100 percent virtualized on VMware products.

Gardner: And those 46 locations, those aren't your retail outlets. That's just the infrastructure that supports your retail. Is that correct?

Pickett: Exactly, our retail footprint in North America is around 110 retail stores today. We're looking to expand that with our joint venture in China over the next few years with Swire, distributor of Columbia Sportswear products.

Gardner: You're clearly a fast-growing organization, and retail itself is a fast-changing industry. There’s lots going on, lots of data to crunch -- gaining more inference about buyer preferences --  and bringing that back into a feedback loop. It’s a very exciting time.

Tell me about the business requirements that you've had that have led you to reinvest and re-energize IT. What are the business issues that are behind that?

Global transformation

Pickett: Columbia Sportswear has been going through a global business transformation. We've been refreshing our enterprise resource planning (ERP). We had a green-field implementation of SAP. We just went live with North America in April of this year, and it was a very successful go-live. We're 100 percent virtualized on VMware products and we're looking to expand that into Asia and Europe as well.

So with our global business transformation also comes our consumer experience, on the retail side as well as wholesale. IT is looking to deliver service to the business, so they can become more agile, focus on engineering better products and better design, and get that out to the consumer.

Gardner: To be clear, your retail efforts are not just brick and mortar. You're also doing it online and perhaps even now extending into the mobile tier. Any business requirements there that have changed your challenges?

Pickett: Absolutely. We're really pleased to announce, as of summer 2014, that Columbia Sportswear is an AirWatch customer as well. So we get to expand our end-user computing and our VMware Horizon footprint, as well as some of our SDDC strategies.

We're looking at expanding not only our e-commerce and brick-and-mortar, but being able to deliver more mobile platform-agnostic solutions for Columbia Sportswear, and extend that out to not only Columbia employees, but our consumer experience.

Gardner: Let’s hear from Tim about your data center requirements. How does what Suzan told us about your business challenges translate into IT challenges?

Melvin: With our business changing and growing as quickly as it is, and with us doing business and selling directly to consumers in more than 100 countries around the world, our data centers have to be adaptable. Our data and our applications have to be secure and available, no matter where we are in the world, whether you're on the network or off-premises.

The SDDC has been a game-changer for us. It's allowed us to take those technologies, host them where we need them, with whatever cost configuration makes sense, whether it's in the cloud or on-premises, and deliver the solutions that our business needs.

Gardner: Let's do a quick fact-check in terms of where you are in this journey to SDDC. It includes a lot. There are management aspects, network aspects, software-defined storage, and then of course mobile. Does anybody want to give me the report card on where you are in terms of this journey?

100 percent virtualized

Pickett: We're 100 percent virtualized with our compute workloads today. We also have our storage well-defined with virtualized storage. We're working on an early adoption proof of concept (POC) with VMware's NSX for software-defined networking.

It's really the next step in defining our SDDC: being able to leverage all of our virtual workloads, extend them into the vCloud Air hybrid cloud, and burst our workloads to expand our data centers and our toolsets. So we're looking forward to the next step of our journey, which is software-defined networking via NSX.

Gardner: Taking that network plunge, what about the public-cloud options for your hybrid cloud? Do you use multiple public clouds, and what's behind your choice on which public clouds to use?

Melvin: When you look at infrastructure and the choice between on-premises solutions, hybrid clouds, and public and private clouds, I don't think it's necessarily a choice of which answer you pick. There isn't one right answer. What's important for infrastructure professionals is to understand the whole portfolio, and to understand where to apply your high-power, on-premises equipment and where to use your lower-cost public cloud, because there are trade-offs in each case.

When we look at our workloads, we try to present the correct tool for the correct job. For instance, our completely virtualized SAP environment runs on internal, on-premises equipment. When we start to talk about development in a sandbox, those cases are probably best served in a public cloud, as long as we can secure and automate just as we can on-site.
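
That "correct tool for the correct job" reasoning amounts to a placement policy. Here is a minimal sketch in Python, assuming hypothetical workload attributes and target names rather than Columbia's actual rules:

# Hypothetical workload-placement policy illustrating the portfolio
# approach described above. Attribute names, tiers, and targets are
# illustrative assumptions, not Columbia Sportswear's actual logic.

def place_workload(workload):
    """Return a placement target for a workload described by a dict."""
    tier = workload.get("tier")                    # "production", "dev", "sandbox"
    sensitivity = workload.get("data_sensitivity", "low")

    # Mission-critical or sensitive systems (such as a fully virtualized
    # SAP landscape) stay on high-power, on-premises equipment.
    if tier == "production" or sensitivity == "high":
        return "on-premises vSphere cluster"

    # Development and sandbox work can go to lower-cost public cloud,
    # provided it is secured and automated just like on-site workloads.
    return "public cloud (e.g., vCloud Air)"

print(place_workload({"tier": "production"}))  # on-premises vSphere cluster
print(place_workload({"tier": "sandbox"}))     # public cloud (e.g., vCloud Air)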

Gardner: As you're progressing through SDDC and exploring what works best both technically and economically in a hybrid cloud environment, what are you doing in terms of your data lifecycle? Is there a disaster recovery (DR) element to this? Are you doing warehousing in a different way and distributing that, or are you centralizing it? I know that analysis of data is super important for retail organizations. Any thoughts about the data component of this overall architecture?

Pickett: Data is really becoming a primary concern for Columbia Sportswear, especially as we get into more analytical situations. Today, we have our two primary data centers in North America, which we protect with VMware's vCenter Site Recovery Manager (SRM), a very robust DR solution.

We're very excited to work with an enterprise-class cloud like vCloud Air that has not only the services that we need to host our systems, but also DR as a service, which we're very interested in pursuing, especially around our remote branch office scenarios. In some of those remote countries, we don't have that protection today, and it will give a little more business continuity or disaster avoidance, as needed.

As we look at data in our data centers, our primary data centers with big data, if you will, and/or enterprise data warehouse strategies, we've started looking at how we're replicating the data where that data lives. We've started getting into active data center scenarios -- active, active.

We're really excited about some of the announcements we've heard recently at VMworld around virtual volumes (VVOLs) and where that's going to take us in the next couple of years, specifically around vMotion over long distance. Hopefully, we'll follow the sun, and maybe five years from now, we'll be able to move our workloads from North America to Asia and have those workloads follow where the people are using them.

Geographic element

Gardner: That’s really interesting about that geographic element if you're a global company. I haven't heard that from too many other organizations. That’s an interesting concept about moving data and workloads around the world throughout the day.

We've seen some recent VMware news around different types of cloud data offerings, Cloud Object Store for example, and moving to a virtual private cloud on demand. Where do you see the next challenges for your organization, and how do you feel that VMware is setting the goal posts for you?

Tronco: The vCloud Air offerings that we've heard so much about are an exciting innovation.

Public clouds have been available for a long time. There are a lot of places where they make sense, but vCloud Air, being an enterprise-class offering, gives us the management capability and allows us to use the same tools that we would use on-site.

It gives us the control that we need in order to provide a consistent experience to our end-users. I think there is a lot of power there, a lot of capability, and I'm really excited to see where that goes.

Gardner: How about some of the automation issues with the vRealize Suite, such as Air Automation? Where do you see the component of managing all this? It becomes more complex when you go hybrid. It becomes, in one sense, more standardized and automated when you go software-defined, but you also have to have your hands on the dials and be able to move things.

Tronco: One of the things that we really like about vCloud Air is the fact that we'll be able to use the same tools on-premises and off-premises, and won't have to switch between tools or dashboards. We can manage that infrastructure whether it's on-premises or in the public cloud, and we'll be able to leverage the efficiencies we have on-premises in vCloud Air as well.

We also can take advantage of some of those new services, like ObjectStore, that might be coming down the road, or even continuous integration (CI) as a service for some of our development teams as we start to get more into a DevOps world.
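
Tronco's point, one toolchain for both venues, can be illustrated with the open-source pyVmomi vSphere SDK. This is a minimal sketch, assuming placeholder hostnames and credentials and omitting certificate handling; it is not Columbia's actual tooling:

# Minimal sketch: the same pyVmomi code can manage an on-premises
# vCenter or a cloud-hosted vSphere endpoint; only the connection
# details change. Hostnames and credentials are placeholders, and
# SSL certificate handling is omitted for brevity.
from pyVim.connect import SmartConnect, Disconnect

def list_datacenters(host, user, pwd):
    si = SmartConnect(host=host, user=user, pwd=pwd)
    try:
        content = si.RetrieveContent()
        # rootFolder.childEntity holds the datacenters (and folders).
        return [entity.name for entity in content.rootFolder.childEntity]
    finally:
        Disconnect(si)

# The on-premises and cloud endpoints are interchangeable to the tooling.
print(list_datacenters("vcenter.example.internal", "admin", "secret"))
print(list_datacenters("vcloud-air.example.com", "admin", "secret"))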

Customer reactions

Gardner: Let’s tie this back to the business. It's one thing to have a smooth-running, agile IT infrastructure machine. It's great to have an architecture that you feel is ready to take on your tasks, but how do you translate that back to the business? What does it get for you in business terms, and how are you seeing reactions from your business customers?

Pickett: We're really excited to be partnering with the business today. As IT comes out from underground a little bit and starts working more with the business and understanding their requirements -- especially with tools like VMware vRealize Automation, part of the vCloud Suite -- we're now partnering with our development teams to become more agile and help them deliver faster services to the business.

We're working on one of our e-commerce order-confirmation toolsets with vRealize Automation, part of the vCloud Suite, and on the development team's ability to package and replicate the work they're doing, rather than reinventing the wheel every time we build out an environment or they need a test or development script.

By partnering with them and enabling them to be more agile, IT wins. We become more services-oriented. Our development teams are winning, because they're delivering faster to the business and the business wins, because now they're able to focus more on the core strategies for Columbia Sportswear.

Gardner: Do you have any examples that you can point to where there's been a time-to-market benefit, a time-to-value faster upgrade of an application, or even a data service that illustrates what you've been able to deliver as a result of your modernization?

Pickett: Going back to the toolset I just mentioned: that was an upgrade process, and we took that opportunity to sit down with our development team and start socializing some of the ideas around VMware vRealize Automation and vCloud Air, and being able to extend some of our services to them.

At the same time, our e-commerce teams are going through an upgrade process. So rather than taking weeks or months to deliver this technology to them, we were able to sit down, start working through the process, automate some of those services that they're doing, and start delivering. So, we started with development, worked through the process, and now we have quality assurance and staging and we're delivering product. All this is happening within a week.

So we're really delivering, and we're being more agile and more flexible. That's a very good use case for us internally from an IT standpoint. It's a big win for us, and we're going to apply it the next time we go through an upgrade process.

We've had this big win and now we're going to be looking at other technologies -- Java, .NET, or other solutions -- so that we can deliver and continue the success story that we're having with the business. This is the start of something pretty amazing, bringing development and infrastructure together and mobilizing what Columbia Sportswear is doing internally.
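
The pattern Pickett describes -- define the environment once, then stamp it out for development, QA, and staging -- can be sketched generically. The blueprint structure and provision() helper below are illustrative assumptions, not the vRealize Automation API:

# Hypothetical illustration of "package once, replicate per environment."
# The blueprint fields and provision() helper are assumptions for this
# sketch, not the actual vRealize Automation interface.

BLUEPRINT = {
    "app": "ecommerce-order-confirmation",
    "vms": [
        {"role": "web", "cpus": 2, "memory_gb": 8},
        {"role": "app", "cpus": 4, "memory_gb": 16},
    ],
}

def provision(blueprint, environment):
    """Stand up one copy of the blueprint in the named environment."""
    for vm in blueprint["vms"]:
        name = f"{blueprint['app']}-{environment}-{vm['role']}"
        print(f"creating {name}: {vm['cpus']} vCPU / {vm['memory_gb']} GB")

# The same definition is replicated instead of rebuilt by hand each time.
for env in ("dev", "qa", "staging"):
    provision(BLUEPRINT, env)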

Gardner: Of course, we call it SDDC, but it leads to a much more comprehensive, integrated IT function, as you say, extending from development, test, build, operations, and cloud, and then sourcing things as required for data warehouses and application sets. So finally, in IT, after 30 or 40 years, we really have a unified vision, if you will.

Any thoughts, Tim, on where that unification will lead to even more benefits? Are there ancillary benefits from a virtuous adoption cycle that come to mind from that more holistic whole-greater-than-the-sum-of-the-parts IT approach?

Flexibility and power

Melvin: The closer we get to a complete software-defined infrastructure, the more flexibility and power we have to remove the manual components, the things that we all do a little differently and we can't do consistently.

We have a chance to automate more. We have the chance to provide integrations into other tools, which is actually a big part of why we chose VMware as our platform. They allow such open integration with partners that, as we start to move our workloads more actively into the cloud, we know that we won't get stuck with a particular product or a particular configuration.

The openness will allow us to adapt and change, and that’s just something you don't get with hardware. If it's software-defined, it means that you can control it and you can morph your infrastructure in order to meet your needs, rather than needing to re-buy every time something changes with the business.

Gardner: Of course, we think about not just technology, but people and process. How has all of this impacted your internal IT organization? Are you, in effect, moving people around, changing organizational charts, perhaps getting people doing things that they enjoy more than those manual tasks? Carlos, any thought about the internal impact of this on your human resources issues?

Tronco: Organizationally, we haven't changed much, but the use of something like vRealize Automation allows us to let development teams do some of those tasks that they used to require us to do.

Now, we can do it in an automated fashion. We get consistency. We get the security that we need. We get the audit trail. But we don’t have to have somebody around on a Saturday for two minutes of work spread across eight hours. It also lets those application teams be more agile and do things when they're ready to do them.

Having that time free lets us do a better job with engineering, look down the road with a little more clarity, maybe try some other things, and have more time to look at different options for whatever comes next.

Melvin: Another point there is that, in a fully software-defined infrastructure, while it may not directly translate into organizational changes, it allows you to break down silos. Today, we have operations, system storage, and database teams working together on a common platform that they're all familiar with and they all understand.

We can all leverage the tools and configurations. That's really powerful. When you don't have the network guys sitting off doing things differently from what the server guys are doing, you can focus more on comprehensive solutions, and that extends right into the development space, as Carlos mentioned. The next step is to work just as closely with our developers as we do with our peers in infrastructure.

Gardner: It sounds as if you're now also in a position to be more fleet. We all have higher expectations as consumers. When I go to a website or use an application, I expect that I'll see the product that I want, that I can order it, that it gets paid for, and then track it. There is a higher expectation from consumers now.

Is that part of your business payback that you tie into IT? Is there some way that we can define the relationship between that user experience for speed and what you're able to do from a software-defined perspective?

Preventing 'black ops'

Pickett: As an internal service provider for Columbia Sportswear, we can do it better, faster, and cheaper on-premises, with toolsets from our partners at VMware. This helps prevent "black ops" situations, for example, where someone goes out to another cloud provider outside the parameters and guidelines set by IT.

Today, we're partnering with the business. We're delivering that service. We're doing it at the speed of thought. We're not in a position where we're saying "no," "not yet," or "maybe in a couple of weeks," but "Yes, we can do that for you." So it's a very exciting position to be in. If someone comes to us, or if we're reaching out and having conversations about tools, features, or functionality, we're getting a lot of momentum around utilizing those toolsets and being able to expand our services to the business.

Tronco: Using those tools also allows us to turn around things faster within our development teams, to iterate faster, or to try and experiment on things without a lot of work on our part. They can try some of it, and if it doesn’t work, they can just tear it down.

Gardner: So you've gone through this journey and you're going to be plunging in deeper with software-defined networking. You have some early-adopter chops here. You guys have been bold and brave.

What advice might you offer to some other organizations that are looking at their data-center architecture and strategy, thinking about the benefits of hybrid cloud, software-defined, and maybe trying to figure out in which order to go about it?

Pickett: I'd recommend that, if you haven't virtualized your workloads, you get them virtualized. We're in a no-limit situation. There are no longer restrictions or boundaries around virtualizing your mission-critical or tier-one workloads. Get it done, so you can start leveraging the portability and flexibility of that.

Start looking at the next steps, which will be automation, orchestration, provisioning, and service catalogs, and extend that into a hybrid-cloud situation, so that you can focus more on what your core offerings and core strategies are going to be. And not necessarily offload, but take advantage of some of those capabilities that you can get in VMware vCloud Air, for example, so that you can focus on what's really core to your business.

Gardner: Tim, any words of advice from your perspective?

Melvin: When it comes to solutions in IT, the important thing is to find the value and tie it back to the business. So look for the problems that your business has today, whether it's reducing capital expense through heavy virtualization, improving security within the data center through NSX and micro-segmentation, or just providing more flexible infrastructure for your temporary environments, like sandbox and software development, through the cloud.

Find those opportunities and tie them back to a value that the business understands. It's important to do something with software-defined data centers. It's not a trend, and it's not really even a question anymore. It's where we're going. So get moving down that path in whatever way you need to in order to get started. Find those partners, like VMware, that will support you, build those relationships, and just get moving.

20/20 hindsight

Gardner: Carlos, advice, thoughts about 20/20 hindsight?

Tronco: As Suzan said, it's focusing on virtualizing the workloads and then being able to leverage some of those other tools, like vRealize Automation. Then you're able to free staff up to pursue activities that add more value to the environment and the business, because you're not doing repeatable things manually. You get more consistency, and people have more time. They're not bogged down doing all these day-two, day-three operations and things that wear and grate on you.

Gardner: I suppose there's nothing like being responsive to your business constituents. That, then, enables them to ask for more help, which adds to your value, and we get into that virtuous cycle, rather than a dead end where people don't even bother to ask for help or for new and innovative ideas in business.

Congratulations. That sounds like a very impactful way to go about IT. We've been learning about how Columbia Sportswear in Portland, Oregon has been adjusting to the software-defined data center strategy and we've heard how that's brought them some business benefits in their fast-paced retail organization worldwide.

So a big thank you to our guests, Suzan Pickett, Manager of Global Infrastructure Services at Columbia Sportswear; Tim Melvin, Director of Global Technology Infrastructure, and Carlos Tronco, Lead Systems Engineer at Columbia Sportswear. Thanks so much.

And a big thank you to our audience for joining us for this special discussion series, coming to you directly from the recent 2014 VMworld Conference in San Francisco.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect IT discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect discussion on how a major sportswear company has leveraged virtualization, SDDC and hybrid cloud to reap substantial business benefits. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Tuesday, February 17, 2015

California Natural Resources Agency Gains Agility Through Software-Defined Data Center Strategy

Transcript of a BriefingsDirect discussion on how a large state agency harnesses broad virtualization to do more with less in IT while remaining agile and efficient.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to the special BriefingsDirect podcast series coming to you directly from the recent VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT Strategy Discussions.

We’re in San Francisco to explore the latest developments in hybrid cloud computing, end-user computing, software-defined data center (SDDC), and virtualization infrastructure management.

Our next innovator case study interview focuses on the California Natural Resources Agency (CNRA) in Sacramento. They have a large purview, overseeing some 25 different agencies. They've set up an SDDC and are deep into the process of maturing its value and utility.

To learn more about how the CNRA gains agility from a SDDC strategy, we welcome Tony Morshed, Chief Technology Officer for the California Resources Data Center in the California Department of Water Resources. Welcome, Tony.

Tony Morshed: Thanks.

Gardner: We’re also here with Michael Hom, Data Center Chief in the IT Infrastructure Services Branch for the California Department of Water Resources. Welcome, Michael.

Michael Hom: Good morning.

Gardner: First, gentlemen, help us understand a little bit about the size of your organization. This is a large state government department, but you're really a department of departments. Help me understand the breadth of your agency.

Morshed: Our department, Water Resources, consists of 3,500 people and is part of an agency that comprises many departments. The bigger ones are Parks and Rec, Cal Fire, and Fish and Wildlife. There are about 28 agencies and conservancies, with 25,000 people onboard.

Gardner: So in order to support all these people and all these different agencies, a common infrastructure is important, but also it sounds like you need to have differentiation and customization for their specific needs. How do you accomplish both the goal of a common infrastructure for efficiency, but also still be able to meet all your requirements for all those different people?

Morshed: When we started our consolidation effort, we decided to transform ourselves into more of an ISP-style setup. We know that most departments have their own IT shops, and there is still that trust thing. So we built the common infrastructure. We let them share the infrastructure, but they have their own security posture. We segregate all their traffic, so each department can still feel autonomous, yet we all share the infrastructure, and so we all share the savings.
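
The model Morshed describes -- shared compute, per-department security posture, segregated traffic -- can be pictured as a simple tenant map. Here is a minimal sketch in Python, with illustrative department names, VLAN IDs, and policy names, not CNRA's actual configuration:

# Hypothetical multi-tenant map: departments share compute, but each
# keeps its own network segment and security posture. All names and
# IDs below are illustrative assumptions.

TENANTS = {
    "water-resources":   {"vlan": 110, "firewall_policy": "dwr-baseline"},
    "parks-and-rec":     {"vlan": 120, "firewall_policy": "parks-baseline"},
    "fish-and-wildlife": {"vlan": 130, "firewall_policy": "wildlife-strict"},
}

def segment_for(department):
    """Shared hosts, isolated networking: return the tenant's segment."""
    tenant = TENANTS[department]
    return f"VLAN {tenant['vlan']}, policy {tenant['firewall_policy']}"

print(segment_for("parks-and-rec"))  # traffic segregated per department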

Mandate to consolidate

Hom: With that, we had a mandate to also consolidate. The State of California is really about cost savings. Each of the 25-plus organizations mostly had its own IT shop. By combining infrastructure as a service (IaaS) in our multi-tenancy data center, they're able to reap the benefits of cost savings for infrastructure, but also concentrate on each department's specific needs and applications.

Gardner: It sounds as if you have a private-cloud approach, multi-tenancy, and you're trying to get that elasticity and efficiency going. It also sounds like you're well on the way to an SDDC, but that means different things to different people. I'd like to get your take on what SDDC means.

Also, are you exploring software-defined storage, software-defined networking, or more of the workload support for the servers? How does it shape up in your mind and in your particular implementation?

Morshed: When the software-defined stuff came out, for me, one of the big things was disaster recovery (DR). If I could stretch my data center into another facility, DR becomes a non-issue, because those workloads can shift between sites without any trouble with automation.

That was the next piece for us -- automation. We realize that we’re part-way there, but to get all the way there, we need to do fuller automation. This means that we need to quit tinkering with the network and storage every time we want to do something new.

Those were big driving factors for going to SDDC. To us, it means that we're abstracting the hardware. The hardware is there; it's just running. We're working and tweaking everything at the software layer -- so we can be a lot more agile.

Gardner: Michael, SDDC, how do you define that and to what degree are you into that journey?

Separating the physical

Hom: For the SDDC we really wanted to provide a logical data center to each of our organizations. We wanted to separate the physical, which allows our folks to support more of a logical infrastructure, where they still have autonomy, but the physical layer is basically one and the same.

Today, from a functional point of view, they get what they've had before, but without the overhead of physical support. We've used VMware vSphere and the vCloud Suite to provide that software-defined compute. Right now, we're embarking on software-defined networking, using VMware NSX and third-party vendors to support that.

We’re looking to use automation soon to help us decrease overhead, and pass those savings on to each of the organizations.

Gardner: Given that you have a common set of infrastructure and, I imagine, lot of common data, are you well into the VMware Virtual SAN adoption for software-defined storage as well? How does that shape up?

Morshed: For Virtual SAN, we're looking at use cases right now, and we see some. We're not quite there yet, so we'll focus on software-defined networking and automation first. Virtual SAN will probably come later; we're still evaluating and determining what our needs are in that area.

Gardner: It's complicated, right? There are a lot of interdependencies. Certain things happen that ripple across others, and so it's a crawl-walk-run progression. Yet the benefits come from a whole greater than the sum of the parts, if I could use a couple of clichés. It is an interesting time in the business.

What are some of the challenges you have in terms of getting closer to a full SDDC? Are these technology, culture, process, or all of the above?

Morshed: At first it was technology, but the culture and the organizational mindset are the bigger challenge. You can find solutions and work through technology. We're IT people, and we’re used to the technology problems. For me, the greater organizational problems -- and the structure of your processes -- become the harder things, because we’re not as used to dealing with those things.

Gardner: Now, Michael, part of the adoption of a complex, long-term journey in IT is to show results early and get buy-in. Are there instances where you look at that? Maybe it's the DR, where you have a sense of better dependability on your resources? Is there any way to describe what you've done early on that has led to a greater emphasis on adopting more aggressively?

Better service levels

Hom: Definitely. One of the key things with early wins is providing a better service level for provisioning, and that’s something that everybody has been struggling with. With the cloud infrastructure, we've been able to provision within days, if not a day. And that typically beats most of the service tools that each of the organizations have had. So that was an early win.

It's things like that, where we decrease overhead and make IT more accessible for the business. That makes it a win and starts the ball rolling on other features, such as DR and greater capacity, things that would be tough for each individual organization to do on its own.

Gardner: Of course, when you do well in a state agency, it's apparent, right? You're providing services to the public, and California is the most populous US state. You have a large bureaucracy, probably the size of what many countries have. Is there something different about the public sector in terms of accountability or response? How do your requirements as a public organization differ from those of private enterprises?

Morshed: We do have a difference. Our procurement is much harder. Getting people is much harder. We live within a lot of constraints that the private sector doesn’t realize. We have a hard time adjusting our work levels. Can we get more people now? No. It takes forever to get more people, if you can ever get them.

Gardner: So it’s doing more with less over and over again.

Morshed: Constantly doing more with less. Part of this virtualization is survivability. We would never be able to survive or give our business the tools they need to do their business without this. We would just be a sinking ship.

Gardner: So the whole philosophy, Michael, of SDDC and virtualization -- doing more with less, automating, boiling out the manual processes, and going to a more real-time, responsive, technology-driven infrastructure -- makes total sense for your organization?

Hom: Definitely. To go with what Tony says, we really don't have much overhead when we need to respond to future projects. When there's an uptick in activity, there's no way to bring on more resources. So we need to build that into our infrastructure, to allow for that dynamic bandwidth at the personnel level.

Gardner: We're here, of course, at VMworld 2014. There's a lot of news going on. Anything in particular piquing your interest, perhaps the OpenStack support or the EVO hyper-converged infrastructure? What is now on your agenda, after hearing the news, to reach those goals as you described them?

EVO looks pretty nice

Morshed: There are a couple of things. EVO looks pretty nice. I was out on the floor looking at it yesterday and talking with the CIO, and I see it as something we might be able to use for some of our outlying offices, where we have around 100 to 150 people. We can drop something like that in, put virtual desktop infrastructure (VDI) on it, and deliver VDI services to them locally, so they don't have to worry about that traffic going over the wide area network (WAN).

The other piece is the recent acquisition of CloudVolumes, and looking at how we can use that to leverage our VDI structure. We're using another product in that space right now, but CloudVolumes, being a VMware product, is more interesting, because we know the chances of all the software being upgraded and updated at the same time, with interoperability, are greater if it's a VMware product.

For us, it's been a real struggle to make sure that all the products we use interact, and that when there's an upgrade, everything upgrades at the same time. To me, those are the two biggest things that I'm getting out of the announcements.

Gardner: Right, and it's like building a virtuous cycle of adoption benefits, because when you build the SDDC on virtualization, that provides a cloud benefit. Then you can start realizing those end-user computing benefits like VDI. So there's really this snowball effect.

Is that something that you’ve been able to demonstrate? Do you have any metrics of success that you can point to?

Morshed: We do have some tangible benefits. We have reduced our CAPEX by somewhere around 40 percent and our OPEX by around 32 percent. I don't have the exact numbers, but we have deployed VDI in the Department of Water Resources and have already virtualized about 600 to 800 desktops. Not only is it helping us save costs there; it's also used as a remote-access strategy and as a way to help protect our server infrastructure by using VDI for admins.

So there are those tangible things that you can reach out and measure and those intangible things, where it’s allowing us to do something easier and more flexible. That, for me, is the bigger win. The business could come up with more dollars, but to be able to be more agile and more flexible is where it really pays off.

Gardner: So we get productivity, we've got DR, which reduces your risk, and we've got some hard savings and economics. It's pretty compelling. Michael, any thoughts about how those fit together, and which ones are more important to you?

More flexible

Hom: Definitely. This allows us to be more flexible, as Tony said, and there are some things we're trying to do that we would never have imagined without an SDDC. It brings increased security, greater capacity, and more capabilities to our business.

Gardner: How about VMware specifically? Is there some differentiator in terms of how they produce these products that has allowed you to follow this journey? Is this more of a partnership than a procurement relationship? It sounds like the track that VMware takes in its strategy very much aligns with yours.

Morshed: It’s very much a partnership. In fact, we basically only want to work with business partners. We don’t want to work with vendors, because we don’t need someone to sell us something and walk away. VMware has been hand-in-hand with us for this whole journey.

When we look at other products for the mix, we look for deep partners with VMware, because we know virtual machines (VMs) are core. So when we look at storage partners, and when we look at networking partners, we make sure that those partners are partners of VMware.

One of the things that we find is the interoperability challenge once everything has been virtualized. Everything has to connect, and it's not a single stack. So if one thing gets upgraded, we need to make sure that everything across the stack can accept that upgrade. Otherwise, we lose the ability to take advantage of the upgrade until everybody else catches up.

Early on, we were in that position and we’re doing everything we can to remove ourselves from that position.

Gardner: Michael, any thoughts on the nature of the VMware relationship that you could point to, in terms of an approach that others might want to consider?

Hom: Definitely. We consider VMware a strategic partner. A couple of things illustrate that. We've been involved with VMware's Excellware and Velocity programs, and that's been two-fold. On the Velocity side, we have mocked up a fully working SDDC with NSX, with virtualized automation, operations, and business as a stack.

Gardner: One of the things we've heard here at VMworld is be bold, be brave, be a little bit aggressive. Go out there and do these things. Any thoughts for other organizations that are just dipping their toes into the water? Is it higher risk than reward to be bold and brave in getting in early? Or is it perhaps something that allows you to be a differentiator and be better in your own environment, whether in the public or private sector?

Set in stone

Morshed: The first thing is always to question what you've got that's set in stone, because most of it is not set in stone. We've all heard a lot of things that you supposedly can't do. You can't virtualize Oracle -- but you can. You can't do this, you can't do that, you can't run the network on top of the storage. That's all stuff that you actually can do.

You have to really look at it, peel it back, make sure that "you can't" is an actual thing, and then figure out how to get around it. The way I see it is that, as the world turns, things morph, and if you don’t move into this virtualization space, you're going to be left behind. You're going to be the guy making buggy whips. There are no buggy whips running around. There’s no use for them.

We’re all being asked to do so much more with the same resources or fewer resources. We're all being pushed to keep up with how the demand is going out there. Technology is just jumping, and this is the only way on the infrastructure side to keep up with that.

Gardner: Michael, any other thoughts in terms of 20/20 hindsight on your experience and why being aggressive and being bold has paid off?

Hom: Virtualization is definitely up and running, at least in state organizations. It used to be something that we might do, or might use as a toolset, but from looking at VMworld this week, virtualization is the industry standard.

If you don't take it on, you really won't be able to respond to business needs. What happens is that when the official IT organization becomes obsolete, ad-hoc IT organizations spring up and become the norm. If you want to be relevant, you have to use every toolset you can to meet the business needs.

Gardner: Very good. I'm afraid we’ll have to leave it there. We've been learning from the California Natural Resources Agency in Sacramento how they’ve been embarking and benefiting from a SDDC strategy. I'd like to thank our guests, Tony Morshed, Chief Technology Officer for the California Natural Resources Data Center in the California Department of Water Resources. Thank you so much, Tony.

Morshed: No problem. It’s been a pleasure.

Gardner: And we've also been joined by Michael Hom, Data Center Chief in the IT infrastructure services branch for the California Department of Water Resources. Thanks so much, Michael.

Hom: Thank you.

Gardner: And also a big thank you to our audience for joining this special podcast series coming to you directly from the recent 2014 VMworld Conference in San Francisco. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect IT strategy discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect discussion on how a large state agency harnesses broad virtualization to do more with less in IT while remaining agile and efficient. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Monday, September 22, 2014

The Open Group Panel: Internet of Things Poses Opportunities and Obstacles

Transcript of The Open Group podcast, in conjunction with BriefingsDirect, exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with the recent The Open Group Boston 2014 conference, held July 21 in Boston.

I'm Dana Gardner, principal analyst at Interarbor Solutions, and I'll be your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow.

We're going to now specifically delve into the Internet of Things with a panel of experts. The conference has examined how Open Platform 3.0 leverages the combined impacts of cloud, big data, mobile, and social. But to each of these now we can add a new cresting wave of complexity and scale as we consider the rapid explosion of new devices, sensors, and myriad endpoints that will be connected using internet protocols, standards and architectural frameworks.

This means more data, more cloud connectivity and management, and an additional tier of “things” that are going to be part of the mobile edge -- and extending that mobile edge ever deeper into even our own bodies.

When we think about inputs to these social networks, those are going to increase as well. Not only will people be tweeting; your devices may very well tweet, too, using social networks to communicate. Perhaps your toaster will soon be sending you a tweet about your English muffins being ready each morning.

The Internet of Things is more than the "things" -- it means a higher order of software platforms. For example, if we are going to operate data centers with new dexterity thanks to software-defined networking (SDN) and storage (SDS) -- indeed the entire data center being software-defined (SDDC) -- then why not a software-defined automobile, or factory floor, or hospital operating room -- or even a software-defined city block or neighborhood?

And so how does this all actually work? Does it easily spin out of control? Or does it remain under proper management and governance? Do we have unknown unknowns about what to expect with this new level of complexity, scale, and volume of input devices?

Will architectures arise that support the numbers involved, interoperability, and provide governance for the Internet of Things -- rather than just letting each type of device do its own thing?

To help answer some of these questions, The Open Group assembled a distinguished panel to explore the practical implications and limits of the Internet of Things. So please join me in welcoming Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC, and a primary representative to the Industrial Internet Consortium; Penelope Gordon, Emerging Technology Strategist at 1Plug Corporation; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM; and Dave Lounsbury, Chief Technical Officer at The Open Group.

Jean-Francois, we have heard about this notion of "cities as platforms," and I think the public sector might offer us some opportunity to look at what is going to happen with the Internet of Things, and then extrapolate from that to understand what might happen in the private sector.

Hypothetically, the public sector has a lot to gain. It doesn't have to go through the same confines of a commercial market development, profit motive, and that sort of thing. Tell us a little bit about what the opportunity is in the public sector for smart cities.

Jean-Francois Barsoum: It's immense. The first thing I want to do is link to something that Marshall Van Alstyne (Professor at Boston University and Researcher at MIT) had talked about, because I was thinking about his way of approaching platforms and thinking about how cities represent an example of that.

You don't have customers; you have citizens. Cities are starting to see themselves as platforms, as ways to communicate with their customers, their citizens, to get information from them and to communicate back to them. But the complexity with cities is that, as good a platform as they could be, they're relatively rigid. They're legislated into existence, and what they're responsible for is written into law. It's not really a market.

Chris Harding (Forum Director of The Open Group Open Platform 3.0) earlier mentioned, for example, water and traffic management. Cities could benefit greatly by managing traffic a lot better.

Part of the issue is that you might have a state or provincial government that looks after highways. You might have the central part of the city that looks after arterial networks. You might have a borough that would look after residential streets, and these different platforms end up not talking to each other.

They gather their own data. They put in their own widgets to collect information that concerns them, but do not necessarily share with their neighbor. One of the conditions that Marshall said would favor the emergence of a platform had to do with how much overlap there would be in your constituents and your customers. In this case, there's perfect overlap. It's the same citizen, but they have to carry an Android and an iPhone, despite the fact it is not the best way of dealing with the situation.

The complexities are proportional to the amount of benefit you could get if you could solve them.

Gardner: So more interoperability issues?

Barsoum: Yes.

More hurdles

Gardner: More hurdles, and when you say commensurate, you're saying that the opportunity is huge, but the hurdles are huge and we're not quite sure how this is going to unfold.

Barsoum: That's right.

Gardner: Let's go to an area where the opportunity outstrips the challenge: manufacturing. Said, what is the opportunity for the software-defined factory floor to realize huge efficiencies and apply algorithmic benefits to how management occurs across the domains of supply chain, distribution, and logistics? It seems to me that this is a no-brainer. It's such an opportunity that the solution must be found.

Tabet: When it comes to manufacturing, the opportunities are probably much bigger. It's where a lot of progress has already been made, and work is still going on. There are two ways to look at it.

One is the internal side of it, where you have improvements of business processes. For example, similar to what Jean-Francois said, in a lot of the larger companies that have factories all around the world, you'll see such improvements made factory by factory. You still have silos at that level.

Now with this new technology, with this connectedness, those improvements are going to be made across factories, and there's a learning aspect to it in terms of trying to manage that data. In fact, they do a better job. We still have to deal with interoperability, of course, and additional issues that could be jurisdictional, etc.

However, there is that learning that allows them to improve their processes across factories. Maintenance is one of them, as well as creating new products, and connecting better with their customers. We can see a lot of examples in the marketplace. I won't mention names, but there are lots of them out there with the large manufacturers.

Gardner: We've had just-in-time manufacturing and lean processes for quite some time, trying to compress the supply chain and distribution networks, but these haven't necessarily been done through public networks, the internet, or standardized approaches.

But if we're to benefit, we're going to need to be able to be platform companies, not just product companies. How do you go from being a proprietary set of manufacturing protocols and approaches to this wider, standardized interoperability architecture?

Tabet: That's a very good question, because now we're talking about that connection to the customer. With the airline and the jet engine manufacturer, for example, when the plane lands and there has been some monitoring of the activity during the whole flight, at that moment, they'll get that data made available. There could be improvements and maybe solutions available as soon as the plane lands.

Interoperability

That requires interoperability. It requires Platform 3.0 for example. If you don't have open platforms, then you'll deal with the same hurdles in terms of proprietary technologies and integration in a silo-based manner.

Gardner: Penelope, you've been writing about the obstacles to decision-making that might become apparent as big data becomes more prolific and people try to capture all the data about all the processes and analyze it. That's a little bit of a departure from the way we've made decisions in organizations, public and private, in the past.

Of course, one of the bigger tenets of the Internet of Things is all this great data that will be available to us from so many different points. Is there a conundrum of some sort? Is there an unknown obstacle for how we, as organizations and individuals, can deal with that data? Is this going to be chaos, or is it going to deliver all the promises many organizations have led us to believe about big data and the Internet of Things?

Penelope Gordon: It's something that has just been accelerated. This is not a new problem in terms of the decision-making styles not matching the inputs that are being provided into the decision-making process.

Former US President Bill Clinton was known for delaying making decisions. He's a head-type decision-maker, and so he would always want more data and more data. That just gets into a never-ending loop, because as people collect data for him, there is always more data that you can collect, particularly on the quantitative side. Whereas, if it is distilled down and presented very succinctly and then balanced with the qualitative, that allows intuition to come to the fore, and you can make optimal decisions in that fashion.

Conversely, if you have someone who is a heart-type or gut-type decision-maker and you present them with a lot of data, their first response is to ignore the data. It's just too much for them to take in. Then you end up completely going with whatever you feel is correct or whatever you have that instinct that it's the correct decision. If you're talking about strategic decisions, where you're making a decision that's going to influence your direction five years down the road, that could be a very wrong decision to make, a very expensive decision, and as you said, it could be chaos.

It brings to mind Dr. Seuss's The Cat in the Hat, with Thing One and Thing Two. So, as we talk about the Internet of Things, we need to keep in mind that we need some sort of structure to tie this back to, and an understanding of what we're trying to do with these things.

Gardner: Openness is important, and governance is essential. Then, we can start moving toward higher-order business platform benefits. But, so far, our panel has been a little bit cynical. We've heard that the opportunity and the challenges are commensurate in the public sector and that in manufacturing we're moving into a whole new area of interoperability, when we think about reaching out to customers and having a boundary that is managed between internal processes and external communications.

And we've heard that an overload of data could become a very serious problem and that we might not get benefits from big data through the Internet of Things, but perhaps even stumble and have less quality of decisions.

So, Dave Lounsbury of The Open Group, will the same level of standardization work? Do we need a new type of standards approach, a different type of framework, or is this a natural continuation of the path and course of what we have done in the past?

Different level

Dave Lounsbury: We need to look at the problem at a different level than we institutionally think about an interoperability problem. The Internet of Things is riding two very powerful waves. One is Moore's Law: these sensors, actuators, and network components get smaller and smaller. Now we can put Ethernet in a light switch, a tag, or something like that.

The other is Metcalfe's Law, which says that the value of all this connectivity goes up with the square of the number of connected points, and that applies both to the connection of the things and, more importantly, the connection of the data.
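
That quadratic growth is just pairwise counting: n connected points admit n(n-1)/2 distinct links. Stated compactly, with V(n) the value of a network of n points:

V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}

Moore's Law keeps making endpoints cheaper, while Metcalfe's Law makes each added endpoint, and each added data source, worth more than the last.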

The trouble is, as we have said, that there's so much data here. The question is how do you manage it, and how do you keep control over it, so that you actually get business value from it. That's going to require this new concept of a platform, not only to connect the data, but to aggregate it, correlate it, as you said, and present it in ways that people can make decisions however they want.

Also, because of the raw volume, we have to start thinking about machine agency. We have to think about the system actually making the routine decisions or giving advice to the humans who are actually doing it. Those are important parts of the solution beyond just a simple "How do we connect all the stuff together?"
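
A minimal sketch of that kind of machine agency, assuming a hypothetical metric, thresholds, and actions: routine cases are decided automatically, while ambiguous ones become advice to a human operator.

# Hypothetical illustration of machine agency: the system makes routine
# decisions itself and escalates ambiguous cases as advice to a human.
# The metric, thresholds, and actions are illustrative assumptions.

def handle_reading(temperature_c):
    if temperature_c < 50:
        return "auto: normal, no action"      # routine, decided by the machine
    if temperature_c > 90:
        return "auto: shut down equipment"    # routine, decided by the machine
    # In-between readings are not clear-cut: advise the human instead.
    return "escalate: advise operator to inspect"

for reading in (35, 72, 95):
    print(reading, "->", handle_reading(reading))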

Gardner: We might need a higher order of intelligence, now that we've reached the edge of what we can do with our conventional approaches to data, information, and process.

Thinking about where this works best first, in order to understand where it might end up later, I was intrigued again this morning by Professor Van Alstyne. He mentioned that in healthcare we should expect major battles; there is a turf element to this. The organization, entity, or even commercial corporation that controls and manages certain types of information, and access to that information, might gain some very serious platform benefits.
The question is how do you manage it and how do you keep control over it so that you actually get business value from it.

The openness element is something to look at now, and I'll come back to the public sector. Is there a degree of openness that we could legislate or regulate, with enough control to prevent the next generation of lock-in -- lock-in not to a platform, but to access to data, information, and endpoints? Where in the public sector might we look for a leadership position to establish the needed openness, and not just interoperability?

Barsoum: I'm not even sure where to start answering that question. To take healthcare as an example, I certainly didn't write the bible on healthcare IT systems, and if someone did write it, they really need to publish it quickly.

We have a single-payer system in Canada, and you would think that would be relatively easy to manage. There is one entity that manages paying the doctors, and everybody gets covered the same way. Therefore, the data should be easily shared among all the players and it should be easy for you to go from your doctor, to your oncologist, to whomever, and maybe to your pharmacy, so that everybody has access to this same information.

We don't have that and we're nowhere near having that. If I look to other areas in the public sector, areas where we're beginning to solve the problem are ones where we face a crisis, and so we need to address that crisis rapidly.

Possibility of improvement

In the transportation infrastructure, we're getting to that point where the infrastructure we have just doesn't meet the needs. There's a constraint in terms of money; we can't put much more money into the structure. Then, there are new technologies coming in. Chris talked about driverless cars earlier. They're essentially throwing a wrench into the works, or perhaps offering the possibility of improvement.

On any given piece of infrastructure, you could fit twice as many driverless cars as cars with human drivers. Given that set of circumstances, governments are going to find they have no choice but to share data in order to manage those vehicles. Are there cases where we could get ahead of a crisis in order to manage it? I certainly hope so.

Gardner: How about allowing some of the natural forces of marketplaces, behavior, groups, maybe even chaos theory, to work, where, if sufficient openness is maintained, some kind of pattern will emerge? We need to let this go through its paces, but if we erect artificial barriers, that might be thwarted, or power could flow to places we would regret later.

Barsoum: I agree. People often focus on structure: the governance doesn't work, so we should find some way to change the governance of transportation. London has done a very good job of that. They've created something called Transport for London that manages everything related to transportation. It doesn't matter if it's taxis, bicycles, pedestrians, boats, cargo trains, or whatever; they manage it.
In the transportation infrastructure, we're getting to that point where the infrastructure we have just doesn't meet the needs.

You could do that, but it requires a lot of political effort. The other way to go about it is to say, "I'm not going to mess with the structures. I'm just going to require you to open up and share all your data." You're then creating a new environment where the governance and the structures don't really matter so much anymore. Everybody shares the same data.

Gardner: Said, turning to the private-sector example of manufacturing, you still want to have a global fabric of manufacturing capabilities. This requires many partners to work in concert, but with a vast new amount of data and new potential for efficiency.

How do you expect that openness will emerge in the manufacturing sector? How will interoperability play when you don't have to wait for legislation, but you do need to have cooperation and openness nonetheless?

Tabet: It comes back to the question you asked Dave about standards. I'll give you an example. In the automotive industry, there have been activities in Europe around specific standards for communication.

The Europeans came to the US and started to have discussions, and the Japanese have shown interest, as well as the Chinese. Because there is a common business interest in creating these new models, these challenges have to be dealt with together.

Managing complexity

When we talk about the amounts of data, what we now call big data, and what we're going to see in about five years or so, you can't even imagine it. How do we manage that complexity, which is multidimensional? We've talked about this sort of platform, and beyond that, the capability and the data that will be there. From that point of view, openness is the only way to go.

There's no way we can stay away from it and still be able to work in silos in that new environment. There are lots of things we take for granted today. I invite some of you to go back and read articles from 10 years ago that tried to predict the future of technology in the 21st century. Look at your smartphones. Adoption is there, because the business models are there, and we can see that progress moving forward.

Collaboration is a must, because the problem is multidimensional. It's not just manufacturing, whether jet engines, cars, or agriculture, where you have very specific areas. Manufacturers really have to work with their customers, and with the customers of their customers.
Adoption is there, because the business models are there, and we can see that progress moving forward.

Gardner: Dave, I have a question for both you and Penelope. I've seen instances where there has been a cooperative endeavor for accessing data and then making it available as a service, whether through an API, a data set, access to a data library, or even a set of analytics applications. The Ocean Observatories Initiative is one example: it has created a sensor network across the oceans and makes the resulting data available.

Do you expect to see a level of intermediary organizations that sit between the sensors and the consumers, or even the controllers of the processes? Is there a model inherent in that that we might look to -- something like a cooperative data structure that in some ways creates structure and governance, but also allows for freedom? It's the sort of entity that we don't yet have in many organizations or ecosystems, and that needs to evolve.

Lounsbury: We're already seeing that in the marketplace. If you look at the commercial and social Internet of Things area, we're starting to see intermediaries or brokers cropping up that will connect the silo of my Android ecosystem to the ecosystem of package tracking, or something like that. There are dozens and dozens of these already.

In fact, you now see APIs even into silos of what you might consider proprietary systems, and what people are doing is building a layer on top of those APIs that intermediates the data.

This is happening on a point-to-point basis now, but you can easily see the path forward. It's going to expand to large amounts of data that people will share through a third party. I can see this being a whole new emerging market, much as what Google did for search. You could see that happening for the Internet of Things.
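
As a purely hypothetical sketch of that broker pattern, the snippet below aggregates two imaginary silo APIs behind a single normalized feed; the endpoint URLs and field names are invented for illustration and do not refer to any real service:

```python
import json
from urllib.request import urlopen

# Hypothetical silo endpoints -- invented for illustration only.
SILOS = {
    "home_sensors": "https://home.example.com/api/v1/readings",
    "package_tracking": "https://ship.example.com/api/v1/events",
}

def fetch(url):
    """Fetch a silo's raw JSON payload."""
    with urlopen(url) as resp:
        return json.load(resp)

def normalize(source, record):
    """Map each silo's own fields onto one common event shape."""
    return {
        "source": source,
        "timestamp": record.get("ts") or record.get("event_time"),
        "payload": record,
    }

def broker_feed():
    """Yield a single merged, normalized event stream across the silos."""
    for source, url in SILOS.items():
        for record in fetch(url):
            yield normalize(source, record)

# Usage: for event in broker_feed(): print(event)
```

The design point is that each silo keeps its own API; the broker's only job is to fetch, normalize, and merge.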

Gardner: Penelope, do you have any thoughts about how that would work? Is there a mutually assured benefit that would make people want to participate and cooperate with that third entity? Should there be governance and rules about good practices and best practices for that intermediary organization? Any thoughts about how data can be managed in this sort of hierarchical model?

Nothing new

Gordon: First, I'll contradict it a little bit. To me, a lot of this is nothing new, particularly coming from a marketing-strategy perspective with business intelligence (BI). Having various types of intermediaries, who not only collect the data, but then do what we call data hygiene, synthesis, and even correlation of the data, has been around for a long time.

It was interesting, when I looked at a recent listing of big-data companies, that some notable companies were excluded from that list -- companies like Nielsen. Nielsen has been collecting data for a long time. Harte-Hanks is another that collects a tremendous amount of information and sells it to companies.

That leads into another part of it. We're seeing an increasing amount of opportunity that involves taking public sources of data and then providing synthesis on top of them. What remains to be seen is how much of that output will be provided for "free," as opposed to for a fee. We're going to see a lot more companies figuring out creative ways of extracting more value out of data and then charging directly for it, rather than using it as an indirect way of generating traffic.

Gardner: We've seen examples of how this has been put in place. Does it scale, and will the governance, or lack of governance, in the market today sustain us through the transition into Platform 3.0 and the Internet of Things?
Having standards is going to become increasingly important, unless we really address a lot of the data illiteracy that we have.

Gordon: That gets into the "you get what you pay for" aspect. If you're using a free source of data, you have no guarantee that it comes from authoritative sources. Often, what we get now is something somebody put in a blog post, which then gets referenced elsewhere, but with nothing to go back to. It's a shaky supply chain for data.

You need to think about the data supply, and that is where the governance comes in. Having standards is going to become increasingly important, unless we really address a lot of the data illiteracy that we have. A lot of people do not understand how to analyze data.

One aspect of that is that a lot of people assume we have to do full-population surveys, as opposed to using representative sampling, which can make data collection both more accurate and much more cost-effective. That's just one example, and we do need a lot more in governance and standards.
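
As a minimal illustration of that point, the sketch below uses synthetic data to estimate a population mean from a one percent random sample and compares it with the full census figure:

```python
import random

random.seed(42)

# A synthetic "population" of one million sensor readings.
population = [random.gauss(mu=20.0, sigma=5.0) for _ in range(1_000_000)]

# Full-population census: the exact answer, but expensive to collect.
census_mean = sum(population) / len(population)

# A 1% simple random sample: far cheaper, and close to the same answer.
sample = random.sample(population, k=10_000)
sample_mean = sum(sample) / len(sample)

print(f"census mean: {census_mean:.3f}")
print(f"sample mean: {sample_mean:.3f}")  # typically within ~0.1 of census
```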

Gardner: What would you like to see changed most in order for the benefits and rewards of the Internet of Things to develop and to overcome the drawbacks, the risks, and the downside? What, in your opinion, needs to happen to make this a positive, rapid outcome? Let's start with you, Jean-Francois.

Barsoum: There are things that I've seen cities start to do now. There are a couple of examples: Philadelphia is one, and Barcelona does this too. Rather than issue the typical request for proposal (RFP), where they say, "This is the kind of solution we're looking for, and here are our parameters. Can you tell us how much it is going to cost to build?" they come to you with the problem: "Here is the problem I want to fix. Here are my priorities. You're at liberty to decide how best to fix the problem, but tell us how much that would cost."

If you do that, and you combine it with access to the public data that is available -- if the public sector opens up its data -- you end up with a very powerful combination that liberates a lot of creativity. You can create a lot of new business models. We need to see much more of that. That's where I would start.

More education

Tabet: I agree with Jean-Francois on that. What I'd like to add is that we need to push things a little further. We need more education, to your point earlier, around the data and the capabilities.

We need platforms that we can leverage a little further, with analytics, with machine learning, and with all of the capabilities that are out there. We also have to remember that when we talk about the Internet of Things, it is things talking to each other.

So it is not only human-machine communication. Machine-to-machine automation will go further than that, and we need more innovation and more work in this area, particularly more activity from governments. We've seen some, but it's still a little frail from that point of view right now.

Gardner: Dave Lounsbury, thoughts about what needs to happen to keep this on track?
Thank you for mentioning the machine-to-machine part, because there are plenty of projections that show that it's going to be the dominant form of Internet communication, probably within the next four years.

Lounsbury: We've touched on a lot of them already. Thank you for mentioning the machine-to-machine part, because there are plenty of projections that show that it's going to be the dominant form of Internet communication, probably within the next four years.

So we need to start thinking about that and move beyond our traditional models of humans talking through interfaces to a set of services. We need to identify the building blocks of capability required to manage not only the information flow and the skilled people who will produce it, but also the machine-to-machine interactions.

Gordon: I'd like to see less focus on data management and more focus on what the data is helping us to do. Focusing on machine-to-machine and the devices is great, but the focus shouldn't be on the devices or the machines themselves. It should be on what they can accomplish by communicating, on what you can accomplish with the devices, and then reverse-engineer from that.

Gardner: Let's go to some questions from the audience. The first one asks about the higher order of intelligence we mentioned earlier. It could be artificial intelligence, perhaps, but they ask whether that's really the issue. Is the nature of the data substantially different, or are we just creating more of the same, so that it's a storage, plumbing, and processing problem? What, if anything, is lacking in our current analytics capabilities that holds us back from exploiting the Internet of Things?

Gordon: I've definitely seen that. It has a lot to do with not setting your decision objectives and decision criteria ahead of time, so that you end up collecting a whole bunch of data and the important data gets lost in the mix. There's a term for it: "data smog."

Most important

The solution is to figure out, before you go collecting data, what data is most important to you. If you can't directly collect certain kinds of data that are important to you, think about how to collect them indirectly and how to find proxies. But don't try to go out and collect all the data. Narrow in on what is going to be most important and most representative of what you're trying to accomplish.

Gardner: Does anyone want to add to this idea of understanding what current analytics capabilities are lacking, if we have to adopt and absorb the Internet of Things?

Barsoum: There is one element around projection into the future. We've been very good at analyzing historical information to understand what's been happening in the past. We need to become better at projecting into the future, and obviously we've been doing that for some time already.

But so many variables are changing. To take the driverless car as an example, we've been collecting data from loop detectors, radar detectors, and even Bluetooth antennas to understand how traffic moves in a city. Now we need to think harder about what that means, and about how the city of tomorrow is going to work. That requires more thinking about the data, a little like what Penelope mentioned: how we interpret it, and how we push that out into the future.
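
As a minimal illustration of that kind of historical projection, the sketch below fits a linear trend to invented traffic counts and extrapolates it forward; it is exactly the sort of model that breaks when something like the driverless car changes the underlying system:

```python
# Naive historical projection: fit a linear trend to past traffic counts
# and extrapolate. The numbers are invented; a real forecast would have
# to account for structural change such as driverless cars.
years = [2010, 2011, 2012, 2013, 2014]
counts = [10.0, 10.4, 10.9, 11.3, 11.8]  # millions of vehicle trips

n = len(years)
mean_x = sum(years) / n
mean_y = sum(counts) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts))
    / sum((x - mean_x) ** 2 for x in years)
)
intercept = mean_y - slope * mean_x

print(f"projected 2020 trips: {slope * 2020 + intercept:.1f}M")
```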

Lounsbury: I have to agree with both. It's not just about statistics. We can use historical data, and it helps with a lot of things, but one of the major issues we still deal with today is semantics, the meaning of the data. This goes back to your point, Penelope, around the relevance and the context of that information -- how you get what you need when you need it, so you can make the right decisions.
As soon as you talk about interoperability in the health sector, people start wondering where their data is going to go.

Gardner: Our last question from the audience goes back to Jean-Francois’s comments about the Canadian healthcare system, though I imagine it applies to almost any healthcare system around the world. It asks why interoperability is so difficult to achieve when we have the power of the purse, that is, the market. We also supposedly have the power of legislation and regulation. You would think that, between one or the other or both, interoperability would happen, because the stakes are so high. What's holding it up?

Barsoum: There are a couple of reasons. One, in the particular case of healthcare, is privacy, but that is an issue you could see arising elsewhere. As soon as you talk about interoperability in the health sector, people start wondering where their data is going to go, how accessible it is going to be, and to whom.

You need to put a certain number of controls on top of that. What is happening in parallel is that you have people who own some data, who believe they derive some power from owning it, and who fear they will lose that power if they share it. That can come from doctors, hospitals, anywhere.

So there's a certain amount of change management you have to get through. Everybody has to focus on the welfare of the patient and understand that it has to be the priority. But you also have to consider the welfare of the different stakeholders in the system and make sure you don't forget about them, because if you do, they will find ways to slow you down.

Use of an ecosystem

Lounsbury: To me, that's a perfect example of what Marshall Van Alstyne talked about this morning. It's the change from a focus on product to a focus on an ecosystem. Healthcare has traditionally been very focused on a doctor or caregiver providing a product to a patient. Now, we're starting to see that the only way we're able to do this is through the use of an ecosystem.

That's a hard transition. It's a business-model transition. I'll put in a plug here for The Open Group Healthcare vertical, which is looking at that from an architecture perspective. I see that our Forum Director, Jason Lee, is over here, so if you want to explore that further, please see him.

Gardner: I'm afraid we will have to leave it there. We've been discussing the practical implications of the Internet of Things and how it is now set to add a new dimension to Open Platform 3.0 and Boundaryless Information Flow.
It's the change from a focus on product to a focus on an ecosystem.

We've heard how new thinking about interoperability will be needed to extract the value and orchestrate order out of the chaos, given such vast new scales of inputs and whole new categories of information.

So with that, a big thank you to our guests: Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC; Penelope Gordon, Emerging Technology Strategist at 1Plug Corp.; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM; and Dave Lounsbury, Chief Technology Officer at The Open Group.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow at The Open Group Conference, recently held in Boston. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect podcast exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2014. All rights reserved.

You may also be interested in: