
Tuesday, August 02, 2016

Infrastructure as Destiny — How Purdue Builds a Support Fabric for Big Data-Enabled IoT

Transcript of a discussion on how Purdue University provides IT as a service, using big data and IoT technologies, to support such worthy goals as student retention analysis.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation -- and how it's making an impact on people's lives.

Our next IT infrastructure thought leadership case study explores how Purdue University has created a strategic IT environment to support dynamic workload requirements.

We'll now hear how Purdue extended a research and development support infrastructure to provide a common and increasingly software-defined approach to support myriad types of demands by end users and departments.

To describe how a public university is moving toward IT as a service, please join me in welcoming Gerry McCartney, Chief Information Officer at Purdue University in Indiana. Welcome, Gerry.
Gerry McCartney: Thank you, Dana.

Gardner: When you're in the business of IT infrastructure, you almost need to predict the future. How do you close the gap between what you think will be demanded of your infrastructure in a few years and what you need to put in place now?

McCartney: A lot of the job that we do is based on trust and people believing that we can be responsive to situations. The most effective way to show that right now is to respond to people’s issues today. If you can do that effectively, then you can present a case that you can take a forward-looking perspective and satisfy what you and they anticipate to be their needs.

I don’t think you can make forward-looking statements credibly, especially to a somewhat cynical group of users, if you're not able to satisfy today’s needs. We refer to that as operational credibility. I don’t like the term operational excellence, but are you credible in what you provide? Do people believe you when you speak?

Gardner: We hear an awful lot about digital disruption in other industries. We see big examples of it in taxi cabs, for example, or hospitality. Is there digital disruption going on at university campuses as well, and how would you describe that?

McCartney: You can think of a university as consisting of three main lines of business. Two of them are our core activities: teaching and educating students, and producing new knowledge -- doing research. The third is the business of running that business, and how you do that. A very large infrastructure has been built up around that third leg, for a variety of reasons.

But if we look at the first two, research in particular, which is where we started, this concept of a third leg of science has been around for some time now. It used to be just experimentation and theory creation. You create a theory, then you do an experiment with some test tubes or something like that, or grow a crop in the field. Then you refine your theory, and you continue in that kind of dyadic mode, going back and forth.

Third leg of science

That was all right until we wanted to crash lorries into walls or to fly a probe into the sun. You don’t get to do that a thousand times, because you can’t afford it, or it’s too big or too small. Simulation has now become what we refer to as the third leg of science.

Slightly more than 35 percent of our actual research now uses high-performance computing (HPC) in some key parts of it to produce results, then shape the theory formulation, and the actual experimentation, which obviously still goes on.

Around teaching, we've seen for-profit universities, and more recently we've seen massive open online courses (MOOCs). There's a strong sense that the current mode of instructional delivery cannot stay the same as it has been for the last several hundred years, and that it's ripe for reform.

Indeed, my boss at Purdue, Mitch Daniels, would be a clear and vibrant voice in that debate himself. To go back to my earlier comments, our job there is to be able to provide credible alternatives, credible solutions to ideas as they emerge. We still haven’t figured that out collectively as an industry, but that’s something that is in the forefront of a lot of peoples’ minds.

Gardner: Suffice to say that information technology will play a major role in that, whatever it is.

McCartney: It’s hard to imagine a solution that isn’t actually completely dependent upon information technology, for at least its delivery, and maybe for more than that.

Gardner: So, high-performance computing is a bedrock for the simulations needed in modern research. Has that provided you with a good stepping stone toward more cloud-based, distributed computing-based fabric, and ultimately composable infrastructure-based environments?

McCartney: Indeed it has. I can go back maybe seven or eight years at our place, and we had close to 70 data centers on our campus. And by a data center, I mean a room with at least 200-amp supply, and at least 30 tons of additional cooling, not just a room that happens to have some computers in it. I couldn't possibly count how many of them there are now. Those stand-alone data centers are almost all gone now, thanks to our community cluster program, and the long game is that we probably won't have much hardware on our campus at some point a few years from now.

Right now, our principal requirement is around research computing, because we have to put the storage close to the compute. That's just a requirement of the technology.

In fact, many of our administrative services right now are provided by cloud providers. Our users are completely oblivious to that, but we have no on-premises solution at all. We're not doing travel, expense reimbursement and a variety of back-office things on our campus at all.

That trend is going to continue, and the forcing function there is that I can't spend enough on security to protect all the assets I have. So, rather than spend even more on security and fail to provide that completely secure environment, it's better to go to somebody who can provide that environment.

Data-compute link

Gardner: What sort of an infrastructure software environment do you think will give you that opportunity to make the right choices when you decide on-prem versus cloud, even for those intensive workloads that require a tight data and compute link?

McCartney: The worry for any CIO is that the only thing I have that's mine is my business data. Anything else -- web services, network services -- I can buy from a vendor. What nobody else can provide me are my actual accounts, if you wish to just choose a business term, but that can be research information, instructional information, or just regular bookkeeping information.

When you come into a room of a new solution, you're immediately looking at the exit door. In other words, when I have to leave, how easy, difficult, or expensive is it going to be to extract my information back from the solution?

That drives a huge part of any consideration, whether it's cloud or on-prem or whether it's proprietary or open code solution. When this product dies, the company goes bust, we lose interest in it, or whatever -- how easy, expensive, difficult is it for me to extract my business data back from that environment, because I am going to need to do that?

Gardner: What, at this juncture, meets that requirement in your mind? We've heard a lot recently about container technology, standards for open-source platforms, industry accepted norms for cloud platforms. What do you think reduces your risk at this point?

McCartney: I don't think it's there yet for me. I'm happy to have, relatively speaking, small lines of business. Also, you're dependent then on your network availability and volume. So, I'm quite happy there, because I wasn't the first, and because that's not an important narrative for us as an institution.

I'm quite happy for everybody else to knock the bumps out of the road for me, and I'll be happy to drive along it when it’s a six-lane highway. Right now it's barely paved, and I'll allow other brave souls to go there ahead of me.

Gardner: You mentioned early on in our discussion the word "cynical." Tell me a little bit about the unique requirements in a university environment where you need to provide a common, centrally managed approach to IT for cost and security and manageability, but also see to the unique concerns and requirements of individual stakeholders?

McCartney: All universities are, as they should be, full of self-consciously very smart people who are all convinced they could do a job, any particular job, better than the incumbent is doing it. Having said that, the vast bulk of them have very little interest in anything to do with infrastructure.

The way this plays out is that the central IT group provides the core base services -- the network, the wireless services, base storage, base compute, things like that -- while the things that make a difference at the edge stay at the edge.

Providing the service

In other words, if you have a unique electrical device that you want to plug in to a socket in the wall because you are in paleontology, cell biology, or organic chemistry, that's fine. You don't need your own electricity generating plants to do that. I can provide you with the electricity. You just need the cute device and you can do your business, and everybody is happy.

Whatever the IT equivalent to that is, I want to be the energy supplier. Then, you have your device at the edge that makes a difference for you. You don't have to worry about the electricity working; it's just there. I go back to that phrase "operational credibility." Are we genuinely surprised when the service doesn’t work? That’s what credibility means.

Gardner: So, to me, that really starts to mean IT as a service, not just electricity or compute or storage. It's really the function of IT. Is that in line with your thinking, and how would you best describe IT as a service?

McCartney: I think that's exactly right, Dana. There are two components to this. There's an operational component, which is, are you a credible provider of whatever the institution decides the services are that it needs, lighting, air-conditioning or the IT equivalence of that? They just work. They work at reasonable cost; it's all good. That’s the operational component.

The difference with IT, as opposed to other infrastructure components, is that IT has itself the capability to transform entire processes. That’s not true of other infrastructure things. I can take an IT process and completely reengineer something that's important to me, using advantages that the technology gives me.
For example, I might be concerned about student performance in particular programs. I can use geo-location data about their movement. I can use network activity. I can use a variety of other resources available to me to help in the guidance of those students on what’s good behavior and what’s helpful behavior to an outcome that they want. You can’t do that with an air-conditioning system.

IT has that capability to reinvent itself and reinvent entire processes. You mentioned some of them -- the way Uber has entirely disrupted the taxi industry, for example. I'd say the same thing here.

There's one part of the CIO's job that's operational: does everything work? The second part is, if we're in a transition period to a new business model, how involved are the IT leaders in that discussion? It's not just whether we can do this with IT or not; it's whether the CIO and the CIO's staff can bring an imagination to the conversation -- a different perspective than other voices in the organization. That's true of any industry or line of business.

Are you merely there as a handmaiden waiting to be told what to do, or are you an active partner in the conversation? Are you a business partner? I know that’s a phrase people like to use. There's a kind of a great divide there.

Gardner: I can see where IT is a disruptor -- and it’s also a solution to the disruptor, but that solution might further disrupt things. So, it's really an interesting period. Tell me a little bit more about this concept of student retention using new technologies -- geolocation for example -- as well as big data which has become more available at much lower cost. You might even think of analytics as a service as another component of IT as a service.

How impactful will that be on how you can manage your campus, not only for student retention, but perhaps for other aspects of a smarter intelligent campus opportunity? [See related post, Nottingham Trent University Elevates Big Data’s Role to Improving Student Retention in Higher Education.]

Personalized attention

McCartney: One of the great attractions of small educational institutions is that you get a lot of personalized attention. The constraint of a small institution is that you have very little choice. There's a small number of faculty, and they simply can’t offer the options and different concentrations that you get in a large institution.

In a large institution, you have the exact opposite problem. You have many, many choices, perhaps even too many subjects that, as a 19-year-old, you've never even heard of. Perhaps you get less individualized attention and you fill that gap by taking advice from students who went to your high school a year before, who are people in your residence hall, or people you bump into on the street. The knowledge that you acquire there is accidental, opportunistic, and not structured in any way around you as an individual, but it’s better than nothing.

There are advisors, of course, and there are people, but you don't know these individuals. You have to go and form relationships with them and they have to understand you and you have to understand them.

A big-data opportunity here is to be able to look at the students at some level of individuality. "Look, this is your past, this is what you have done, this is what you think, and this is the behavior that we are not sure you're engaging in right now. Have you thought about this path, have you thought about this kind of behavior for yourself?"

A well-established principle in student services is that the best indicator of student success is how engaged they are in the institution. There are many surrogate measures of that, like whether they participate in clubs. Do they go home every weekend, indicating they are not really engaged, that they haven’t made that transition?

Independent of your academic ability, your SAT scores, and your GPA that you got in high school, for students that engage, that behavior is highly correlated with success and good outcomes, the outcomes everybody wants.
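
As an illustration of the kind of analysis McCartney describes, here is a minimal sketch of correlating a simple engagement score with retention. The column names, weights, and figures are hypothetical, not Purdue's actual model.

```python
# Minimal sketch: correlating a simple engagement score with retention.
# Column names, weights, and values are hypothetical, not Purdue's model.
import pandas as pd

students = pd.DataFrame({
    "student_id":    [1, 2, 3, 4, 5],
    "club_events":   [12, 0, 7, 2, 15],    # e.g., swipes at club meetings
    "weekends_away": [1, 14, 3, 10, 0],    # weekends spent off campus
    "lms_logins":    [240, 35, 180, 60, 300],
    "retained":      [1, 0, 1, 0, 1],      # returned the following year
})

# Normalize each signal to 0-1, then average into one engagement score.
signals = students[["club_events", "lms_logins"]].copy()
signals["stays_on_campus"] = students["weekends_away"].max() - students["weekends_away"]
signals = (signals - signals.min()) / (signals.max() - signals.min())
students["engagement"] = signals.mean(axis=1)

# Point-biserial style check: how strongly engagement tracks retention.
print(students[["engagement", "retained"]].corr().loc["engagement", "retained"])
```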

As an institution, how do you advise or counsel them? Perhaps there's nothing here they're interested in, and that can be a problem with a small institution. It's very intimate. Everybody says, "Dana, we can see you're not having a great time. Would you like to join the chess club or the draughts club?" And you say, "Well, I was looking for the Legion of Doom Club, and you don't seem to have one here."

Well, if you go to a large institution, they probably have two of those things, but how would you find it, and how would you even know to look for it? How would you discover new things that you didn't even know you liked, because the high school you went to didn't teach applied engineering or a whole pile of other things, for that matter?

Gardner: It’s interesting when you look at it that way. The student retention equation is, in a business sense, the equivalent of user experience, personalization, engagement, share of wallet, those sorts of metrics.

We have the opportunity now, probably for the first time, to use big data, Internet of Things (IoT), and analytics to measure, predict, and intercede at a behavioral level. In this case, helping somebody become a productive member of society at a capacity they might otherwise miss -- when you only get one or two chances at it -- seems like a rather monumental opportunity.

Effective path

McCartney: You’re exactly right, Dana. I'm not sure I like the equivalence with a customer, but I get the point that you're making there. What you're trying to do is to genuinely help students discover an effective path for themselves and learn that. You can learn it randomly, and that's nice. We don't want to create this kind of railroad track. Well, you're here; you’ve got to end up over there. That’s not helpful either.

My own experience, and I don’t know about other people listening to this, is that you have remarkably little information when you're making these choices at 19 and 20. Usually, if you were getting direction, it was from somebody who had a plan for you that was more based on their experience of life, some 20 or 30 years previously than on your experience of life.

So big data can be a very effective play here: to say, "Look, here are people that look like you, and here are the choices they've made. You might find some of these choices interesting. If you do, then here's how you'd go about exploring them."
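
A minimal sketch of that "people that look like you" idea, using course-history overlap to surface options a student has not tried. The course codes, names, and similarity measure are invented for illustration; this is not Purdue's actual method.

```python
# Minimal sketch: "students who look like you" via course-history overlap.
# The course codes and student profiles are invented for illustration.
from collections import Counter

history = {
    "alice": {"CS101", "MATH201", "PHYS110", "STAT250"},
    "ben":   {"CS101", "MATH201", "CS240", "STAT250"},
    "chloe": {"BIO120", "CHEM101", "MATH201"},
    "dana":  {"CS101", "MATH201", "PHYS110"},          # the student we advise
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

me = history["dana"]
peers = sorted(
    ((jaccard(me, courses), name) for name, courses in history.items() if name != "dana"),
    reverse=True,
)

# Courses taken by the most similar students that "dana" has not tried yet.
suggestions = Counter()
for score, name in peers[:2]:
    for course in history[name] - me:
        suggestions[course] += score

print(suggestions.most_common())
```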

As you rightly say, and implicitly suggested, there is a concern right now with the high costs, especially of residential education. The most wasteful expenditure there is comes when you do a year or two only to find out you should never have been in a program -- you have no love for it, no affinity for it.

The sooner you can find that out for yourself and make a conscious choice the better. We see big data having a very active role in that because one of the great advantages of being in a large institution is that we have tens of thousands of students over many years. We know what those outcomes look like, and we know different choices that different people have made. Yes, you can be the first person to make a brand new choice, and good for you if you are.

Gardner: Well it’s an interesting way of looking at big data that has a major societal benefit in the offing. It also provides predictability and tools for people in ways they hadn’t had before. So, I think it’s very commendable.

Before we sign off, what comes next -- high-performance computing (HPC), fabric cloud, IT as a service -- is there another chapter on this journey that perhaps you have a bead on that we're not aware of?

McCartney: Oh my goodness, yes. We have an event that I started three years ago called "Dawn or Doom," which asks where technology is taking us as a forcing function -- if it is one; we're not even going to assert that definitely. Are we reaching a new nirvana, a new human paradise where we've resolved all major social and health problems, or have we created some new seventh circle of hell that's an unmitigated disaster for almost everybody, if not everybody? Is this the end of life as we know it? Do we create robots that are superior to us in every way, so that we become just some intermediate form of life that has reached the end of its cycle?

This is an annual event that's free and open. Anybody who wants to come is very welcome to attend. You can Google "Dawn or Doom Purdue." We look at it from all different perspectives. So, we have obviously engineers and computer scientists, but we have psychologists, we have labor economists. What about the future of work? If nobody has a job, is that a blessing or a curse?

Psychologists and philosophers ask: what does it mean? What does artificial intelligence mean? What does a self-conscious machine mean? Currently, of course, we have things like food security to worry about. And the Zika virus -- are we spawning a whole new set of viruses we have no cure for? Have we reached the end of the effectiveness of antibiotics or not?

These are all incredibly interesting questions I would think any intelligent person would want to at least probe around, and we've had some significant success with that.

Next event

Gardner: When is the next Dawn or Doom event, and where will it be?

McCartney: It will be in West Lafayette, Indiana, on October 3 and 4. We have a number of external high-profile keynote speakers, and then we have a passel of Purdue faculty. So you will find something to entertain even the most arcane of interests. [For more on Dawn or Doom, see the book, Dawn or Doom: The Risks and Rewards of Emerging Technologies.]

Gardner: I'm afraid we will have to leave it there. We've been learning about how Purdue University has created a strategic IT environment to support dynamic workload requirements, and we have also heard how Purdue is providing a common fabric for IT as a service to support such worthy goals as student retention analysis, using big data and IoT technologies.
So, please join me in thanking our guest. We've been delighted to be here with Gerry McCartney, Chief Information Officer at Purdue University in Indiana. Thank you, Gerry.

McCartney: Thank you, Dana.

Gardner: And I'd also like to thank our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how Purdue University provides IT as a service, using big data and IoT technologies, to support such worthy goals as student retention analysis. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Tuesday, March 08, 2016

IoT Plus Big Data Analytics Translate into Better Services Management at Auckland Transport

Transcript of a discussion on the impact and experience of using Internet of Things technologies together with big data analysis in a regional public enterprise.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HPE Discover business transformation series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next top innovator case study discussion explores the impact and experience of using Internet of Things (IoT) technologies together with big data analysis to better control and manage a burgeoning transportation agency in New Zealand.

To hear more about how fast big data supports rapidly-evolving demand for different types of sensor outputs -- and massive information inputs -- please join me in welcoming our guest, Roger Jones, CTO for Auckland Transport in Auckland, New Zealand. Welcome, Roger.

Roger Jones: Thank you.

Gardner: Tell us about your organization, its scope, its size and what you're doing for the people in Auckland.
Jones: Auckland Transport was formed five years ago -- we just celebrated our fifth birthday -- from an amalgamation of six regional councils. All the transport functions were merged along with the city functions, to form a super-city concept, of which transport was pulled out and set up as a council-controlled organization.

But it's a semi-government organization as well. We're funded by the government and the ratepayers, and we earn our own income as well.

We have multiple stakeholders. We're run by a board, an independent board, as a commercial company.

We look after everything to do with transport in the city: All the roads, everything on the roads, light poles, rubbish bins, the maintenance of the roads and the footpaths and the grass bins, boarding lights, and public transport. We run and operate the ferries, buses and trains, and we also promote and manage cycling across the city, walking activities, commercial vehicle planning, how they operate across the ports and carry their cargoes, and also carpooling schemes.

Gardner: Well, that's a very large, broad set of services and activities. Of course a lot of people in IT are worried about keeping the trains running on time as an analogy, but you're literally doing that.

Real-time systems

Jones: Yeah. We have got a lot of real-time systems, and trains. We've just brought in a whole new electric train fleet. So all of the technology that goes with that has to be worked through. That's the real-time systems on the platforms, right through to how we put Wi-Fi on to those trains and get data off those trains.

So all of those trains have closed-circuit television (CCTV) cameras on them for safety. It's how you get all that information off and analyze it. There's about a terabyte of data that comes off all of those trains every month. It's a lot of data to go through and work out what you need to keep and what you don’t.

Gardner: Of course, you can't manage and organize things unless you can measure and keep track of them. In addition to that terabyte you talked about from the trains, what's the size of the data -- and not just data as we understand it, unstructured data, but content -- that you're dealing with across all these other activities?

Jones: Our traditional data warehouse is about three terabytes, in round numbers, and on the CCTV we take about eight petabytes of data a week, and that's what we're analyzing. That's from about 1,800 cameras that are out on the streets. They're in a variety of places, mostly on intersections, and they're doing a number of functions.

They're counting vehicles. Under the new role, what we want to do is count pedestrians and cyclists and have the cyclists activate the traffic lights. From a cycle-safety perspective, the new carbon fiber bikes don’t activate the magnetic loops in the roads. That's a bone of contention -- they can’t get the lights to change. We'll change all that using CCTV analytics and promote that.

But we'll also be able to count vehicles that turn right and where they go in the city through number plate recognition. By storing that, when a vehicle comes into the city, we would be able to see if they traveled through the city and their average length of stay.

What we're currently working on is putting in a new parking system, where we'll collect all the data about the occupancy of parking spaces and be able to work out, in real time, the probability of getting a car parked in a certain street, at a certain time. Then, we'll be able to make that available to the customer, and especially the tradesman, who need to be able to park to do their business.
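
A minimal sketch of that occupancy-probability idea, assuming a hypothetical feed of (street, hour, free spaces) snapshots; Auckland Transport's actual parking system and schema are not shown here.

```python
# Minimal sketch: probability of finding a free space on a given street
# at a given hour, estimated from historical occupancy snapshots.
# The (street, hour, free_spaces) layout and figures are hypothetical.
from collections import defaultdict

observations = [
    ("Queen St", 9, 0), ("Queen St", 9, 2), ("Queen St", 9, 0), ("Queen St", 9, 1),
    ("Queen St", 14, 5), ("Queen St", 14, 3), ("Queen St", 14, 0),
]

counts = defaultdict(lambda: [0, 0])   # key -> [snapshots with a free space, total snapshots]
for street, hour, free in observations:
    counts[(street, hour)][0] += 1 if free > 0 else 0
    counts[(street, hour)][1] += 1

def p_free(street, hour):
    hits, total = counts.get((street, hour), (0, 0))
    return hits / total if total else None

print(p_free("Queen St", 9))    # -> 0.5
print(p_free("Queen St", 14))   # -> ~0.67
```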

Gardner: Very interesting. We've heard a lot about smart cities and bringing intelligence to bear on some of these problems and issues. It sounds like you're really doing that. In order for you to fulfill that mission, what was lacking in your IT infrastructure? What did you need to change, either in architecture or an ability to scale or adapt to these different types of inputs?

Merged councils

Jones: The key driver was having merged five councils. We had five different CCTV systems, for instance, watched by people manually. If you think about 1,800 cameras being monitored by maybe three staff at a time, it's very obvious that they can't see what's actually happening in real time, and most of the public safety events were being missed. The cameras were being used for reactive investigation rather than active management of a problem at a point in time.

That drove what we were doing around CCTV and the analytics -- how we automate it and make it easy for operators to be presented, in real time, with the situation they need to manage now, so they can be proactive. That was the key driver.

When we looked at that, and at all the other scenes around the city, we asked how we put it all together, process it in real time, and make it available again -- to ourselves, to the police, to the emergency services, and to third-party application developers who can build their own applications using that data. It's of no value if it's historic.

Gardner: So, a proverbial Tower of Babel. How did you solve this problem in order to bring those analytics to the people who can then make good use of it and in a time frame where it can be actionable?

Jones: We did a scan, as most IT shops would do, around what could and couldn’t be done. There’s a mix of technologies out there, lots and lots of technologies. One of the considerations was which partner we should go with. Which one was going to give us longevity of product and association, because you could buy a product today, and in the changing world of IT, it’s out of business, being bought out, or it’s changed in three years time. We needed a brand that was going to be in there for the long haul.
Part of that was the brand, and there are multiple big brands out there. Did they have the breadth of the toolsets that we were looking for, both from a hardware perspective, managing the hardware, and the application perspective? That’s where we selected Hewlett Packard Enterprise (HPE), taking all of those factors into account.

Gardner: Tell us a bit about what you're doing with data. On the front end, you're using a high-speed approach, perhaps in a warehouse, you're using something that will scale and allow for analytics to take place more quickly. Tell us about the tiering and the network and what you've been able to do with that?

Jones: What we've done is taken a tiered approach. For instance, the analytics on the CCTV comes in and gets processed by the HPE IDOL engine. That strips most of it out. We integrate that into an incident management system, which is also running on the IDOL engine.

Then, we take the statistics and the pieces that we want to keep and we're storing that in HPE Vertica. The parking system will go into HPE Vertica because it’s near real-time processing of significant volumes.

The traditional data warehouse, which is a SQL data warehouse, is still very valid today, and it will be valid tomorrow. That's where we're putting a lot of the corporate information and tying a lot of the statistical information together, so that we have all the historic information around the real-time data -- information that was always in the old data warehouse.

Combining information

We tie that together with our financials. A lot of smaller changing datasets are held in that data warehouse. Then, we combine that information with the stuff in Vertica and the Microsoft Analytics Platform System (APS) appliances to get us an integrated reporting at the front end in real time.

We're making a lot of that information available through an API manager, so that whatever we do internally is just a service that we can pick up and reuse or make available to whoever we want to make it available to. It’s not all public, but some of it is to our partners and our stakeholders. It’s a platform that can manage that.
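
To illustrate that "everything internal is a service we can expose" point, here is a minimal read-only JSON endpoint sketch using only the Python standard library. The route and payload are assumptions for illustration, not Auckland Transport's actual API manager or data.

```python
# Minimal sketch: exposing an internal dataset as a read-only JSON service.
# The route and payload are hypothetical, not Auckland Transport's actual API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PARKING = {"queen-st": {"free_probability": 0.42, "updated": "2016-03-08T09:00:00"}}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/parking/"):
            street = self.path.rsplit("/", 1)[-1]
            payload = PARKING.get(street)
            self.send_response(200 if payload else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(payload or {"error": "unknown street"}).encode())
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```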

Gardner: You mentioned that APS appliance, a Microsoft and HPE collaboration. That’s to help you with that real-time streaming, high velocity, high volume data, and then you have your warehouse. Where are these being run? Do you have a private cloud? Do you have managed hosting, public cloud? Where are the workloads actually being supported?

Jones: The key workloads around the CCTV, the IDOL engine, and Vertica are all running on HPE kit on our premises, but managed by HPE Critical Watch. That's almost an end-to-end HPE service; it just happens to be on our facilities. The rest is, again, on our facilities.

The problem in New Zealand is that there aren't many private clouds that can be used by government agencies. We can’t offshore it because of latency issues and the cost of shipping data to and from the cloud from the ISPs, who know how to charge on international bandwidth.

Gardner: Now that you've put your large set of services together, what are some of the paybacks that you've been able to get? How do you get a return on investment (ROI), which must be pretty sizable to get this infrastructure in place? What are you able to bring back to the public service benefits by having this intelligence, by being able to react in real time?

Jones: There are two bits to this. The traditional data warehouse was bottlenecked. If you take, from an internal business perspective, the processing out of our integrated feed system, which was a batch-driven system, the processing window each night is around 4.5 hours, and processing the batch file took just over that.

We were actually running into not getting the batch file processed until about 6 a.m. At that time, the service operators, the bus operators, the ferry operators have already started work for the day. So they weren’t getting yesterday’s information in time to analyze what to do today.

Using the Microsoft APS appliance we've cut that down, and that process now takes about two hours, end-to-end. So we have a huge performance increase. That means that by the time the operators come in, they have yesterday’s information and they can make the right business decisions.

Customer experience

On the public front, I'd put it back to the customer experience. If you go into a car park and have an incident with somebody in the car park, your expectation is that somebody would be monitoring that and somebody will come to your help. Under the old system that was not the case. It would be pure coincidence if that happened.

Under the new scenario, from a public perception, that will be alerted, something will happen, and someone will come to you. So public safety has taken a huge step up. That has no financial ROI directly for us. It does across the medical spectrum and the broader community spectrum, but for us as a transport agency, it has no true ROI, except for customer expectations and perceptions.

Gardner: Well, as taxpayers having expectations met, it's probably a very strong attribute for you. When we look at your architecture, it strikes me that this is probably something more people will be looking to do, because of this IoT trend, where more sensors are picking up more data. It’s data that’s coming in, maybe in the form of a video feed across many different domains or modes. It needs to be dealt with rapidly. What do you see from your experience that might benefit others as they consider how to deal with this IoT architectural challenge?

Jones: We had some key learning from this. That’s a very good point. IoT is all about connecting in devices. When we went from the old CCTV systems to a new one, we didn’t actually understand that some of that data was being aggregated and lost forever at the front end, and what was being received at the back end was only a snippet.

When you start streaming data in real-time at those volumes, it impacts your data networks. Suddenly your data networks become swamped, or potentially swamped, with large volumes of data.

That then drove us to thinking about how to put that through a firewall, and the reality is that you can't. The firewalls aren't built to handle that. We're running F5s, and when we looked at it, they would not have handled the volume of CCTV traffic.

So then you start driving to other things about how you secure your data and how you secure the endpoints. Tools for looking down your network -- so that you understand what's connected, what's changed at the connection end, and what's changing in the traffic patterns on your network -- become essential to an organization like us, because there's no way we can secure all the endpoints.

Now, a set of traffic lights has a full data connection at the end. If someone opens a cabinet and plugs in a PC, how do you know that they have done that? That's what we have to protect against. The only way to do that is to know that something abnormal is there -- it's not the normal traffic coming from that area of the network -- and then to flag it and block it off. That's where we're heading, because that's the only way we can see the IoT working from a security perspective.
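
A minimal sketch of that "flag the abnormal" idea: compare an endpoint's current traffic against its own historical baseline. The thresholds and byte counts are illustrative assumptions, not the agency's actual monitoring tooling.

```python
# Minimal sketch: flag a roadside endpoint whose traffic departs from its
# own baseline. Thresholds and sample figures are illustrative only.
import statistics

# Bytes per 5-minute interval observed historically for one traffic-light cabinet.
baseline = [5200, 4900, 5100, 5300, 5000, 5150, 4950, 5250]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed_bytes, sigmas=4):
    """True if the new reading is far outside the endpoint's normal range."""
    return abs(observed_bytes - mean) > sigmas * stdev

print(is_anomalous(5100))      # normal polling traffic -> False
print(is_anomalous(250000))    # someone plugged a PC into the cabinet -> True
```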

Gardner: Now Roger, when you put this amount of data to work, when you've solved some of those networking issues and you have this growing database and historical record of what takes place, that can also be very valuable. Do you expect that you'll be analyzing this data over historical time periods, looking for trends and applying that to feedback loops where you can refine and find productivity benefits? How does this grow over time in value for you as a public-service organization?

Integrated system

Jones: The first real payback for us has been the integrated ticketing system. We run a tag-on, tag-off electronic system. For the first time, we understand where people are traveling to and from, the times of day they're traveling, and, to a certain extent, the demographics of those travelers. We know if they're a child, a pensioner, a student, or a regular adult user.

For the first time, we're actually understanding, not only just where people get on, but where they get off and the time. We can now start to tailor our messaging, especially for transport. For instance, if we have a special event, a rugby game or a pop concert, which may only be of interest to a certain segment of the population, we know where to put our advertising or our messaging about the transport options for that. We can now tailor that to the stops where people are there at the right time of day.
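
A minimal sketch of turning those tag-on/tag-off events into origin-destination counts. The event layout and stop names are assumptions for illustration, not the ticketing system's actual format.

```python
# Minimal sketch: pairing tag-on/tag-off events into origin-destination trips.
# The event layout (card, action, stop, time) is hypothetical.
from collections import Counter

events = [
    ("card1", "on",  "Britomart", "07:40"), ("card1", "off", "Newmarket", "07:55"),
    ("card2", "on",  "Henderson", "08:01"), ("card2", "off", "Britomart", "08:39"),
    ("card1", "on",  "Newmarket", "17:10"), ("card1", "off", "Britomart", "17:26"),
]

open_trips, od_counts = {}, Counter()
for card, action, stop, time in events:
    if action == "on":
        open_trips[card] = stop
    elif card in open_trips:
        od_counts[(open_trips.pop(card), stop)] += 1

for (origin, destination), trips in od_counts.most_common():
    print(f"{origin} -> {destination}: {trips}")
```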

We could never do that before, but from a planning perspective, we now have a view of who travels across town, who travels in and out of the city, how often, how many times a day. We've never ever had that. The planners have never had that. When we get the parking information coming in about the parking occupancy, that’s a new set of data that we have never had.

This is very much about the planners having reliable information. And if we go through the license plate reading, we'll be able to see where trucks come into the city and where they go through.

One of our big issues at the moment is that we have a link route that goes into the port for the trucks. It's a motorway. How many of the trucks use that versus how many take the shortcut straight through the middle of the city? We don't know that. We can do ad-hoc surveys, but now we'll have that in real time, constantly, forever, and the planners can then use it when they're planning the heavy transport options.

Gardner: I’m afraid we will have to leave it there. We have been learning about how big data, modern networks, and a tiered architectural approach has helped a transportation agency in New Zealand improve its public safety, its reaction to traffic and other congestion issues, and also set in place a historic record to help it improve its overall transportation capabilities.

So I'd like to thank our guest, Roger Jones, CTO for Auckland Transport in Auckland, New Zealand. Thank you, Roger.
Jones: Thanks very much.

Gardner: And thank you, too, to our audience for joining us for this Hewlett Packard Enterprise transformation and innovation interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on the impact and experience of using Internet of Things technologies together with big data analysis in a regional public enterprise. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.

You may also be interested in:



  • Extreme Apps approach to analysis makes on-site retail experience king again
  • How New York Genome Center Manages the Massive Data Generated from DNA Sequencing


  • Microsoft sets stage for an automated hybrid cloud future with Azure Stack Technical Preview
  • The Open Group president, Steve Nunn, on the inaugural TOGAF User Group and new role of EA in business transformation
  • Learn how SKYPAD and HPE Vertica enable luxury brands to gain rapid insight into consumer trends
  • Procurement in 2016—The supply chain goes digital
  • Redmonk analysts on best navigating the tricky path to DevOps adoption
  • DevOps by design--A practical guide to effectively ushering DevOps into any organization
  • Need for Fast Analytics in Healthcare Spurs Sogeti Converged Solutions Partnership Model
  • HPE's composable infrastructure sets stage for hybrid market brokering role
  • Nottingham Trent University Elevates Big Data's role to Improving Student Retention in Higher Education
  • Forrester analyst Kurt Bittner on the inevitability of DevOps
  • Agile on fire: IT enters the new era of 'continuous' everything
  • Big data enables top user experiences and extreme personalization for Intuit TurboTax
Monday, November 09, 2015

    Internet of Things Brings On Development Demands That DevOps Manages, Say Experts

    Transcript of a BriefingsDirect discussion on how continuous processes around development and deployment of applications impact and benefit the Internet of Things trend.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Watch for Free: DevOps, Catalyst of the Agile Enterprise. Sponsor: Hewlett Packard Enterprise.

    Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

    Our next DevOps thought leadership discussion explores how continuous processes around the development and deployment of applications are both impacted by -- and a benefit to -- the Internet of Things (IoT) trend. [Watch for Free: DevOps, Catalyst of the Agile Enterprise.]

    To help better understand the relationship between DevOps and a plethora of new end-devices and data, please join me in welcoming Gary Gruver, consultant, author, and a former IT executive who has led many large-scale IT transformation projects. Welcome, Gary.

    Gary Gruver: Thank you. It’s nice to be here.

    Gardner: We're also here with John Jeremiah, Technology Evangelist at Hewlett Packard Enterprise (HPE). He's on Twitter at @j_jeremiah. Welcome, John.
    John Jeremiah: Hi, Dana. Thanks.

    Gardner: Let’s talk about how the DevOps trend extends not to just traditional enterprise IT and software applications, but to a much larger set of applications -- those in the embedded space, mobile, and end-devices of all sorts. Gary, why is DevOps even more important when you have so many different moving parts as we expect in IoT?

    Gruver: In software development, everybody needs to be more productive. Software is no longer just on websites and in IT departments. It's going in everywhere across industry. It has gone into every product in every place, and being able to differentiate your product with software is becoming more and more important to everybody.

    Gardner: John, from your perspective, is there a sense that DevOps is more impactful, more powerful when we apply it to IoT?

    Jeremiah: The reality is that IoT is moving as fast as mobile is -- and even faster. If you don't have the ability to change your software to evolve -- to iterate as there is new business innovation -- you're not going to be able to keep up and be competitive. So IoT is going to require a DevOps approach in order to be successful.

    Gardner: In the past, we've had a separate development organization and approach to embedded devices. Do we need to still to do that, or can we combine traditional enterprise software with DevOps and apply the same systems architecture and technologies to all sorts of development?

    Software principles

    Gruver: The principles of being able to keep your code base more "releasable," to work under a prioritized backlog, to work through the process of adding automated testing, and frequent feedback to the developers so that they get better at it -- this all applies.

    Therefore, for embedded systems you are going to need to develop simulators and emulators for automated testing. A simulator is a representation of the final product that can be run on a server. As much as possible, you want to be able to create a simulator that represents the software characteristics of the final product. You can then use this and trust it to find defects, because the amount of automated testing you are going to need to be running to transform your businesses is huge. If you don’t have an affordable place like a server farm to run that, it just doesn’t work. [Watch for Free: DevOps, Catalyst of the Agile Enterprise.]

    If you have custom ASICs in the product, you're also going to need to create an emulator to test the low-level firmware interacting with the ASIC. This is similar to the simulator, but also includes the custom ASIC and electronics from the final product. I see way too many embedded organizations that are trying to transform their processes give up on using simulators and emulators because they're not finding the defects they want to -- yet they haven't invested in making them robust enough to be effective.

    One of the first things I talk about with people that have embedded systems is that you're not going to be successful transforming your business until you create simulators and emulators that you can trust as a test environment to find defects.
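
As a rough illustration of what testing against a simulator looks like, here is a minimal sketch in Python: a software-only stand-in for the device, plus automated checks that can run by the thousands on a server farm. The simulator's interface and behavior are invented for illustration, not HP's actual firmware test harness.

```python
# Minimal sketch: a software-only printer simulator used as the test target,
# so firmware-level checks can run on servers instead of physical hardware.
# The simulator's behavior here is invented for illustration.
class PrinterSimulator:
    """Stands in for the device: same interface, no paper, no ASIC."""
    def __init__(self):
        self.queue = []

    def submit_job(self, pages):
        if pages <= 0:
            raise ValueError("job must contain at least one page")
        self.queue.append(pages)
        return len(self.queue)          # position in the queue acts as a job id

    def pages_pending(self):
        return sum(self.queue)

def test_jobs_accumulate():
    sim = PrinterSimulator()
    sim.submit_job(3)
    sim.submit_job(2)
    assert sim.pages_pending() == 5

def test_empty_job_rejected():
    sim = PrinterSimulator()
    try:
        sim.submit_job(0)
        assert False, "expected rejection"
    except ValueError:
        pass

if __name__ == "__main__":
    test_jobs_accumulate()
    test_empty_job_rejected()
    print("simulator tests passed")
```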

    Gardner: How about working as developers and testers with more of an operations mentality?

    Gruver: At HPE and HP, we were running 15,000 hours of testing on the code base every day. When it was manual, we couldn’t do that and we really couldn’t transform our business until we fundamentally put that level of automated testing in place.

    For laser printer testing, there's no way we would have been able to have enough paper to run that many hours of testing, and we would have worn out printers. There weren’t enough trees in Idaho to make enough paper to do that testing on the final product. Therefore, we needed to create a test farm of simulators and emulators to drive testing upstream as much as possible to get rapid feedback to our developers.

    Gardner: Tell us how DevOps helped in the firmware project for HP printers, and how that illustrates where DevOps and embedded development come together?

    No new features

    Gruver: I had an opportunity to take over leading the LaserJet firmware for our organization several years ago. It had been the bottleneck for the organization for two decades. We couldn't add a new product or plan without checking the firmware, and we had given up asking for new features.

    Then 2008 hit, and we were forced to cut our spending, as a lot of people in the industry were at that time. We could no longer invest to spend our way out of problems, so we had to engineer our solution.
    We were fundamentally looking for anything that we could do to improve productivity. We went on a journey of what I would call applying Agile and DevOps principles at scale, as opposed to trying to scale small teams in the organization. We went through this process of continually trying to improve with a group of 400-800 engineers and working through that process. At the end of three years, firmware was no longer the bottleneck.

    We had gone from five percent of our capacity going to innovation to 40 percent and we were supporting 1.5 times more products. So we took something that was a bottleneck for the business, completely unleashed that capability, and fundamentally transformed the business.
    The details are captured in my first book, A Practical Approach to Large-Scale Agile Development. It’s available at all your finest bookstores. [Also see Gary's newest book, Leading the Transformation: Applying Agile and DevOps Principles at Scale.]

    Gardner: And how does this provide a harbinger of things to come? What you’ve done with firmware at HP and Laser Printers several years ago, how does that paint a picture of how DevOps can be powerful and beneficial in the larger IoT environment?

    Gruver: Well, IoT is going to move so fast that nobody knows exactly what they need and what the capabilities are. It's the ability to move fast. At HP and HPE, we went 2-3 times faster than we ever thought possible. What you're seeing in DevOps is that the unicorns of the world are showing that software development can go much faster than anybody ever thought was possible before.

    That’s going to be much more important as you're trying to understand how this market evolves, what capabilities customers want, and where they want them in IoT. The companies that can move fast and respond to the feedback from the customers are going to be the ones that win. [Watch for Free: DevOps, Catalyst of the Agile Enterprise.]

    Gardner: John, we've seen sort of a dip in the complexity around mobile devices in particular when people consolidated around iOS and Android after having hit many targets, at least for a software platform, in the past. That may have given people a sense of solace or complacency that they can develop mobile applications rapidly.

    But we are now getting, to Gary's point, to a place where we don't really know what sort of endpoints we're going to be dealing with. We're looking at automated cars, houses, drones, appliances, and even sensors within our own bodies.

    What are some of the core principles we need to keep in mind to allow for the rapid and continuous development processes for IoT to improve, but without stumbling again as we hit complexity when it comes to new targets?

    New technologies

    Jeremiah: One of the first things that you're going to have to do is embrace service virtualization strategies in order to quickly virtualize new technologies and be able to quickly simulate those technologies when they come to life. We don't know exactly what they're going to be, but we have to be able to embrace that and bring it into our process and methodology.

    And as Gary was talking about earlier, the strategies for going fast that apply in firmware apply in the enterprise as well: building automated testing, failing as fast as you can, and learning as you go. As we see complexity increase, the real key is going to be the ability to harness that and use virtualization as a strategy to move forward.
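
A minimal sketch of the service-virtualization idea: the dependent service does not exist yet, so a virtual stand-in answers the same calls during tests. The interface and field names are assumptions, not a specific product's API.

```python
# Minimal sketch of service virtualization: the real device-registry service
# doesn't exist yet, so a virtual stand-in answers the same calls in tests.
# Interface and field names are assumptions, not a specific tool's API.
class VirtualDeviceRegistry:
    """Answers like the future registry service, from canned data."""
    def __init__(self, canned):
        self.canned = canned

    def lookup(self, device_id):
        return self.canned.get(device_id, {"status": "unknown"})

def firmware_update_allowed(registry, device_id):
    # Code under test: depends only on the registry interface, real or virtual.
    record = registry.lookup(device_id)
    return record.get("status") == "active" and not record.get("recall", False)

def test_update_blocked_for_recalled_device():
    registry = VirtualDeviceRegistry({"dev-42": {"status": "active", "recall": True}})
    assert firmware_update_allowed(registry, "dev-42") is False

if __name__ == "__main__":
    test_update_blocked_for_recalled_device()
    print("ok")
```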

    Gardner: Any other metrics of success? How do we know we're succeeding with DevOps? We talked about speed. We talked about testing early and often. How do you know you're doing this well? For organizations that want to have a good way to demonstrate success, how do they present that?

    Gruver: I wouldn't just start off by trying to do DevOps. If you're going to transform your software development processes, the only reason you would go through that much turmoil is because your current development processes aren't meeting the needs of your business. Start off with how your current development processes aren't meeting your business needs.

    The executives are in a best position to clarify exactly this gap and get the organization going down a continuous improvement process to improve the development and delivery processes.
    Most organizations will quickly find that DevOps has some key tools in the toolbox that they want to start using immediately to start taking some inefficiencies out of the development process.

    But don't go off to do DevOps and measure how well you did it. We're all business executives. We run businesses, we manage businesses, and we need to focus on what the business is trying to achieve and just use the tools that will best help that.

    Gardner: Where do we go next? DevOps has become a fairly popular concept now. It's getting a lot of attention. People understand that it can have a very positive impact, but getting it in place isn't always easy. There are a lot of different spinning variables -- culture, organization, management. In an enterprise that's looking to expand in the internet of things, perhaps they're not doing that level of development and deployment.

    They probably have been a bit more focused on enterprise applications, rather than devices and embedded. How do you start up that capability and do it well within a software development organization? Let's look at moving from traditional development to the IoT development. What should we be keeping in mind?

    Gruver: There are two approaches. One is, if you have loosely coupled architectures like most unicorns do, then you can empower the teams, add some operational members, and let them figure it out. Most large enterprise organizations have more tightly coupled architectures that require large numbers of people working together to develop and deliver things. I don't think those transformations are going to be effective until you find inspired executives who are willing to lead the transformation and work through the process.

    Successful transformations

    I've led a couple of successful transformations. If you look at examples from the DevOps Enterprise Summit that Gene Kim led, the common thing that you saw in most of those is that the organizations that were making progress had an executive that was leading the charge, rallying the troops, and making that happen. It requires coordinating work across a large number of teams, and you need somebody who can look across the value chain and muster the resources to make the technical and the cultural changes. [Read a recent interview with Kim on DevOps and security.]

    Where a lot of my passion lies now, and the reason I wrote my second book, is that I don't think there are a lot of resources for executives to learn how to transform large organizations. So I tried to capture everything that I knew about how best to do that.

    My second book, Leading the Transformation: Applying Agile and DevOps Principles at Scale, is a resource that enables people to go faster in the organization. I think that’s the next key launch point -- getting the executives engaged to lead that change. That’s going to be the key to getting the adoption going much better. [Watch for Free: DevOps, Catalyst of the Agile Enterprise.]

    Gardner: John, what about skills? It's one thing to get the top-down buy-in, and it's one thing to recognize the need for transformation and put in some of the organizational building blocks. But ultimately you need to have the right people with the right skills.

    Any thoughts about how IoT will create demand for a certain set of skills and how well we're in a position to train and find those people?

    Jeremiah: IoT requires people to embrace skills and an understanding much broader than their narrow silo. They'll need to develop expertise in what they do, but they have to have the relationships. They have to have the ability to work across the organization to learn. One of the skills is constantly learning as they go. As Gary mentioned earlier, DevOps is never "done." It's a journey of learning, of growing and getting better.

    Understanding process -- understanding how things work so you can continuously improve them -- is a skill that a lot of times people don't bring to the table. They know their piece, but they don't often think about the bigger picture. So it's a set of skills, and it's beyond a single technology. It's understanding that they are really not just in IT -- they're really a part of the business. I love the way Gary said that earlier, and I agree with him. Seeing themselves as part of the business is a different mindset that they have to have as they go to work.

    Then, as they apply their skills, they're focusing on how they deliver business value. That’s really the change.

    Gardner: How do you do DevOps effectively when you're outsourcing a good part of your development? You may need to do that to find the skills.

    For embedded systems, for example, you might look to an outside shop that has special experience in that particular area, but you may still want to get DevOps. How does that work?

    Gruver: I think DevOps is key to making outsourcing work, especially if you have different vendors that you're outsourcing to because it forces coordination of the work on a frequent basis. Continuous integration, automated testing, and continuous deployment are the forcing functions that align the organization with working code across the system.

    When you're enabling people to go off and work on separate branches and separate issues, and you have an integration cycle late in the process, that's where you get the dysfunction -- a bunch of different organizations coming together with stuff that doesn't work. If you force that integration to happen on a daily basis, or multiple times a day, you get the system aligned and working well before people spend time and energy on something that either doesn't work together or won't work well in production. [Watch for Free: DevOps, Catalyst of the Agile Enterprise.]
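
A rough sketch of such a forcing function: a frequent integration gate that updates every vendor's component and runs the shared automated tests, so misalignment surfaces within hours rather than at a late integration phase. Repository names and commands are placeholders, not a specific team's pipeline.

```python
# Minimal sketch: a frequent integration gate across outsourced components.
# Repo names and test commands are placeholders for illustration only.
import subprocess, sys

COMPONENTS = ["vendor-a-firmware", "vendor-b-connectivity", "inhouse-ui"]

def run(cmd, cwd=None):
    print(">>", " ".join(cmd))
    return subprocess.run(cmd, cwd=cwd).returncode == 0

def integration_gate():
    for repo in COMPONENTS:
        if not run(["git", "-C", repo, "pull", "--ff-only"]):
            return f"{repo}: could not update"
        if not run([sys.executable, "-m", "pytest", "-q"], cwd=repo):
            return f"{repo}: automated tests failed"
    # System-level run against the shared simulator once all parts are current.
    if not run([sys.executable, "-m", "pytest", "-q", "system_tests"]):
        return "system tests failed against the simulator"
    return None

if __name__ == "__main__":
    failure = integration_gate()
    sys.exit(f"BLOCKED: {failure}" if failure else 0)
```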

    Gardner: We have been exploring how continuous processes around development and deployment of applications impact and benefit the Internet of Things trend. I'd like to thank our guests, Gary Gruver, consultant, author and a former IT executive who has led many large-scale IT transformation projects, and John Jeremiah, Technology Evangelist at Hewlett Packard Enterprise on Twitter at @j_jeremiah.
    And I'd also like to extend a big thank you to our audience for joining us for this DevOps and Internet of Things innovation discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Watch for Free: DevOps, Catalyst of the Agile Enterprise. Sponsor: Hewlett Packard Enterprise.

    Transcript of a BriefingsDirect discussion on how continuous processes around development and deployment of applications impact and benefit the Internet of Things trend. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.
