
Tuesday, March 08, 2016

IoT Plus Big Data Analytics Translate into Better Services Management at Auckland Transport

Transcript of a discussion on the impact and experience of using Internet of Things technologies together with big data analysis in a regional public enterprise.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HPE Discover business transformation series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next top innovator case study discussion explores the impact and experience of using Internet of Things (IoT) technologies together with big data analysis to better control and manage a burgeoning transportation agency in New Zealand.

To hear more about how fast big data supports rapidly-evolving demand for different types of sensor outputs -- and massive information inputs -- please join me in welcoming our guest, Roger Jones, CTO for Auckland Transport in Auckland, New Zealand. Welcome, Roger.

Roger Jones: Thank you.

Gardner: Tell us about your organization, its scope, its size and what you're doing for the people in Auckland.

Jones: Auckland Transport was formed five years ago -- we just celebrated our fifth birthday -- from an amalgamation of six regional councils. All the transport functions were merged, along with the city functions, to form a super-city concept, out of which transport was pulled and set up as a council-controlled organization.

But it's a semi-government organization as well. So we're funded by the government and the ratepayers, and we also generate our own income.

We have multiple stakeholders. We're run by a board, an independent board, as a commercial company.

We look after everything to do with transport in the city: all the roads and everything on them, light poles, rubbish bins, the maintenance of the roads, the footpaths and the grass berms, boarding lights, and public transport. We run and operate the ferries, buses, and trains, and we also promote and manage cycling across the city, walking activities, commercial vehicle planning (how they operate across the ports and carry their cargoes), and carpooling schemes.

Gardner: Well, that's a very large, broad set of services and activities. Of course a lot of people in IT are worried about keeping the trains running on time as an analogy, but you're literally doing that.

Real-time systems

Jones: Yeah. We've got a lot of real-time systems, and trains. We've just brought in a whole new electric train fleet, so all of the technology that goes with that has to be worked through: the real-time systems on the platforms, right through to how we put Wi-Fi onto those trains and get data off them.

So all of those trains have closed-circuit television (CCTV) cameras on them for safety. The question is how you get all that information off and analyze it. About a terabyte of data comes off those trains every month. It's a lot of data to go through to work out what you need to keep and what you don't.

Gardner: Of course, you can't manage and organize things unless you can measure and keep track of them. In addition to that terabyte you talked about from the trains, what's the size of the data -- and not just data as we understand it, unstructured data, but content -- that you're dealing with across all these other activities?

Jones: Our traditional data warehouse is about three terabytes, in round numbers, and on the CCTV we take about eight petabytes of data a week, and that's what we're analyzing. That's from about 1,800 cameras that are out on the streets. They're in a variety of places, mostly on intersections, and they're doing a number of functions.

They're counting vehicles. In their new role, we want them to count pedestrians and cyclists as well, and to let cyclists activate the traffic lights. From a cycle-safety perspective, the new carbon-fiber bikes don't activate the magnetic loops in the roads. That's a bone of contention -- cyclists can't get the lights to change. We'll change all that using CCTV analytics, and promote it.

But we'll also be able to count vehicles that turn right and see where they go in the city through number-plate recognition. By storing that, when a vehicle comes into the city, we'll be able to see whether it traveled through the city and its average length of stay.
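To make the length-of-stay idea concrete, here is a minimal sketch of how dwell time could be derived from stored plate reads. The record layout, camera names, and plates are hypothetical, not Auckland Transport's actual schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical plate reads: (plate, camera, timestamp) captured at city intersections.
reads = [
    ("ABC123", "cam-queen-st", datetime(2016, 3, 8, 8, 5)),
    ("ABC123", "cam-fanshawe-st", datetime(2016, 3, 8, 10, 40)),
    ("XYZ789", "cam-queen-st", datetime(2016, 3, 8, 9, 0)),
    ("XYZ789", "cam-queen-st", datetime(2016, 3, 8, 9, 20)),
]

def average_length_of_stay(reads):
    """Approximate each vehicle's stay as last sighting minus first sighting, then average."""
    sightings = defaultdict(list)
    for plate, _camera, ts in reads:
        sightings[plate].append(ts)
    stays = [max(times) - min(times) for times in sightings.values()]
    return sum(stays, timedelta()) / len(stays) if stays else None

print(average_length_of_stay(reads))  # 1:27:30 for the sample reads above
```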

What we're currently working on is putting in a new parking system, where we'll collect all the data about the occupancy of parking spaces and be able to work out, in real time, the probability of getting a parking space on a certain street at a certain time. Then we'll be able to make that available to the customer, and especially to tradesmen, who need to be able to park to do their business.
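One simple way such a probability could be estimated from historical occupancy records is sketched below; the street names, bay counts, and hourly binning are illustrative assumptions rather than the actual parking system's design.

```python
# Hypothetical occupancy snapshots: (street, hour_of_day, occupied_bays, total_bays).
snapshots = [
    ("Victoria St", 9, 38, 40),
    ("Victoria St", 9, 40, 40),
    ("Victoria St", 14, 22, 40),
    ("Victoria St", 14, 30, 40),
]

def p_free_bay(snapshots, street, hour):
    """Share of historical snapshots for this street and hour that had at least one free bay."""
    obs = [(occ, tot) for s, h, occ, tot in snapshots if s == street and h == hour]
    if not obs:
        return None
    return sum(1 for occ, tot in obs if occ < tot) / len(obs)

print(p_free_bay(snapshots, "Victoria St", 9))   # 0.5
print(p_free_bay(snapshots, "Victoria St", 14))  # 1.0
```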

Gardner: Very interesting. We've heard a lot about smart cities and bringing intelligence to bear on some of these problems and issues. It sounds like you're really doing that. In order for you to fulfill that mission, what was lacking in your IT infrastructure? What did you need to change, either in architecture or an ability to scale or adapt to these different types of inputs?

Merged councils

Jones: The key driver was that, having merged five councils, we had five different CCTV systems, for instance, all watched manually. If you think about 1,800 cameras being monitored by maybe three staff at a time, it's very obvious that they can't actually see what's happening in real time, and most public-safety events were being missed. The cameras were being used for reactive investigation rather than active management of a problem as it happens.

That drove what we were doing around CCTV and the analytics: how we automate it and make it easy for operators to be presented, in real time, with the situation they need to manage now, so they can be proactive. That was the key driver.

When we looked at that, and at all the other scenes around the city, we asked how we could put it all together, process it in real time, and make it available again, both to ourselves, to the police, to the emergency services, and to other third-party application developers who can build their own applications using that data. It's of no value if it's only historic.

Gardner: So, a proverbial Tower of Babel. How did you solve this problem in order to bring those analytics to the people who can make good use of them, in a time frame where they're actionable?

Jones: We did a scan, as most IT shops would do, of what could and couldn't be done. There's a mix of technologies out there, lots and lots of technologies. One of the considerations was which partner we should go with. Which one was going to give us longevity of product and association? You could buy a product today and, in the changing world of IT, in three years' time it could be out of business, bought out, or changed. We needed a brand that was going to be in there for the long haul.

Part of that was the brand, and there are multiple big brands out there. Did they have the breadth of toolsets we were looking for, both from a hardware perspective, managing the hardware, and from an application perspective? Taking all of those factors into account, we selected Hewlett Packard Enterprise (HPE).

Gardner: Tell us a bit about what you're doing with data. On the front end, you're using a high-speed approach; perhaps in the warehouse, you're using something that will scale and allow analytics to take place more quickly. Tell us about the tiering and the network and what you've been able to do with that.

Jones: What we've done is take a tiered approach. For instance, the analytics on the CCTV comes in and gets processed by the HPE IDOL engine. That strips out most of it. We integrate that into an incident-management system, which is also running on the IDOL engine.

Then we take the statistics and the pieces that we want to keep and store them in HPE Vertica. The parking system will go into HPE Vertica as well, because it's near-real-time processing of significant volumes.
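As a rough illustration of that loading path, here is a minimal sketch using the open-source vertica-python client and Vertica's bulk COPY statement; the table, connection details, and sample rows are hypothetical, and the actual pipeline is not described in this level of detail in the discussion.

```python
import vertica_python  # open-source Vertica client, used here purely for illustration

# Placeholder connection details, not a real endpoint.
conn_info = {
    "host": "vertica.example.local",
    "port": 5433,
    "user": "loader",
    "password": "********",
    "database": "analytics",
}

# Hypothetical table for parking-occupancy statistics.
ddl = """
CREATE TABLE IF NOT EXISTS parking_occupancy (
    street      VARCHAR(100),
    observed_at TIMESTAMP,
    occupied    INT,
    capacity    INT
)
"""

# A small micro-batch of sensor readings, loaded through Vertica's bulk COPY path
# rather than row-by-row INSERTs, which suits near-real-time feeds of this volume.
rows = (
    "Victoria St,2016-03-08 09:00:00,38,40\n"
    "Victoria St,2016-03-08 09:05:00,40,40\n"
)

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    cur.execute(ddl)
    cur.copy("COPY parking_occupancy FROM STDIN DELIMITER ',' ABORT ON ERROR", rows)
    conn.commit()
finally:
    conn.close()
```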

The traditional data warehouse, which is a SQL data warehouse, is still very valid today, and it will be valid tomorrow. That's where we put a lot of the corporate information and tie a lot of the statistical information together, so that alongside the real-time data we have all the historic information that has always lived in the data warehouse.

Combining information

Jones: We tie that together with our financials. A lot of smaller, changing datasets are held in that data warehouse. Then we combine that information with what's in Vertica and the Microsoft Analytics Platform System (APS) appliance to give us integrated reporting at the front end in real time.

We're making a lot of that information available through an API manager, so that whatever we do internally is just a service that we can pick up and reuse, or make available to whoever we choose. It's not all public, but some of it goes to our partners and our stakeholders. It's a platform that can manage that.
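The API manager itself isn't named in the discussion, but the kind of internal service it would front could be as simple as the following sketch; Flask, the endpoint path, and the canned data are illustrative stand-ins, not Auckland Transport's actual platform or API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In practice this would query the analytics store (e.g., Vertica); here it is a canned
# lookup keyed by street so the example stays self-contained.
AVAILABILITY = {"victoria st": {"hour": 9, "p_free_bay": 0.5}}

@app.route("/parking/availability")
def parking_availability():
    street = request.args.get("street", "").lower()
    result = AVAILABILITY.get(street)
    if result is None:
        return jsonify({"error": "unknown street"}), 404
    return jsonify({"street": street, **result})

if __name__ == "__main__":
    app.run(port=8080)  # e.g., GET /parking/availability?street=Victoria%20St
```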

Gardner: You mentioned that APS appliance, a Microsoft and HPE collaboration. That's to help you with that real-time streaming of high-velocity, high-volume data, and then you have your warehouse. Where are these being run? Do you have a private cloud? Do you have managed hosting or public cloud? Where are the workloads actually being supported?

Jones: The key workloads, around the CCTV, the IDOL engine, and Vertica, are all running on HPE kit on our premises, but managed by HPE Critical Watch. It's almost an end-to-end HPE service; it just happens to be on our facilities. The rest is also on our facilities.

The problem in New Zealand is that there aren't many private clouds that can be used by government agencies. We can't offshore it because of latency issues and the cost of shipping data to and from the cloud through the ISPs, who know how to charge for international bandwidth.

Gardner: Now that you've put your large set of services together, what are some of the paybacks you've been able to get? How do you get a return on investment (ROI) on what must be a pretty sizable outlay to put this infrastructure in place? What public-service benefits are you able to deliver by having this intelligence and being able to react in real time?

Jones: There are two bits to this. The traditional data warehouse was bottlenecked. Take, from an internal business perspective, the processing out of our integrated feed system, which was a batch-driven system: the processing window each night is around 4.5 hours, and processing the batch file took just over that.

We were actually running into not getting the batch file processed until about 6 a.m. By that time, the service operators, the bus operators, and the ferry operators had already started work for the day, so they weren't getting yesterday's information in time to analyze what to do today.

Using the Microsoft APS appliance we've cut that down, and that process now takes about two hours, end-to-end. So we have a huge performance increase. That means that by the time the operators come in, they have yesterday’s information and they can make the right business decisions.

Customer experience

Jones: On the public front, I'd put it back to the customer experience. If you go into a car park and have an incident with somebody in the car park, your expectation is that somebody is monitoring it and will come to your help. Under the old system, that was not the case. It would have been pure coincidence if that happened.

Under the new scenario, from a public perception, that will be alerted on, something will happen, and someone will come to you. So public safety has taken a huge step up. That has no direct financial ROI for us. It does across the medical spectrum and the broader community spectrum, but for us as a transport agency it has no true ROI, except for customer expectations and perceptions.

Gardner: Well, having taxpayers' expectations met is probably a very strong attribute for you. When we look at your architecture, it strikes me that this is probably something more people will be looking to do, because of this IoT trend, where more sensors are picking up more data. It's data that's coming in, maybe in the form of a video feed, across many different domains or modes, and it needs to be dealt with rapidly. What do you see from your experience that might benefit others as they consider how to deal with this IoT architectural challenge?

Jones: We had some key lessons from this. That's a very good point. IoT is all about connecting devices. When we went from the old CCTV systems to a new one, we didn't actually understand that some of that data was being aggregated and lost forever at the front end, and that what was being received at the back end was only a snippet.

When you start streaming data in real-time at those volumes, it impacts your data networks. Suddenly your data networks become swamped, or potentially swamped, with large volumes of data.

That then drove us to thinking about how to put that through a firewall, and the reality is that you can't. The firewalls aren't built to handle that. We're running F5s, and when we looked at it, they would not have handled the volume of CCTV traffic.

So then you start driving toward other things: how you secure your data, how you secure the endpoints, and tools that look down your networks so that you understand what's connected, what's changed at the connection end, and what's changing in the traffic patterns on your network. Those become essential to an organization like us, because there is no way we can secure all the endpoints.

Now, a set of traffic lights has a full data connection at the end. If someone opens a cabinet and plugs in a PC, how do you know that they have done that? That's what we have to protect against. The only way to do that is to know that something abnormal is there, that it's not the normal traffic coming from that area of the network, and then flag it and block it off. That's where we're heading, because that's the only way we can see IoT working from a security perspective.
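The baseline-and-flag approach he describes might look, in its simplest form, like the sketch below: learn which devices normally talk on each cabinet's segment and flag anything unfamiliar. The flow-record format, cabinet identifiers, and MAC addresses are hypothetical.

```python
from collections import defaultdict

# Hypothetical flow records from field cabinets: (cabinet_id, source_mac, dest_port).
baseline_flows = [
    ("cabinet-17", "aa:bb:cc:00:00:01", 443),
    ("cabinet-17", "aa:bb:cc:00:00:02", 161),
]

def build_baseline(flows):
    """Record which (device, port) pairs are normal for each cabinet."""
    normal = defaultdict(set)
    for cabinet, mac, port in flows:
        normal[cabinet].add((mac, port))
    return normal

def flag_anomalies(normal, live_flows):
    """Flag any device or port not seen in the baseline, e.g. a PC plugged into a cabinet."""
    return [f for f in live_flows if (f[1], f[2]) not in normal.get(f[0], set())]

normal = build_baseline(baseline_flows)
live = [("cabinet-17", "de:ad:be:ef:00:09", 445)]  # an unfamiliar device and port
for cabinet, mac, port in flag_anomalies(normal, live):
    print(f"ALERT: unexpected traffic on {cabinet}: {mac} -> port {port}")
```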

Gardner: Now Roger, when you put this amount of data to work, once you've solved some of those networking issues, this growing database and historical record of what takes place can also be very valuable. Do you expect that you'll be analyzing this data over historical time periods, looking for trends and applying that to feedback loops where you can refine and find productivity benefits? How does this grow in value over time for you as a public-service organization?

Integrated system

Jones: The first real payback for us has been the integrated ticketing system. We run a tag-on, tag-off electronic ticketing system. For the first time, we understand where people are traveling to and from, the times of day they're traveling, and, to a certain extent, the demographics of those travelers. We know whether they're a child, a pensioner, a student, or a normal adult user.

For the first time, we're actually understanding not just where people get on, but where they get off and at what time. We can now start to tailor our messaging, especially for transport. For instance, if we have a special event, a rugby game or a pop concert that may only be of interest to a certain segment of the population, we know where to put our advertising or our messaging about the transport options for it. We can now tailor that to the stops where those people are at the right time of day.
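The kind of origin-destination roll-up that tag-on, tag-off data enables could be as simple as the sketch below; the journey records, stop names, and fare categories are hypothetical.

```python
from collections import Counter

# Hypothetical tag-on/tag-off journeys: (origin_stop, destination_stop, hour, fare_category).
journeys = [
    ("Britomart", "Newmarket", 8, "adult"),
    ("Britomart", "Newmarket", 8, "student"),
    ("Henderson", "Britomart", 7, "adult"),
]

# Origin-destination counts by hour of day: who travels where, and when.
od_by_hour = Counter((origin, dest, hour) for origin, dest, hour, _ in journeys)

# Fare-category mix per origin stop: which stops to target for, say, a student-oriented event.
category_by_origin = Counter((origin, cat) for origin, _, _, cat in journeys)

print(od_by_hour.most_common(1))                     # [(('Britomart', 'Newmarket', 8), 2)]
print(category_by_origin[("Britomart", "student")])  # 1
```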

We could never do that before, but from a planning perspective, we now have a view of who travels across town, who travels in and out of the city, how often, and how many times a day. We've never had that. The planners have never had that. And when we get the parking information coming in about parking occupancy, that's a new set of data we've never had either.

This is very much about the planners having reliable information. And when we get to the license-plate reading, we'll be able to see where trucks come into the city and where they go.

One of our big issues at the moment is that we have a link route that goes into the port for the trucks. It's a motorway. How many of the trucks use that, versus how many take the shortcut straight through the middle of the city? We don't know that. We can do ad-hoc surveys, but now we'll have that in real time, constantly, forever, and the planners can use it when they're planning the heavy-transport options.

Gardner: I'm afraid we'll have to leave it there. We've been learning about how big data, modern networks, and a tiered architectural approach have helped a transportation agency in New Zealand improve its public safety and its reaction to traffic and other congestion issues, and also set in place a historic record to help it improve its overall transportation capabilities.

So I'd like to thank our guest, Roger Jones, CTO for Auckland Transport in Auckland, New Zealand. Thank you, Roger.

Jones: Thanks very much.

Gardner: And thank you, too, to our audience for joining us for this Hewlett Packard Enterprise transformation and innovation interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on the impact and experience of using Internet of Things technologies together with big data analysis in a regional public enterprise. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.
