Tuesday, March 15, 2016

How HPE’s Internal DevOps Paved the Way for Speed in Global Software Delivery

Transcript of a BriefingsDirect discussion on how HPE finds the sweet spot for continuous development and delivery of software products.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HPE Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next DevOps case study explores how HPE’s internal engineering and IT organizations, through research and development, are exploiting the values and benefits of DevOps methods and practices.

To help us better understand the way that DevOps can aid in the task of product and technology development, please join me in welcoming our guest, James Chen, Vice President of Product Development and Engineering at HPE. Welcome, James.

James Chen: Thank you, thank you for having me.

Gardner: First, tell us a little bit about the scale of the organization. Clearly HPE is a technology company and has a very large internal IT organization, perhaps one of the largest among the Global 2000.
DevOps: Solutions That Accelerate Business Innovation
And Meet Market Demands
Learn More Here
Chen: We have a pretty sizable IT organization, as you can imagine. We support all the HPE products and solutions serving our customers. We have about 8,000 to 9,000 employees and a pretty large application landscape -- some 2,500 large, enterprise-scale applications.

We also have six data centers that host all the applications. So it's a pretty complicated infrastructure. DevOps means a lot to us because of the speed and agility that our customers are looking for, and that's why we embarked on our journey to DevOps.

Gardner: Tell us about that journey. How long has it been? How did you get started? Maybe you can offer how you define DevOps, because it's a little bit of a loose topic in how people understand and define it.

Chen: We've been on the DevOps journey for the last couple of years. A certain part of the organization, the developer teams, had already practiced different aspects of DevOps here and there. Someone was driving the complete automation of testing. Someone was doing a kind of continuous integration and continuous delivery (CI/CD), but it never reached the scale that we believed would start impacting the overall enterprise application landscape.

Some months ago, IT embarked on what we called a pilot program for DevOps. We wanted to be the ones doing DevOps in HPE, and the only way you can benefit from DevOps and understand its implications for the IT organization is to just go out and do it. So we picked some very complicated applications, believing that if we could do the pilot well, we would learn a lot as an organization, and it would be helpful to the future of the IT organization and deliver value to the business.

We also believed that our learnings and experiences could help HPE's customers be successful. We believe that every single IT shop is thinking about how it can go to DevOps and how it can increase speed and agility. But they all have to start somewhere. So our journey would be a good story to share with our peers.

Inception point

Gardner: Given that HPE has so many different products, hardware and software, what did you do to find the right inception point? You have a very large inventory of potential places and ways that you could start your DevOps journey. What turned out to be a good place to start that then allowed you to expand successfully?

Chen: We believed the easiest way was to start with some of the home-grown applications. We chose home-grown applications because it's a little bit easier, simply because you don't have the same scale of vendor/ISV dependencies to work with.

We decided to pick a handful of applications. Most of them are very complicated, and some of them are very important. A good example is the OneNote application, our support automation application, which touches every device and every part that we ship to our customers. That application is essentially the collection point for performance data on all the devices in the customer data center -- how we monitor them and how we deal with them.

It's what I consider a very important, enterprise-scale application, and it's mission-critical. That was one of the criteria: pick an application that is really complicated and most likely home-grown. The other criterion was to pick an application whose team had already practiced some of this, was ready to do something, and really wanted to embrace the new methodology and new ideas.

The reason behind that is that we didn't want to set up a separate DevOps team to pair with the existing developer team. Ideally, we wanted the existing developer team to go through that transformation. They became the drivers of the transformation from the old way of working to the new DevOps way. So that was the second criterion: the team, the people themselves, had to be motivated and ready for change.

The third one was the application's scale and impact. We understood the risk and we understood the implications. The better the understanding you have, the easier it is to get buy-in from your business partners and your executive team. Those were the criteria we chose for going into DevOps.

Gardner: I'm really curious. Given this super important application for HPE, how is performance measured and managed across all of these deployments, applying DevOps methodology, and getting that team buy-in? What did it earn you? What’s the payoff? What did you see that made DevOps worthwhile?
Chen: With DevOps, we focused on three dimensions. One is collaboration. What I mean by collaboration is bringing operations into development and development into operations, so the operations and development teams are working side-by-side. That's the new collaborative relationship.

The traditional way was for the developer to finish the product and then throw it over the wall to the operations guy. Then, when something went wrong, we'd start freaking out, asking who owned the issue.

The new way is very close collaboration between the development team and operations. From the get-go, when we start to design a product or software application, we already have the people who will run the operation. They run support within the team, understanding the risk and the operational implications. So that's one dimension, collaboration.

The second piece is automation. You want to figure out a way to automate end-to-end. That's very important. You asked a very good question about how to get buy-in from business partners, who ask, "You're going to do CI/CD. What is the implication if something goes wrong?"

Powerful weapon

Automation has become a very powerful weapon, because when you automate the development and deployment process, it becomes much easier to roll back when something goes wrong. Because each change you make is small and incremental, its impact is much easier to understand. We believe the downtime is much less than with the normal way of doing the process. That's the second dimension, automation.
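
The rollback idea Chen describes can be sketched in a few lines. This is an illustrative Python sketch, not HPE's actual pipeline; the class, version numbers, and health-check hook are all invented:

```python
# Each deployment is a small, versioned increment, so recovering from a bad
# release is just a matter of re-activating the previous version.

class DeployPipeline:
    def __init__(self):
        self.history = []          # successfully deployed versions, in order
        self.active = None         # currently serving version

    def deploy(self, version, health_check):
        """Activate `version`; roll back to the prior release if it fails."""
        previous = self.active
        self.active = version
        if health_check(version):
            self.history.append(version)
            return f"deployed {version}"
        # Small incremental change -> rollback is just restoring `previous`.
        self.active = previous
        return f"rolled back to {previous}"

pipeline = DeployPipeline()
pipeline.deploy("1.0.0", lambda v: True)
result = pipeline.deploy("1.0.1", lambda v: False)   # failed health check
print(result)            # rolled back to 1.0.0
print(pipeline.active)   # 1.0.0
```

The point is that a failed two-week increment costs one small rollback, where a failed six-month release would mean unwinding half a year of changes.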

The third one is codification. Codification means that everything is code. The old way was to define your infrastructure and have someone manually put all the infrastructure together to run an application. Those times are over.

Full DevOps means you're able to write code that's easy to configure, have your infrastructure provisioned based on that code, and be ready to run an application.
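
As a hedged illustration of "everything is code," here is a minimal Python sketch of a declarative infrastructure spec being expanded into concrete resources. The spec format and names are invented for this example and are not HPE's actual tooling:

```python
# The infrastructure an application needs is declared as data; a provisioner
# turns that declaration into (here, simulated) running resources.

INFRA_SPEC = {
    "app": "support-automation",
    "web": {"count": 2, "size": "medium"},
    "db":  {"count": 1, "size": "large"},
}

def provision(spec):
    """Expand a declarative spec into a list of concrete resources."""
    resources = []
    for tier, cfg in spec.items():
        if tier == "app":          # metadata, not a tier to provision
            continue
        for i in range(cfg["count"]):
            resources.append(f"{spec['app']}-{tier}-{i} ({cfg['size']})")
    return resources

for r in provision(INFRA_SPEC):
    print(r)
# support-automation-web-0 (medium)
# support-automation-web-1 (medium)
# support-automation-db-0 (large)
```

Because the environment is derived from the spec, rebuilding or scaling it is a code change rather than a manual procedure.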

So DevOps consists of those three things. That's truly how we talk about and understand DevOps: collaboration, codification, and automation.

Having said that, there are other implications for the organization and for contingency planning. Those have a very profound impact on our IT organization. That's how we understand DevOps, and using this methodology, our thinking is to take it to the stakeholders and customers and show them the benefit that we're able to deliver for them. That's how we got buy-in and support from the get-go.

Gardner: Is speed the number one reason to do this, or is it quality or security? What is the biggest reward when you do this well?

Chen: Speed is probably the number one reason to go to DevOps. Of course the quality, high availability, and agility have significantly improved. But I would really focus on speed, because if you ask any business owner, business partner, or your customer today, the number one challenge for them is speed.

Early in our conversation, I mentioned automation. Traditionally, we did a release every six months, because it's so complicated, as you can imagine. We have products across storage, networking, and servers -- hardware and software. If we made platform changes, then covering all those customers, devices, and products required pretty much six months.

With a six-month cycle, products shipped to customers before the next release would not have the latest support automation capability.

The performance of our service quality has a significant impact on customer satisfaction. Now we're talking about a release every two weeks. That's a significant improvement, and you can see customers are happy, because with every product release they have the automation capability within two weeks. They immediately have the best monitoring and proactive care capability that we provide to our customers.

Bottom line

Gardner: I should think that also has an impact on the bottom line, because you're able to bring new features and functions to the market, add more value to the products, and then charge more money for them. So it allows you to get the value of your organization to your bottom line faster as well.

Chen: Yes. For example, we want to deliver any product or service that has a call-home capability, do the support automation, and proactively take care of it, within two weeks. It's a huge advantage for us, because the competition typically takes a few days to a couple of weeks just to install everything.

That two weeks is probably the optimal timing for this kind of service scheme. Can we push it to one week or a few days? It's possible, but the return on investment may not be there on day one.

For every application, when you make the call about DevOps, it's not about wanting to do it as fast as possible. You want to examine your business case and determine, "What's the sweet spot for us with DevOps?" In this particular case, looking at customer and business-partner feedback, we believe two weeks is the right spot for us. That's significantly better than what we used to have, every six months.
Gardner: I'm afraid we will have to leave it there. We've been learning how HPE’s internal engineering organization explores the values and benefits of DevOps methods and practices. I'd like to thank our guest, James Chen, Vice President of Product Development and Engineering at HPE. Thank you, James.

Chen: Thank you so much for having me.

Gardner: And I'd also like to extend a big thank you to our audience for joining this special DevOps innovation case study discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a BriefingsDirect discussion on how HPE finds the sweet spot for continuous development and delivery of software products. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.

Tuesday, March 08, 2016

    IoT Plus Big Data Analytics Translate into Better Services Management at Auckland Transport

    Transcript of a discussion on the impact and experience of using Internet of Things technologies together with big data analysis in a regional public enterprise.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

    Dana Gardner: Hello, and welcome to the next edition of the HPE Discover business transformation series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

    Our next top innovator case study discussion explores the impact and experience of using Internet of Things (IoT) technologies together with big data analysis to better control and manage a burgeoning transportation agency in New Zealand.

    To hear more about how fast big data supports rapidly-evolving demand for different types of sensor outputs -- and massive information inputs -- please join me in welcoming our guest, Roger Jones, CTO for Auckland Transport in Auckland, New Zealand. Welcome, Roger.

    Roger Jones: Thank you.

    Gardner: Tell us about your organization, its scope, its size and what you're doing for the people in Auckland.
    Start Your
    HPE Vertica
    Community Edition Trial Now
    Jones: Auckland Transport was formed five years ago -- we just celebrated our fifth birthday -- from an amalgamation of six regional councils. All the transport functions were merged along with the city functions, to form a super-city concept, of which transport was pulled out and set up as a council-controlled organization.

    But it's a semi-government organization as well. So we're funded by the government and the ratepayer, and we generate our own income as well.

    We have multiple stakeholders. We're run by a board, an independent board, as a commercial company.

    We look after everything to do with transport in the city: All the roads, everything on the roads, light poles, rubbish bins, the maintenance of the roads and the footpaths and the grass bins, boarding lights, and public transport. We run and operate the ferries, buses and trains, and we also promote and manage cycling across the city, walking activities, commercial vehicle planning, how they operate across the ports and carry their cargoes, and also carpooling schemes.

    Gardner: Well, that's a very large, broad set of services and activities. Of course a lot of people in IT are worried about keeping the trains running on time as an analogy, but you're literally doing that.

    Real-time systems

    Jones: Yeah. We have got a lot of real-time systems, and trains. We've just brought in a whole new electric train fleet. So all of the technology that goes with that has to be worked through. That's the real-time systems on the platforms, right through to how we put Wi-Fi on to those trains and get data off those trains.

    So all of those trains have closed-circuit television (CCTV) cameras on them for safety. It's how you get all that information off and analyze it. There's about a terabyte of data that comes off all of those trains every month. It's a lot of data to go through and work out what you need to keep and what you don’t.

    Gardner: Of course, you can't manage and organize things unless you can measure and keep track of them. In addition to that terabyte you talked about from the trains, what's the size of the data -- and not just data as we understand it, unstructured data, but content -- that you're dealing with across all these other activities?

    Jones: Our traditional data warehouse is about three terabytes, in round numbers, and from CCTV we take in about eight petabytes of data a week; that's what we're analyzing. That's from about 1,800 cameras out on the streets. They're in a variety of places, mostly at intersections, and they perform a number of functions.

    They're counting vehicles. Under the new system, we also want to count pedestrians and cyclists, and have cyclists activate the traffic lights. From a cycle-safety perspective, the new carbon-fiber bikes don't activate the magnetic loops in the roads. That's a bone of contention -- cyclists can't get the lights to change. We'll change all that using CCTV analytics, and promote it.

    But we'll also be able to count vehicles that turn right and track where they go in the city through number-plate recognition. By storing that, when a vehicle comes into the city, we'll be able to see whether it traveled through the city and its average length of stay.
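
The plate-matching idea can be sketched as follows -- a hypothetical Python example with made-up sightings, pairing each plate's first and last camera reads to estimate length of stay:

```python
# sightings: (plate, timestamp in minutes), in time order per plate.
def average_stay(sightings):
    """Mean dwell time in minutes across vehicles seen at least twice."""
    first, last = {}, {}
    for plate, t in sightings:
        first.setdefault(plate, t)   # keep the earliest read
        last[plate] = t              # overwrite with the latest read
    stays = [last[p] - first[p] for p in first if last[p] > first[p]]
    return sum(stays) / len(stays) if stays else 0.0

sightings = [("ABC123", 0), ("XYZ789", 5), ("ABC123", 60), ("XYZ789", 35)]
print(average_stay(sightings))   # 45.0
```

Real plate-recognition feeds would of course arrive as a stream with camera IDs and noise handling, but the aggregation step reduces to this kind of first-seen/last-seen pairing.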

    What we're currently working on is putting in a new parking system, where we'll collect all the data about the occupancy of parking spaces and be able to work out, in real time, the probability of getting a car parked on a certain street at a certain time. Then we'll be able to make that available to the customer, and especially to tradespeople, who need to be able to park to do their business.
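
A minimal sketch of that probability estimate, with invented street names and observations (not Auckland Transport's actual system): from recent occupancy readings per street and hour, estimate the chance that at least one space is free.

```python
from collections import defaultdict

def availability(observations):
    """observations: list of (street, hour, free_spaces).
    Returns P(at least one space free) per (street, hour)."""
    counts = defaultdict(lambda: [0, 0])   # (street, hour) -> [free_events, total]
    for street, hour, free in observations:
        stats = counts[(street, hour)]
        stats[0] += 1 if free > 0 else 0
        stats[1] += 1
    return {key: stats[0] / stats[1] for key, stats in counts.items()}

obs = [
    ("Queen St", 9, 0), ("Queen St", 9, 1), ("Queen St", 9, 0), ("Queen St", 9, 1),
    ("Albert St", 9, 3), ("Albert St", 9, 2),
]
probs = availability(obs)
print(probs[("Queen St", 9)])    # 0.5  -- half the readings had a free space
print(probs[("Albert St", 9)])   # 1.0  -- always had space in this sample
```

Served through an API, an estimate like this is what lets a driver (or a tradesperson's dispatch app) pick the street with the best odds before setting off.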

    Gardner: Very interesting. We've heard a lot about smart cities and bringing intelligence to bear on some of these problems and issues. It sounds like you're really doing that. In order for you to fulfill that mission, what was lacking in your IT infrastructure? What did you need to change, either in architecture or an ability to scale or adapt to these different types of inputs?

    Merged councils

    Jones: The key driver was that, having merged five councils, we had five different CCTV systems, for instance, watched by people manually. If you think about 1,800 cameras being monitored by maybe three staff at a time, it's obvious that they can't see what's happening in real time, and most public-safety events were being missed. The cameras were being used for reactive investigation rather than active management of a problem at a point in time.

    That drove us to rethink what we were doing around CCTV and the analytics: how we automate it and make it easy for operators to be presented, in real time, with the situation they need to manage now, so they can be proactive. That was the key driver.

    When we looked at that, and at all the other camera scenes around the city, we asked how we could put it all together, process it in real time, and make it available again -- to ourselves, to the police, to the emergency services, and to third-party application developers who can build their own applications using that data. It's of no value if it's historic.

    Gardner: So, a proverbial Tower of Babel. How did you solve this problem in order to bring those analytics to the people who can then make good use of it and in a time frame where it can be actionable?

    Jones: We did a scan, as most IT shops would do, of what could and couldn't be done. There's a mix of technologies out there, lots and lots of technologies. One of the considerations was which partner we should go with. Which one was going to give us longevity of product and association? You could buy a product today, and in the changing world of IT, in three years' time it could be out of business, bought out, or changed. We needed a brand that was going to be there for the long haul.
    Part of that was the brand, and there are multiple big brands out there. Did they have the breadth of the toolsets that we were looking for, both from a hardware perspective, managing the hardware, and the application perspective? That’s where we selected Hewlett Packard Enterprise (HPE), taking all of those factors into account.

    Gardner: Tell us a bit about what you're doing with data. On the front end, you're using a high-speed approach, perhaps in a warehouse, you're using something that will scale and allow for analytics to take place more quickly. Tell us about the tiering and the network and what you've been able to do with that?

    Jones: What we've done is taken a tiered approach. For instance, the analytics on the CCTV comes in and gets processed by the HPE IDOL engine. That strips most of it out. We integrate that into an incident management system, which is also running on the IDOL engine.

    Then, we take the statistics and the pieces that we want to keep and we're storing that in HPE Vertica. The parking system will go into HPE Vertica because it’s near real-time processing of significant volumes.

    The traditional data warehouse, a SQL data warehouse, is still very valid today, and it will be valid tomorrow. That's where we're putting a lot of the corporate information and tying a lot of the statistical information together, so that we have all the historic context around the real-time data, which was always the old data warehouse's role.

    Combining information

    We tie that together with our financials. A lot of smaller changing datasets are held in that data warehouse. Then, we combine that information with the stuff in Vertica and the Microsoft Analytics Platform System (APS) appliances to get us an integrated reporting at the front end in real time.

    We're making a lot of that information available through an API manager, so that whatever we do internally is just a service that we can pick up and reuse or make available to whoever we want to make it available to. It’s not all public, but some of it is to our partners and our stakeholders. It’s a platform that can manage that.

    Gardner: You mentioned that APS appliance, a Microsoft and HPE collaboration. That’s to help you with that real-time streaming, high velocity, high volume data, and then you have your warehouse. Where are these being run? Do you have a private cloud? Do you have managed hosting, public cloud? Where are the workloads actually being supported?

    Jones: The key workloads -- the CCTV, the IDOL engine, and Vertica -- are all running on HPE kit on our premises, but managed by HPE Critical Watch. That's almost an end-to-end HPE service; it just happens to be on our facilities. The rest is also on our facilities.

    The problem in New Zealand is that there aren't many private clouds that can be used by government agencies. We can’t offshore it because of latency issues and the cost of shipping data to and from the cloud from the ISPs, who know how to charge on international bandwidth.

    Gardner: Now that you've put your large set of services together, what are some of the paybacks that you've been able to get? How do you get a return on investment (ROI), which must be pretty sizable to get this infrastructure in place? What are you able to bring back to the public service benefits by having this intelligence, by being able to react in real time?

    Jones: There are two bits to this. The traditional data warehouse was bottlenecked. From an internal business perspective, take the processing of our integrated feed system, which was batch-driven: the processing window each night is around 4.5 hours, and processing the batch file took just over that.

    We were actually not getting the batch file processed until about 6 a.m. By that time, the service operators -- the bus operators, the ferry operators -- had already started work for the day. So they weren't getting yesterday's information in time to analyze what to do today.

    Using the Microsoft APS appliance we've cut that down, and that process now takes about two hours, end-to-end. So we have a huge performance increase. That means that by the time the operators come in, they have yesterday’s information and they can make the right business decisions.

    Customer experience

    On the public front, I'd put it back to the customer experience. If you go into a car park and have an incident with somebody there, your expectation is that somebody is monitoring it and will come to your help. Under the old system, that was not the case. It would be pure coincidence if that happened.

    Under the new scenario, from a public-perception standpoint, an alert will be raised, something will happen, and someone will come to you. So public safety has taken a huge step up. That has no direct financial ROI for us. It does across the medical and broader community spectrum, but for us as a transport agency, it has no true ROI, except for meeting customer expectations and perceptions.

    Gardner: Well, as taxpayers having expectations met, it's probably a very strong attribute for you. When we look at your architecture, it strikes me that this is probably something more people will be looking to do, because of this IoT trend, where more sensors are picking up more data. It’s data that’s coming in, maybe in the form of a video feed across many different domains or modes. It needs to be dealt with rapidly. What do you see from your experience that might benefit others as they consider how to deal with this IoT architectural challenge?

    Jones: We had some key learnings from this. That's a very good point. IoT is all about connecting devices. When we went from the old CCTV systems to the new one, we didn't actually understand that some of that data was being aggregated and lost forever at the front end, and that what was being received at the back end was only a snippet.

    When you start streaming data in real-time at those volumes, it impacts your data networks. Suddenly your data networks become swamped, or potentially swamped, with large volumes of data.

    That then drove us to think about how to put that through a firewall, and the reality is that you can't. The firewalls aren't built to handle that. We're running F5s; we looked at it, and they would not have handled the volume of CCTV traffic.

    So then you start looking at other things: how you secure your data and how you secure the endpoints. Tools for locking down your networks -- so that you understand what's connected, what's changed at the connection end, and what's changing in the traffic patterns on your network -- become essential to an organization like us, because there is no way we can secure all the endpoints.

    Now, a set of traffic lights has a full data connection at the end. If someone opens a cabinet and plugs in a PC, how do you know they have done that? That's what we have to protect against. The only way to do that is to know that something abnormal is there -- it's not the normal traffic coming from that area of the network -- and then flag it and block it off. That's where we're heading, because that's the only way we can see IoT working from a security perspective.
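
The "flag the abnormal device" idea reduces to baselining which endpoints normally appear on each network segment and alerting on anything outside that baseline. A hedged Python sketch, with all segment and device identifiers invented:

```python
# Known-good endpoints per network segment (in practice, learned from
# observed traffic rather than hand-maintained).
BASELINE = {
    "traffic-lights-segment": {"tl-controller-01", "tl-controller-02"},
    "cctv-segment": {"cam-0001", "cam-0002"},
}

def flag_anomalies(segment, seen_endpoints):
    """Return endpoints seen on `segment` that are not in the baseline."""
    known = BASELINE.get(segment, set())
    return sorted(set(seen_endpoints) - known)

# A laptop plugged into a traffic-light cabinet shows up as unknown:
alerts = flag_anomalies("traffic-lights-segment",
                        ["tl-controller-01", "rogue-laptop"])
print(alerts)   # ['rogue-laptop']
```

Production systems would baseline traffic patterns and volumes as well as identities, but the structure is the same: alert on deviation from what the segment normally looks like, rather than trying to secure every endpoint individually.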

    Gardner: Now Roger, when you put this amount of data to work, when you've solved some of those networking issues and you have this growing database and historical record of what takes place, that can also be very valuable. Do you expect that you'll be analyzing this data over historical time periods, looking for trends and applying that to feedback loops where you can refine and find productivity benefits? How does this grow over time in value for you as a public-service organization?

    Integrated system

    Jones: The first real payback for us has been the integrated ticketing system. We run a tag-on/tag-off electronic ticketing system. For the first time, we understand where people are traveling to and from, the times of day they're traveling, and, to a certain extent, the demographics of those travelers. We know whether they're a child, a pensioner, a student, or a normal adult user.

    For the first time, we're actually understanding not just where people get on, but where they get off and at what time. We can now start to tailor our messaging, especially for transport. For instance, if we have a special event -- a rugby game or a pop concert -- which may only be of interest to a certain segment of the population, we know where to put our advertising or messaging about the transport options for it. We can now tailor that to the stops where those people are at the right time of day.
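
As an illustration of how tag-on/tag-off records support that targeting, here is a small Python sketch with made-up trip data (the stop names and demographics are invented): find the busiest stops for a target demographic in a given time window.

```python
from collections import Counter

def top_stops(trips, demographic, hour_range, n=2):
    """trips: list of (stop, hour, demographic). Returns the n busiest
    stops for that demographic within hour_range (inclusive)."""
    lo, hi = hour_range
    counts = Counter(stop for stop, hour, demo in trips
                     if demo == demographic and lo <= hour <= hi)
    return [stop for stop, _ in counts.most_common(n)]

trips = [
    ("Britomart", 17, "student"), ("Britomart", 18, "student"),
    ("Newmarket", 17, "student"), ("Britomart", 17, "adult"),
    ("Newmarket", 9,  "student"),
]
# Where do students congregate in the late afternoon?
print(top_stops(trips, "student", (16, 19)))   # ['Britomart', 'Newmarket']
```

With real tag data, the same aggregation tells the agency which stops to target with event messaging for a concert or rugby game, and at what time of day.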

    We could never do that before, but from a planning perspective, we now have a view of who travels across town, who travels in and out of the city, how often, how many times a day. We've never ever had that. The planners have never had that. When we get the parking information coming in about the parking occupancy, that’s a new set of data that we have never had.

    This is very much about the planners having reliable information. And if we go through the license plate reading, we'll be able to see where trucks come into the city and where they go through.

    One of our big issues at the moment is that we have a link route that goes into the port for the trucks. It's a motorway. How many trucks use that versus how many take the shortcut straight through the middle of the city? We don't know that. We can do ad-hoc surveys, but soon we'll have that in real time, constantly, forever, and the planners can then use it when they are planning the heavy transport options.

    Gardner: I’m afraid we will have to leave it there. We have been learning about how big data, modern networks, and a tiered architectural approach has helped a transportation agency in New Zealand improve its public safety, its reaction to traffic and other congestion issues, and also set in place a historic record to help it improve its overall transportation capabilities.

    So I'd like to thank our guest, Roger Jones, CTO for Auckland Transport in Auckland, New Zealand. Thank you, Roger.
    Jones: Thanks very much.

    Gardner: And thank you, too, to our audience for joining us for this Hewlett Packard Enterprise transformation and innovation interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

    Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

    Transcript of a discussion on the impact and experience of using Internet of Things technologies together with big data analysis in a regional public enterprise. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.
