Thursday, November 19, 2015

Agile on Fire: IT Enters the New Era of 'Continuous' Everything

Transcript of a BriefingsDirect discussion on the concept of continuous processes around development and deployment of applications.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HPE Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next DevOps Thought Leadership Discussion explores the concept of continuous processes around the development and deployment of applications and systems. Put the word "continuous" in front of many things and we begin to define DevOps: continuous delivery, continuous testing, continuous assessment, and more.

To help better understand the continuous nature of DevOps, we're joined by two guests, James Governor, Founder and Principal Analyst at RedMonk, and Ashish Kuthiala, Senior Director of Marketing and Strategy for Hewlett Packard Enterprise (HPE) DevOps. Welcome to you both.

Ashish Kuthiala: Hi, glad to be here, Dana.

Gardner: Ashish, we hear a lot about feedback loops in DevOps between production and development, test and production. Why is the word "continuous" now cropping up so much? What do we need to do differently in IT in order to compress those feedback loops and make them impactful?

Kuthiala: Gone are the days when you would see version 2.0 come out in six months and 2.1 three months after that.

If you use some of the modern applications today, you never see that Facebook 2.0 is coming out tomorrow or that Google 3.1 is being released. They are continuously making improvements on the back end and rolling them out to users -- without the users even realizing that they're getting improvements, a better user experience, and so on.

In order to achieve that, you have to continuously be building those new innovations into your product. And, of course, as soon as you change something you need to test it and roll it all the way into production.

In fact, we joke a lot about how, if everything is continuous, we should just drop the word and call it planning, testing, or development, like we do today, and simply say that you do each of them continuously. But we tend to keep using this word "continuous" before everything.

I think a lot of it is to drive home the point, across IT teams and organizations, that you can no longer do this in chunks of three, six, or nine months -- you always have to keep doing it.

Governor: How do you do the continuous assessment of your continuous marketing?

Continuous assessment

Kuthiala: We joke about the continuous marketing of everything. "Continuous assessment," despite my objections to putting "continuous" in front of everything, is a term we've been talking about at HPE.

The idea here is that when software development and production teams start to collaborate well, they take the user experience, the bugs, and what's not working on the production end -- where the software is in users' hands -- and feed all of that back to the development teams.

When companies actually get to that stage, it's a significant improvement. It's no longer the support team telling you that five users were screaming at them today about this feature or that feature. It's the idea that you start to get this feedback directly from the users' hands.

We should stretch this assessment piece a little further. Why wait to assess the application until it's in the hands of the end users? The developers, the enterprise architects, and the planners design an application, and they know best how it should function.

Whether it's monitoring tools or the health and availability of the application, start to shift left, as we call it. I'd like James to comment more on this, because he knows the development space well. The developer knows his code best; let him experience what the user experiences.

Governor: My favorite example of this is that, as an analyst, you're always looking for nice metaphors and ways to talk about the world. One notion of quality I was very taken with came when I was reading about the history of shipbuilding and the roles and responsibilities involved in building a ship.

One of the things they found was that if you have a team doing the riveting separate from doing the quality assurance (QA) on the riveting, the results are not as good. Someone will happily just go along -- rivet, rivet, rivet, rivet -- and not really care if they're doing a great job, because somebody else is going to have to worry about the quality.

As they moved forward with this, they realized that you needed to have the person doing the riveting also doing the QA. That’s a powerful notion of how things have changed.

Certainly, with the notion of shifting left, more of the testing -- integration, load testing, whatever -- needs to happen earlier in the process. All of that testing needs to happen up front, and it needs to be something the developers are doing.

The new suite of tools we have makes it easier for developers to have better experiences around that, and we should take advantage.
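To make that concrete, here is a minimal sketch of what developer-owned, shift-left testing can look like in practice -- the riveting and the QA done by the same person. The `calculate_discount` function and its tests are hypothetical, purely for illustration:

```python
# A hypothetical business rule that the developer owns end to end.
def calculate_discount(order_total: float, is_member: bool) -> float:
    """Members get a 10 percent discount on orders of $100 or more."""
    if is_member and order_total >= 100:
        return round(order_total * 0.10, 2)
    return 0.0

# The same developer writes and runs these checks before the code
# ever leaves the workstation -- QA is not a separate, later stage.
def test_member_over_threshold():
    assert calculate_discount(200.0, is_member=True) == 20.0

def test_non_member_gets_no_discount():
    assert calculate_discount(200.0, is_member=False) == 0.0

def test_below_threshold():
    assert calculate_discount(50.0, is_member=True) == 0.0
```

Run with a test runner such as pytest on every commit, checks like these move quality to the point where the code is written rather than to a downstream QA gate.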

Lean manufacturing

One of the other things about continuous is that we're making reference to manufacturing modes and models. Lean manufacturing is something that led to fewer defects, apart from one catastrophic example to the contrary. And we're looking at that and asking how we can learn from that.

So lean manufacturing ties into lean startups, which ties into lean and continuous assessment.

What's interesting is that now we're beginning to see some interplay between the two and paying that forward. If you look at GM, they just announced a team explicitly looking at Twitter to find user complaints very early in the process, rather than waiting until 10,000 people were affected before doing a recall.

Last year was the worst year ever for recalls in American car manufacturing, which is interesting, because if we have continuous improvement and everything, why did that happen? They're actually using social tooling to try to identify problems early, so that they can recall 100 cars or 1,000 cars, rather than 50,000.

It’s that monitoring really early in the process, testing early in the process, and most importantly, garnering user feedback early in the process. If GM can improve and we can improve, yes.

Gardner: I remember in the late '80s, when the Japanese car makers were really kicking the pants out of Detroit, that we started to hear a lot about simultaneous engineering. You wouldn't just design something; you designed for its manufacturability at the same time. So it's a similar concept.

But going back to the software process, Ashish: we see a level of functionality in software that needs to be rigorous about security and performance, but we're also increasingly seeing the need for user-experience features and functions that we can't even guess at in advance -- ones we need to put into the field and see what happens.

How does an enterprise get to the point where it can deliver software so rapidly that it's willing to take a chance and put something out to the users, perhaps a mobile app, and learn from its actual behavior? We can get the data, but we have to change our processes before we can utilize it.

Kuthiala: Absolutely. Let me be a little provocative here, but I think it's a well-known fact that the era of the three-year, forward-looking roadmap is gone. It's good to have a vision of where you're headed, but predicting which feature or function you'll release in which month so that users will find it useful -- I think that's just gone. Hence the concept of the minimum viable product (MVP), which more startups take off with, trying to build the product and fund themselves as they gain success.

It's an approach that even bigger enterprises need to take. You don't know what the end users' tastes are.

I change my tastes in the applications I use, the user experience I get, and the features and functionality. I'm always looking at different products, and I change my mind quite often. But if I like something and it always delivers the right user experience for me, I stick with it.

Capture the experience

The way for an enterprise to figure out what to build next is to capture this experience, whether it's through social media channels or by engineering your code so that you can figure out what the user behavior actually is.
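As a rough sketch of what "engineering your code" for this kind of feedback might look like, here is a minimal, hypothetical example of emitting a usage event whenever a feature is exercised; the event fields and the print-based sink are stand-ins for a real analytics pipeline:

```python
import json
import time

def send_to_analytics(event: dict) -> None:
    # Hypothetical sink: a real system would post this to an
    # analytics pipeline; printing stands in for that here.
    print(json.dumps(event))

def track(feature: str, user_id: str, **details) -> None:
    """Record that a user exercised a feature, with a timestamp."""
    send_to_analytics({
        "feature": feature,
        "user_id": user_id,
        "ts": time.time(),
        **details,
    })

# Called wherever the feature is used; aggregated counts later tell
# the team which features users actually touch -- and which to drop.
track("export_report", user_id="u-123", format="pdf")
```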

The days of business planners and developers sitting in cubicles, thinking, "This is the coolest thing I'm going to invent and roll out," are over. You definitely need that for innovation, but you need to test it fairly quickly.

Also gone are the days of rolling something back when it doesn't work. If you can deliver software really quickly into the hands of end users, you just roll forward. You don't roll back anymore.

It could be a feature that's buggy. So go and fix it, because you can fix it in two days or two hours, versus the three- to six-month cycle. If you release a feature and you see that most users -- 80 percent of them -- don't even bother with it, turn it off and introduce the new feature you were thinking about.
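The roll-forward, turn-it-off approach usually rests on feature flags. Here is a minimal sketch, with a hypothetical in-memory flag store standing in for the configuration service a real deployment would use:

```python
# Hypothetical in-memory flag store; real deployments read flags
# from a config service so they can change without a redeploy.
FLAGS = {"new_checkout_flow": True}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def new_checkout(cart: list) -> str:
    return f"new flow: {len(cart)} items"     # the feature under assessment

def legacy_checkout(cart: list) -> str:
    return f"legacy flow: {len(cart)} items"  # the proven path

def checkout(cart: list) -> str:
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["book", "pen"]))   # new flow: 2 items

# If usage data shows 80 percent of users ignore the new flow,
# flip the flag off -- no rollback and no redeploy needed.
FLAGS["new_checkout_flow"] = False
print(checkout(["book", "pen"]))   # legacy flow: 2 items
```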

This assessment from development, testing, and production that you're always doing starts to benefit you when you're standing up for that daily sprint and wondering which three features you're going to work on as a team -- whether it's the two things your CEO told you that you absolutely have to do because "it's the greatest thing since sliced bread," the developer saying, "I think we should build this feature," or some use case coming out of the business analysts or enterprise architects.

Now you have data. You have data across all these teams. You can start to make smarter decisions and you can choose what to build and not build. To me, that's the value of continuous assessment. You can invest your $100 for that day in the two things you want to do. None of us has unlimited budgets.

Gardner: For organizations that grok this, that say, "I want continuous delivery. I want continuous assessment," what do we need to put in place to actually execute on it to make it happen?

Governor: We've spoken a lot about cultural change, and that’s going to be important. One of the things, frankly, that is an underpinning, if we're talking about data and being data-driven, is just that we have wonderful new platforms that enable us to store a lot more data than we could before at a reasonable cost.

There were many business problems that were stymied by the fact that you would have had to spend the GDP of a country to do the kind of processing required to truly understand how something was working. If we're going to model the experiences, if we're going to collect all this data, then thinking about the infrastructure that lets you analyze the data is going to be super important. There's no point in talking about being data-driven if you don't have an architecture for delivering on it.

Gardner: Ashish, how about loosely integrated capabilities across these domains -- test, build, requirements, configuration management, and deployment? It seems that HPE is really at the center of a number of these technologies. Is there a new layer or level of integration that can help accelerate this continuous-assessment capability?

Rich portfolio

Kuthiala: You're right. We have a very rich portfolio across the entire software development lifecycle. You've heard about our Big Data Platform. What can it really do? James just referred to this: it's cheaper and easier to store data with the new technologies -- structured, unstructured, video, social, and so on -- and you can start to make sense of it when you put it all together.

There is a lot of rich data in the planning and testing processes and all the different lifecycle stages. A simple example is a technology we've worked on internally. When you start to deliver software faster, you change one line of code and you want it to go out -- but you really can't afford to run the 20,000 tests you think you need to, because you're not sure what that change will affect.

We've actually had data scientists working internally in our labs, studying the patterns, looking at the data, and testing concepts such as intelligent testing. If I change this one line of code, even before I check it in, what parts of the code does it really affect, and what functionality? Does it affect all the regions of the world, all the demographics? Which feature or function does it touch?

It helps you narrow down whether the change will break the code and whether it will affect certain features and functions of the application that's out there. It narrows things down and helps you say, "Okay, I only need to run these 50 tests, not those 10,000, because I need to get through this test cycle fast and still have confidence that the change won't break something else."
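HPE hasn't published the internals here, but the underlying idea can be sketched simply: record which modules each test exercises, then run only the tests whose modules a change touches. The module and test names below are invented for illustration:

```python
# Hypothetical coverage map, typically mined from earlier test runs:
# which source modules each test actually exercises.
TEST_COVERAGE = {
    "test_login":        {"auth.py", "session.py"},
    "test_checkout":     {"cart.py", "payment.py"},
    "test_user_profile": {"auth.py", "profile.py"},
}

def select_tests(changed_files: set) -> list:
    """Return only the tests whose covered modules were touched."""
    return sorted(
        test for test, modules in TEST_COVERAGE.items()
        if modules & changed_files
    )

# A one-line change to auth.py triggers two targeted tests,
# not the entire suite.
print(select_tests({"auth.py"}))  # ['test_login', 'test_user_profile']
```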

So it's a cultural thing, like James said, but the technologies are also helping make it easier.

Gardner: It's interesting. We're borrowing concepts from other domains here as well -- just-in-time testing, fit-for-purpose testing, or lean testing?

Kuthiala: We were talking about Lean Functional Testing (LeanFT) at HP Discover. I won't talk about the product here, but the idea is exactly that. The developer, as James said, knows his code well. He can test it well beforehand, rather than throwing it over the wall and letting another team take a shot at it. It's his responsibility: if he writes a line of code, he should be responsible for its quality.

Gardner: And it also seems that the integration across this continuum can really be the currency of analysis. When we have data and information made available, that's what binds these processes together, and we're starting to elevate and abstract that analysis and make it into a continuum, rather than a waterfall or hand-off type of process.

Before we close out, any other words that we should put in front of continuous as we get closer to DevOps -- continuous security perhaps?

Security is important

Kuthiala: Security is a very important topic, and James and I have talked about it a lot with some other thought leaders. Security is just like testing. Anything you catch early in the process is a lot easier and cheaper to fix than something you catch in the hands of end users, when it's already deployed to tens of thousands of people.

It's a cultural shift. The technology has always been there. There's a lot of technology, within and outside of HPE, to help you incorporate security testing and that discipline right into the development and planning process, rather than leaving it until the end.
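As one small illustration of shifting security left, here is a hedged sketch of a developer-level test that a data-access helper resists SQL injection; the `find_user` function and the table layout are hypothetical:

```python
import sqlite3

def find_user(conn, name: str):
    # Parameterized query: user input is bound, never concatenated
    # into the SQL text, which closes the classic injection hole.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

def test_injection_attempt_returns_nothing():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # A classic injection payload should match zero rows instead of
    # dumping the whole table.
    assert find_user(conn, "' OR '1'='1") == []

# Run alongside ordinary unit tests, long before production.
test_injection_attempt_returns_nothing()
```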

In terms of another continuous word -- I mean, I could come up with the continuous Dana Gardner podcast.

Governor: There you go.

Gardner: Continuous discussions about DevOps.

Governor: One of the things that RedMonk is very interested in, and it's really our view in the world, is that, increasingly, developers are making the choices, and then we're going to find ways to support the choices they are making.

It was very interesting to me that the term continuous integration began as a developer term, and then the next wave of that began to be called continuous deployment. That's quite scary for a lot of organizations. They say, "These developers are talking about continuous deployment. How is that going to work?"

The circle was squared when somebody came in and said that what we're talking to customers about is continuous improvement -- which, of course, is a term we again first saw in manufacturing.

But the developer aesthetic is tremendously influential here, and this change has been driven by developers. My favorite "continuous" is a great phrase -- continuous partial attention -- which is the world we all live in now.

Gardner: I'm afraid we will have to leave it there. We've been exploring the concept of continuous processes around development and deployment of applications.

I'd like to thank our guests, James Governor, Founder and Principal Analyst at RedMonk, and Ashish Kuthiala, Senior Director of Marketing and Strategy for HPE DevOps.

I'd also like to thank our audience for joining this special DevOps Thought Leadership Discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a BriefingsDirect discussion on the concept of continuous processes around development and deployment of applications. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.
