
Sunday, August 04, 2024

How The Open Group Portfolio of Digital Open Standards Supports Your Digital Business Transformation Journey

Transcript of a discussion on how a digital portfolio of standards and methods instructs innovation internally to match the demands of a rapidly changing, increasingly competitive, and analytics-intensive global marketplace.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

 

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next enterprise architecture (EA) discussion explores how a comprehensive portfolio of open standards and associated best practices powerfully supports digital business transformation.



As companies chart a critical course to adopt agility using artificial intelligence (AI)-driven benefits, they need proven and actionable structure to help deepen customer relationships, improve internal processes, and heighten business value outcomes.

 

Stay with us now as we explore how The Open Group Portfolio of Digital Open Standards instructs innovation internally to match the demands of a rapidly changing, increasingly competitive, and analytics-intensive global marketplace.

 

Here to explore how to strategically architect for ongoing disruption and innovation is our expert guest. Please join me in welcoming Sonia Gonzalez, Digital Portfolio Product Manager at The Open Group. Welcome, Sonia.

 

Sonia Gonzalez: Thank you very much, Dana, for having me here. I’m really happy to talk about this exciting topic.

 

Gardner: Yes, it’s a very interesting time. There are lots of interesting and relevant topics to dig into. Sonia, what are the latest trends and catalysts propelling the need for a full portfolio of standards to better attain digital transformation?

Gonzalez: Digital transformation is something that can completely change your company. It’s a process, a journey. But to do that, you need to start from the top. Meaning from a strategy; you need to have a digital strategy. Because you must change your business and operational models, you need to build new capabilities – and not just technical resources or technologies, but also the people. You need to train your people to pursue innovation and to create your business around customer centricity.

Of course, you need to take advantage of new technologies and trends, such as AI and the metaverse, and address cybersecurity and cybercrime, which are threats right now. You also must leverage the new power of computing in data processing, data analytics, and machine learning (ML).

 

If you don’t take advantage of all of that, then you’re going to be left behind. That’s why, a few years ago, we started this journey to form the Portfolio of Digital Open Standards as a collection of practices that allows any company either to start the process, if they are new to this, or to maintain and especially sustain an already existing digital transformation effort.

That’s the reason we need the portfolio. We need different perspectives. It’s not only the technology, not only the people -- but it’s also everything that surrounds your company.

Gardner: Sure. And, I suppose, we used to look for silos within an organization with areas of expertise and would develop standards that might pertain specifically to them. But nowadays it’s important to have cross-pollination, if you will, and to look at standards not only in an isolated part of a company, but across them. We seek ways for them to better support each other.

 

Please explain why it is so important to seek a full portfolio of standards that are, if not integrated, complementary.

 

Digital change demands diversification

 

Gonzalez: Yes. The key words here are synergy, cross-collaboration, and consistency. All of our standards are very powerful on their own. For example, we have the TOGAF® Standard, which is one of the major recognized standards for Enterprise Architecture (EA). We have the Open Agile Architecture™, a standard of The Open Group, for agility; the Digital Practitioner Body of Knowledge (DPBoK™) for digital; the ArchiMate® Standard for modeling in EA; and the IT4IT™ Standard for IT processes and digital products.

 

But if all of them are used together in a consistent way, they become more powerful. They are greater than the sum of their parts. They provide a full and more systemic view of your issues.

 

For example, to make digital transformation possible, you need EA because you need to know and understand your capabilities, your landscape, and your actual measured capabilities to identify the gaps. But you also need to be able to manage digital products. And for that we have the IT4IT™ standard.

We need to pursue more agility, not only in the development or in the technical processes, but on the business as a whole. You need to become an agile enterprise.

We need to pursue more agility, not only in development or in the technical processes, but in the business as a whole. So, you need to become an agile enterprise, and for that we have the Open Agile Architecture Standard. The TOGAF Standard also provides guidance on agility.

 

If you want to have more customer-centricity and a learning progression toward digital, we have the DPBoK™, which helps you understand how to start very small, determine what you’re learning in the process, and keep evolving your capabilities.

 

Those are the main aspects to understand, and the benefit of having this portfolio. It’s not only that the content is delivered in the same channel, which is completely HTML; you can cross-navigate between the standards, and you can run searches that return content from the different standards. We have also added some graphic icons, because, as you know, people need very rapid solutions to their problems now.

 

Last April in Edinburgh, as part of our summit, we released a new version of the Portfolio of Digital Open Standards. If you click, for example, on an agile practice, you will immediately see content from the Open Agile Architecture, and from there you can navigate to the TOGAF Standard or to the IT4IT Standard. That way, as a user, you learn from different practices, because at the end of the day it doesn’t matter whether you are using the TOGAF Standard, the Open Agile Architecture, or the DPBoK™. What you want and need is a solution to your problem, and you find that through the synergy in the portfolio.

 

Gardner: The Open Group is a long and venerated organization going back to the standardization around the UNIX® platform back in the day. We don’t need to go that far back, but I would like to put a little context around how the whole concept of a digital portfolio working as a concerted effort among different disparate parts came about. Can you give us some background and history as to how the digital portfolio came about and where it is now?

 

How we got here

 

Gonzalez: Actually, that’s an interesting story. It started back around 2015. I remember being at one of our events, in an internal meeting, where people from a few member companies started this activity. At the time, I was the Architecture Forum Director, but as you know, architecture is closely connected with digital. The first name of this work group was not the Digital Portfolio Workgroup, as it is right now; it was something like a digital business and customer experience work group. It was a very long name, and the idea was to combine digital with customer centricity.

 

So, we started working on that. In the beginning it was a very small, closed group of members. Then a decision was made to make it a work group in order to allow members from different Forums, such as Architecture, ArchiMate, and IT4IT, to engage. We started to grow and grow, and we delivered a couple of white papers that have been published and are very good. Eventually, we decided to change the name to the Digital Practitioner Workgroup. At a certain point, around four or five years ago, it was also led by a subgroup of platinum members at the board level. They said, okay, we need to move this forward: besides having content that is related to digital, we need to start providing this content as a digital product.

So, we started the process to deliver standards as code. That’s why this small group of members started this work initiative, like I mentioned -- it was led by one of our staff members. The idea was to take all this content, put it in an open-source platform called GitLab, and produce the output with another open-source tool called Antora, which takes content written in a markup language (AsciiDoc) from different GitLab repositories and generates HTML-based output designed for cross-navigation between content, which is one of the main principles of the portfolio. They also started building the first graphic icons and the graphical interfaces.
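As a rough illustration of the standards-as-code idea described here (this is not The Open Group's actual tooling; the repository layout and file conventions below are assumptions), the sketch scans local clones of several standards, collects their AsciiDoc anchors and cross-references, and builds the kind of cross-navigation index that a generator such as Antora resolves when producing HTML output:

```python
# Minimal sketch of a "standards as code" cross-reference index.
# Assumptions: each standard lives in its own local clone under repos/<standard-name>/
# and its content is written in AsciiDoc (*.adoc). Illustrative only -- not the
# actual Open Group build pipeline.
import json
import re
from pathlib import Path

REPOS_DIR = Path("repos")                        # hypothetical layout: repos/togaf, repos/it4it, ...
ANCHOR_RE = re.compile(r"^\[\[([\w:-]+)\]\]")    # AsciiDoc block anchors, e.g. [[adm-phase-a]]
XREF_RE = re.compile(r"<<([\w:-]+)")             # AsciiDoc cross-references, e.g. <<adm-phase-a>>

def index_standard(repo: Path) -> dict:
    """Collect the anchors a standard defines and the cross-references it makes."""
    anchors, xrefs = set(), set()
    for doc in repo.rglob("*.adoc"):
        for line in doc.read_text(encoding="utf-8").splitlines():
            anchors.update(ANCHOR_RE.findall(line))
            xrefs.update(XREF_RE.findall(line))
    return {"anchors": sorted(anchors), "xrefs": sorted(xrefs)}

def build_cross_navigation(repos_dir: Path) -> dict:
    """Map every cross-reference in each standard to the standard that defines its target."""
    per_standard = {r.name: index_standard(r) for r in repos_dir.iterdir() if r.is_dir()}
    owner = {a: name for name, data in per_standard.items() for a in data["anchors"]}
    return {
        name: {ref: owner.get(ref, "unresolved") for ref in data["xrefs"]}
        for name, data in per_standard.items()
    }

if __name__ == "__main__":
    if REPOS_DIR.exists():
        print(json.dumps(build_cross_navigation(REPOS_DIR), indent=2))
```

In a real pipeline, a generator like Antora resolves these references while building the HTML; the index above simply makes the cross-navigation principle concrete.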

At that moment, it was a work activity led by members. Eventually, around two years ago, it was decided that it would become part of the staff activity, with content maintained by members in the Digital Practitioner Workgroup. I moved from the Architecture Forum and was named the Digital Product Manager for the portfolio. We decided to change the name to the Digital Portfolio Workgroup because we no longer maintained only the DPBoK™ Standard; we maintained the entire environment, pursued cross-collaboration, and tried to engage members from different Forums.

The idea was to start taking all this content and put it in an open-source platform. ... designed for cross-navigation to deliver open standards.

The task now has three main components. One is the content, which is provided for our members. The second is the platform on which we produce and deliver our standards as digital products. And the third is delivering standards as code, also using GitLab and Antora, facilitating a more agile, DevOps-oriented way of delivering standards to the market, which I believe is something innovative. I don’t believe any other standards organization is going that deeply into standards as code. It might be the case, but we are among the first to start this activity.

 

The story is that we went from being a very small work group that then became more active, and then it became something that was driven by staff and with the collaboration of members in the work group.

 

We have already delivered four releases. The first was in Edinburgh, in October 2022. Since then, we have been adding new standards and new guides to the portfolio, because the portfolio is meant to hold standards, snapshots, and guides. We have also been improving existing landing pages and creating new ones. For example, we added the ArchiMate specification, along with its landing page, in the October 2023 release in Houston.

 

For the release we had this year in Edinburgh, we improved the landing page for the IT4IT™ Standard, which is called the Lifecycle, and it also has some links to the DPBoK™. We made some improvements to the general user interface and, very importantly, we received feedback from members saying they need case studies: we have guides and standards, but people want to see how others have made this work in a practical way. So, we started talking with members of the Government EA Work Group and the ArchiMate Forum, because they also have several case studies.

 

So, we constructed another instance in GitLab and Antora, called the case study collection. It is meant to hold case studies from different verticals, and it is connected with the portfolio. In the most recent release, I gave a demo in which you click on one of the icons in the portfolio and are directed to the second instance, where the case studies live. From there, you can go to categories such as government, healthcare, or banking and finance.

You can also go to physical domains, such as mechanical construction, where we have related case studies available. So far, we have migrated five case studies: two from ArchiMate®, ArchiSurance and ArchiMetal, and the others from the Government EA Work Group. We also have a very long queue of requests for more content to be added: more guides and more case studies.

Also, at the moment, the connection with the TOGAF Standard is through the TOGAF Standard 10th Edition. We have started the migration to also add the whole set of the TOGAF Standard into the portfolio, which is the highest priority for the rest of the year. Our objective is to have an incremental, ongoing improvement of the portfolio. So that’s where we are right now.

 

Gardner: Wonderful. Thank you for that comprehensive overview. Let’s go up a few thousand feet and start to talk about why this is so important. Everybody agrees that digital transformation is essential and integral to their success, but not very many people agree on how to go about it. As organizations are facing the need to transform and consider more of the benefits from ML and other analytics technologies, what are the challenges? What prevents people from being able to transform and innovate in their organizations as quickly and as powerfully as they’d like?

 

Where to develop your digital strategy

 

Gonzalez: I think one of the main challenges is knowing where you are; that’s why EA is such an important pillar in this. If you don’t know what you have, it is impossible to chart a feasible path toward becoming digital. You need to understand your current state. Do you have a digital strategy? Do you have a strategy at all? Is it clear? Has it been shared? Once you have your strategy, you can start building a digital strategy. You start considering: okay, I already have these processes, these business lines, these products. It doesn’t matter whether you are a company that is completely in the digital product business; this is applicable to all companies. We had a very interesting case, last week in Edinburgh, about banking in Turkey. For that digital transformation, they first identified their current state and then identified the gaps, again following EA principles. So, start small and grow incrementally.

 

Identify an area that is less difficult to digitize, digitalize, and transform; deliver the first outcome; and then continue to iterate incrementally. For example, if you are a company having issues with the supply chain, you make a value stream assessment and a value chain assessment and start identifying what you need to digitize there. To digitize is to put your information in digital format.

 

And sometimes companies don’t even have that. They may have some information in digital format, but it’s not consolidated. So, data analytics and data transformation are among the first steps. After you have digitized, you need to start making the digital transformation with your processes and your capabilities, meaning your people, your applications, and your infrastructure. You need to make an assessment of that.

When you talk about digital transformation, it's not a one-time effort, it's a continuous and ongoing journey. You start doing it incrementally. 

For example, you need to ask yourself: Do I have the right capabilities for this? If I need to start having a manual process become completely automated and put it in one of our channels to become more customer facing, I need to improve the process. I need to train the people, probably hire new people. I need to automate, and I need to build a new channel for that, which of course comes with application and infrastructure layers. It’s a step-by-step process that starts with identifying where you are, where you want to be, and how to get there incrementally.

 

That’s why, when you talk about digital transformation, it’s not a one-time effort; it’s a continuous and ongoing process, a journey, and you need to do it incrementally. In the case of the bank mentioned before, they started by putting the data in order. They selected a critical process and improved it, then they improved the channels, then they did more things digitally, and in a later stage they started taking advantage of other technologies, such as AI, in their channels. You need to start with the simpler things first, because otherwise, if you just change things without connection, alignment, impact assessment, and a systemic view, you will be lost and the transformation effort will fail.

 

For example, people think: okay, there’s a new technology coming, I will implement it. They don’t take into consideration the current infrastructure, the current applications, the current capabilities of the people they have, whether they have legacy systems, or whether their people are trained. If they implement new technology for the sake of implementing it, they create technical debt, unnecessary risks, and even potential security breaches.

That’s why, again, risk assessment is another key component. If you are transforming but are not aware of the risk, you are creating another issue for your company. It’s a step-by-step process. That’s why I always say it’s very similar to EA, only in this case the customer is at the center and the focus is on going digital. That’s the difference.

Gardner: Right. And certainly, digital transformation undertakings can swiftly become very complex and unwieldy. But when you apply structure, pragmatism, and documentation, making sure that everyone is on the same page and collaborating accordingly, that complexity becomes much more manageable. So, as organizations use such things as The Open Group digital portfolio, what are some of the most salient and important benefits they’ll start to see?

 

Things that are perhaps a little intangible, difficult to measure, but nonetheless very important. What do you get when you do this right?

 

Measure your processes’ progress

 

Gonzalez: I think measurement is perhaps one of the more difficult things. For example, one thing that can be measured is whether you are delivering a product in a certain amount of time. You improve, digitize, and digitalize to make your process better. You used to spend two hours on that; now you spend an hour and a half. That’s a metric. Another is how efficient your people are becoming, because one of the conditions for becoming digital is to transform your organizational structure.

 

If you have a structure that is too hierarchical, with a lot of levels, and people don’t have the freedom to act, it is very difficult to become agile or digital. But when you have multidisciplinary autonomous teams, each with ownership of a specific part of the company, a product, or a product line, and with cross-collaboration between them, you can start measuring what they are doing.

If you have a structure that is too hierarchical with a lot of levels and people don't have the freedom to act, it is difficult to become agile or digital. When you have autonomous teams, you can measure what they are doing.

Similar to the earlier scenario, you might have a one-, two-, or three-day cycle, then do a retrospective review and re-measure; that’s what you need to do to start. It would not be a good approach to run for three months without measuring what you’re doing. It should be shorter, smaller cycles. And again, you need to create these autonomous teams. In the O-AA™ Standard, there’s a lot of very good content about autonomous teams and the fact that we sometimes organize our capabilities, whether physical or human resources, following the same structure as the company.

 

So, they become very silo-oriented and very functional. But consider the moment you start looking at how to deliver a new product to the market. Let’s say it’s a financial product, a new kind of loan. I need the person who manages loans, the person who manages legal aspects, and the person who will handle the insurance policies related to that loan.

 

Of course, I need IT people, and I need data people. So, when I bring this group of people together and they become autonomous, with a certain level of control of course, they should be able to deliver this product across the finish line more easily and faster than in a very hierarchical structure, in which you need approvals and every area only owns and sees its own piece of work and not the whole thing. You need to own your process, and you need to own your product, if you really want to succeed at this.

 

Gardner: And of course, the proof of the pudding is in the eating, and you mentioned earlier that you’re now focusing on some case studies, which I imagine illustrate the important use cases. It’s another way of understanding the benefits of a digital portfolio and EA and a strategic approach to transformation. You don’t have to go into too much detail because I’m sure people will be able to access and review these case studies on their own, but maybe you could go through a few of the new case studies and why these use cases were so salient to begin with.

 

Case-by-case connections

 

Gonzalez: I think they are important. Some of them, especially the ones from the Government EA Work Group, address actions taken by the Indian government. They took on a project to improve certain areas of government in order to offer better services to citizens, and they started improving the processes and applications around them. In the end, they measured where they started, state A, and where they ended up in terms of citizen satisfaction. There was a measure for that.

There was another very interesting one, but we haven’t migrated it yet. It is about the use of digitalization and digital information to improve the COVID vaccination process. At the beginning it was very messy and, as you know, it was a health issue related to people’s lives; it was critical. It began with some initial automation. Then they created a portal in which people could make an appointment for their vaccinations, with details such as how to get there, how to get their vaccine, and how to access and update their medical records.

Had they had COVID before, and what were the symptoms? Which dose might they need? The process spread to different areas of the country by starting small and then growing and growing. In the end, even though the COVID emergency has now lessened, they decided to keep the project going in stages and use it in other parts of the healthcare system, because healthcare is one of the sectors that needs this most.

 

You know, sometimes healthcare systems are inefficient because they are not connected. You don’t connect your medical supplies with your patients or with your healthcare centers. Sometimes some health centers will have medicine, but some others will not have it, and that’s also another case study that may be coming soon, probably in the next month, into the healthcare-related section of the portfolio.

 

As you know, we have a Healthcare Forum in The Open Group. They maintain a Reference Architecture, which is being improved, and we have a case study from a hospital. They are working on how to better connect the different elements in their value chains, such as healthcare centers, providers of medicine or medical equipment, patients, patient records, and all that. This content might be incorporated into the portfolio later this year.

Sometimes healthcare systems are inefficient because they are not connected. Now they are working to better connect different elements throughout their value chains. 

In another case study, they implemented an Enterprise Service Bus (ESB) to improve their processes. They had a process whose services were not really being reused, so they improved the ESB to provide better service to their citizens.

 

As I said, all of the case studies are on our website, and I invite you to look them over. All of them are very interesting, and they describe the benefits of using The Open Group standards. They also mention other practices, because we cannot pretend that we, as The Open Group, hold all the answers. There are other standards in the market that are valuable and important. That’s why, whenever we refer to a third party in our standards or publications, we always ask for legal permission. Those case studies also have connections to external sources, which are fully reachable from the case studies.

 

Gardner: Right. And so, looking to the future, digital transformation now is perhaps under pressure as organizations adjust to more AI, analytics, and data-driven decision-making. It seems that there’s pressure on organizations to move quickly. If their competitors do AI better than they do, they could find themselves at a disadvantage.

 

As we look to the future of how digital transformation unfolds and how the digital portfolio can be instrumental in accelerating organizations’ objectives and success, how is AI impacting this equation? What should we expect over the next several years in terms of how organizations can make AI a friend rather than a foe?

 

Put AI into the proper context

 

Gonzalez: Yes, that’s a very good question. Actually, that was one of the main themes at our event in Edinburgh, and we had very good presentations there. I think AI has two different edges, like you said. The good one is that it can be very powerful if used properly, but you need to be aware that it’s not magic. If you have acquired an AI tool or partnered with someone who provides that tool, you need to be aware that you first need the right data, the right information, to feed into it. You need to train the tool, because it works from patterns, and you need to do some programming to have the AI really give you the right response. The tool also needs the right context for the data you are feeding into it so that you receive consistent output.

 

You need to make a specific request, give the tool some training, invest some time, and feed data into it so that the AI can serve several purposes. It can be used just for cognitive analysis, which is useful on its own. It can be very helpful for decision-making support, and then there is the other side of that, which is generative AI, where it generates new content from your input.

 

So, you put in text, images, or content in whatever form, and it should be able to create a video, a story, a summary, a report, or whatever kind of output you need. But to do that, of course, there are some very critical capabilities you need to build. That’s why, again, if you are in the middle of your digital transformation and you want to include AI, you need to be sure that you have those capabilities.

 

For example, critical questions to ask are: Do I have the right data to feed into that AI tool? Do I have the right texts, images, and content? Because you need to give context. Even with a simple ChatGPT on your phone, if you ask a question and don’t give the context, it will provide a very inadequate response. You need to give context. Sometimes the AI itself will tell you, “I need more context.” So, you need to provide the context.
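As a trivial illustration of that point about context (the wording and scenario below are invented and tied to no particular AI product), compare a bare question with the same question wrapped in context:

```python
# Illustrative only: the same question asked with and without context.
# No specific AI product or API is implied; these are plain strings.
question = "Which standard should we start with?"

bare_prompt = question  # likely to produce a generic or inadequate answer

context = (
    "We are a mid-size bank beginning a digital transformation. "
    "We already use the TOGAF Standard for enterprise architecture "
    "and want to improve our customer-facing digital products."
)
contextual_prompt = f"Context: {context}\n\nQuestion: {question}"

print(contextual_prompt)
```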

You need an algorithm in that AI tool that can interpret the data you’re giving it, predict (because these tools learn from experience), and act on that; and you also need to be able to assess the output.

The output could be a success, but it could also be a failure. It’s not a matter of “the AI told me this, so I’m going to use it.” You also need to keep adjusting the algorithm it uses, the learning process, the outcomes, and the analytics.

 

The other very important thing is to understand that there are different components around AI. We have ML, and we have neural networks, which are the way this information is processed and how the system is able to learn from past experience.

 

There’s another one, natural language processing (NLP), which, by the way, is the focus of one of the more active working groups in The Open Group. All of this requires cognitive computing, so we need to be aware that you need resources and infrastructure for it.

 

And now that we are thinking about sustainability, AI is going to become one of the threats to it.

 

We had another very good workshop in Edinburgh about the carbon footprint. Even though AI can be good, as all technology can be, its footprint is going to grow exponentially, because AI requires a lot of technical resources and power. It needs a lot of computing to generate the output, especially if you’re going to generate complex things such as a video, a song, or any other kind of generative content.


And of course you need to have a vision. Why do I need AI in the first place? Do I really need it, or is it because it’s in fashion and everybody is talking about it? You need to have the right capabilities for it. For example, you have to have an AI policy. We already have an AI policy at The Open Group that we have shared with our members. Confidentiality is one factor: you should be very careful about putting critical information into an AI tool. If that AI tool is behind a locked, closed door, where it’s not public, then you’re probably fine. But if you take a public tool and put private data in there, then there can be cybersecurity threats. Cybercriminals are already using these tools.

 

Another important aspect of the AI policy is legal considerations; legal needs to be involved in the policy. You also need to make a risk assessment before implementing AI, and of course you need to be sure that you have the critical technical capabilities.

 

And there’s another important one: you need to teach your people. Everybody talks about AI, and it’s an interesting topic, but we all need to learn more about it. I know people who are already experts, but we need to learn more before using it in the company, because it can become a threat.

 

You may have already heard in the news that criminals are using AI to generate fake calls. You receive a call and it’s your son’s voice telling you that he has been kidnapped; how would you react to that? And there are artists and singers who have been filing claims and lawsuits because someone has imitated their voice, making them sing using AI.

 

It can be used very well or very poorly depending on the programming, which is also a little scary. For example, and it’s already being done, you can use drones or small robots in a war zone to find people who are alive and need medical care, but you can also use them to kill people.

 

It’s a double-edged sword. I think it’s excellent to take advantage of what humans can do with technology, but it should be used very carefully, knowing that you have the right capabilities and, especially, that you can answer the critical questions: Why do I need AI? How am I going to implement it, and what am I going to do with it? That is my advice in that regard.

 

Gardner: Great. Well, it sounds like not only is the digital portfolio very important for making your journey to digital transformation smoother and faster, but it certainly also sounds like it’s very important to mitigate the risk when it comes to adopting AI and similar technology. So that’s very exciting for the future and I hope you get a chance to talk about that more.

 

Before we close out, Sonia, please help our readers and listeners understand how they can become more actively involved with The Open Group in terms of events, certifications, resources and specifically how they can start to avail themselves of the digital portfolio.

 

The Open Group support and opportunities

 

Gonzalez: Okay, thank you for that. First of all, I invite our listeners to go to our website, where you will find important information about all of our standards. On the main page, if you scroll down a little bit, you will find something called the Portfolio of Digital Open Standards.

 

There you can read what it is and click to learn more about it. There’s a video from our co-chair, and you can actually experiment with it live. It’s completely live now, and you can start navigating it, trying it out, running searches, and going to the case study collection.

 

More importantly, please give us your feedback. There’s a small icon in the top right of the screen, which you can click on and send us your feedback. It’s completely private. That feedback is only seen by The Open Group staff. So don’t be shy about putting your email in there because it’s something that we treat very carefully.

Also, on our website, we have information about certification. In certification, we are moving toward a learning progression. For those of you familiar with the TOGAF certification program, it has now moved to a badge program in which you become certified at levels 1 and 2 in the TOGAF Standard.

Then you can become specialized in business architecture, risk and security, agile, or digital. We structured it that way precisely to give the program more agility, and we are going to follow a similar approach for the DPBoK™ certification. There are already plans to restructure the DPBoK and its certification program.

 

More importantly, if you want to contribute beyond giving us feedback, you can become a member of The Open Group, which has a lot of advantages. On our website you will find information on how to become a member, the cost of silver, gold, or platinum membership, and a little bit about the benefits. You can ask for more information or reach out to our business team.

 

You can also send me an email. Please contact me, drop me an email, and I’ll be more than happy to help you through the portfolio. You don’t have to be a member to have an onboarding session with me. I can give you an onboarding session at any moment and explain what we are doing, whether you just want to know the portfolio or you want to become a member. We can also engage our business team to explain more about the process of becoming a member.

 

If you want to become certified and are not sure of the process, you can also reach out to me. If you are a tool vendor and want to certify your tool, you can reach me through The Open Group, and I can direct you to our certification team, who should be able to serve you.

 

Also, please follow our social media. We have a YouTube channel where you will see a lot of videos, Toolkit Tuesday, testimonials, and The Open Group blogs. We recently published a blog about the digital portfolio. We are soon going to publish a survey, which will go out on our social media, probably LinkedIn, so watch for that. We also have podcasts like this one and our blogs. Use our social media; you will find a lot of information there.

 

And in terms of proceedings, especially if you attended the sessions, you can go to the proceedings and see what we discussed last week in Edinburgh about AI. Ecosystem Architecture is another topic that we are taking very seriously at The Open Group. Sustainability is another one, like I mentioned; sometimes there’s a trade-off between technology and the environment, which is becoming more and more relevant now.

 

Reach us through our social media, through email, or through our web page and we will be more than happy to give you more information.

 

Gardner: Well, great. I’m afraid we have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on how a comprehensive portfolio of open standards and associated best practices powerfully supports analytics-rich digital business transformation.

And we’ve learned how The Open Group’s latest digital portfolio of standards and methods instructs innovation internally to match the demands of a rapidly changing, increasingly competitive and analytics-intensive global marketplace.

So, a big thank you to our expert guest. We have been here with Sonia Gonzalez, Digital Portfolio Product Manager at The Open Group. Thank you so much, Sonia.

 

Gonzalez: Thank you very much again, Dana, for having me, and thank you to our listeners for listening to this podcast and providing feedback. Thank you.

 

Gardner: Yes, a big thank you to our audience for joining this BriefingsDirect strategic enterprise architecture discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of insightful discussions sponsored by The Open Group.

 

Thanks again for listening. Please pass this along to your enterprise architecture and business agility communities and do come back next time.

 

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

 

Transcript of a discussion on how a digital portfolio of standards and methods instructs innovation internally to match the demands of a rapidly changing, increasingly competitive, and analytics-intensive global marketplace. Copyright Interarbor Solutions, LLC and The Open Group, 2005-2024. All rights reserved.

 


Wednesday, May 15, 2024

Make AI Adoption a Strategic, ROI-Focused, Fit-for-Purpose and Sustainable Transformation, Says HPE

Transcript of a discussion on how energy use and resources management have emerged as key ingredients of artificial intelligence adoption success -- or failure. 

Listen to the podcast. Subscribe to the podcast. Download the transcript. Sponsor: Hewlett Packard Enterprise.


Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on best practices for deploying artificial intelligence (AI) with a focus on sustainability and strategic business benefits.


As AI rises as an imperative that impacts companies at nearly all levels, proper concern for efficiency around energy use and resources management has emerged as a key ingredient of success -- or failure. It’s becoming increasingly evident that AI deployments will demand vast resources, energy, water, skills, and upgraded or wholly new data center and electrical grid infrastructures.
 

Stay with us now as we examine why factoring the full and long-term benefits — accurately weighed against the actual costs — is essential to assuring the desired business outcomes from AI implementations. Only by calculating the true and total expected costs in the fullest sense can businesses predict the proper fit-for-purpose use for large deployments of AI systems.


Here to share the latest findings and best planning practices for sustainable AI is John Frey, Director and Chief Technologist of Sustainable Transformation at Hewlett Packard Enterprise (HPE). Welcome, John.

John Frey: Thank you. It’s great to be here.

 

Gardner: It’s good to have you back.

 

John, AI capabilities have moved outside the traditional boundaries of technology, data science, and analytics. AI has become a rapidly growing imperative for business leaders -- and it’s impacting the daily life of more and more workers. Generative AI, and more generally large language models, for example, are now widely sought for broad and varied uses.

 

While energy efficiency has long been sought for general IT and high-performance computing (HPC), AI appears to dramatically up the game on the need to factor and manage the required resources.

 

John, how much of a sea change is the impact of AI having on all that’s needed to support these complex systems?

 

AI impact adds up everywhere

 


Frey: Well, AI certainly is an additional load on resources. AI training, for example, is power-intensive. AI inferencing acts similarly, and obviously is used again and again and again if users use the tools as designed for a long period of time.

 

It remains to be seen how much and how quickly, but there’s a lot of research out there that suggests that AI use is going to rapidly grow in terms of overall technology demand.

 

Gardner: And, you know, we need to home in on how powerful and useful AI is. Nearly everyone seems confident that there’s going to be really important new use cases and very powerful benefits. But we also need to focus on what it takes to get those results, and I think some people may have skipped over that part.

 

Frey: Yes, absolutely. A lot of businesses are still trying to figure out the best uses of AI, and the types of solutions within their infrastructure that either add business value, or speed up their processes, or that save some money.

 

Gardner: And this explosive growth isn’t replacing a traditional IT. We still need to have the data centers that we’re running now performing what they’re performing. This is not a rip and replace by any stretch. This is an add-on and perhaps even a different type of infrastructure requirement given the high energy density, total power, and resulting heat requirements.

We're already seeing evidence of jurisdictions looking at the increasing power and water demand ... so they can understand the implications on utilities and infrastructure of these new AI workloads.

Frey: Absolutely. In fact, we constantly have customers coming to us asking both how does this supplement the existing technology workloads that they are already running, and what do they need to change in terms of the infrastructure to run these new workloads in the future?

 

Gardner: John, we’re seeing different countries approach these questions in different ways. We do not have a clean room approach to deploying AI. We have it going into the existing public infrastructure that serves cities, countries, and rural localities.

 

And so, how important is it to consider the impact -- not just from an AI capabilities requirement -- but from a societal and country-by-country specific set of infrastructure requirements?

 

Frey: That’s a great question. We’re already seeing evidence of jurisdictions looking at the increasing power demand, and also water demand.  Some are either slowing down the implementation or even pausing the implementation for a period of time so that they can truly understand the implications on the utilities and infrastructure that these new AI workloads are going to have.

 

Gardner: And, of course, some countries and regulatory agencies are examining how sustainable our overall economy is, given the record-breaking levels of carbon still being released into the atmosphere.


Frey: Absolutely, and that has been a constant focus. Certainly, technology like AI brings that front and center from both a power and water perspective, but also from a social good perspective.

If you think in the broadest use of the term sustainability, that’s what those jurisdictions are looking at. And so, we’re going to see new permitting processes, I predict. We’re also going to see more regulatory action.

Gardner: John, there’s been creep over the years as to what AI entails and includes -- from traditional analytics and data crunching, to machine learning (ML), and now the newer large language models and their ongoing inference demands. We’re also looking at tremendous amounts of data, and the requirement for more data, as another important element of the burgeoning demand for more resources.

 

How important is it for organizations to examine the massive data gathering, processing, and storing requirements -- in addition to the AI modeling aspects -- as they seek resources for sustainability?

 

Data efficiency for effectiveness

 

Frey: It’s vital. In fact, when we think about how HPE looks at sustainable IT broadly, data efficiency is the first place we suggest users think about improvement. From an AI perspective, it’s increasingly about minimizing the training sets of data used to train the models.

For example, if you’re using off-the-shelf data sets, like from crawls of the entire internet around the globe, and if your solutions are only going to operate in English, you can instantly discard the data that have been collected that aren’t in the English language. If, for example, you’re building a large language model, you don’t need the HTML and other programming code that a crawler probably grabbed as well.

Getting the data pull right in the first place, before you do the training, is a key part of sustainable AI, and then you can use only your customer’s specific data as you tune that model as well.
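As a hedged sketch of that pre-training data reduction (the heuristics, file handling, and sample records below are illustrative assumptions, not HPE tooling), this filters a crawl-style corpus down to cleaned, English-looking text before training:

```python
# Sketch of pre-training data reduction: strip leftover HTML markup and keep
# only English-looking documents. looks_english() is a crude stand-in for a
# real language-identification model.
import re
from html.parser import HTMLParser

class _TextOnly(HTMLParser):
    """Collects text content from markup and discards the tags themselves."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def strip_html(raw: str) -> str:
    parser = _TextOnly()
    parser.feed(raw)
    return re.sub(r"\s+", " ", " ".join(parser.chunks)).strip()

def looks_english(text: str) -> bool:
    # Placeholder heuristic: mostly-ASCII text containing common English stopwords.
    if not text:
        return False
    ascii_ratio = sum(ch.isascii() for ch in text) / len(text)
    stopwords = {" the ", " and ", " of ", " to ", " is "}
    return ascii_ratio > 0.95 and any(w in f" {text.lower()} " for w in stopwords)

def reduce_training_set(records):
    """Yield only cleaned, English-looking documents for training."""
    for raw in records:
        text = strip_html(raw)
        if looks_english(text):
            yield text

sample = [
    "<html><body><p>The quick brown fox jumps over the lazy dog.</p></body></html>",
    "<html><body><p>Der schnelle braune Fuchs springt.</p></body></html>",
]
print(list(reduce_training_set(sample)))  # only the English document survives
```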

 

By starting first with data efficiency -- and getting that data population as concise as it can be from the early stages of the process -- then you’re driving efficiency all the way through.

 

Gardner: So being wise with your data choices is an important first step for any AI activity. Do you have any data points on how big of an impact data and associated infrastructure demands for AI can have?

 

Frey: Yes, for the latest large language models that many people are using or familiar with, such as GPT-4, there’s been research looking at the reported infrastructure stack that was needed to train that. They’ve estimated more than 50 gigawatt hours of energy were consumed during the training process. And that training process, by the way, was believed to be somewhere on the order of about 95 days.

Now, to put that amount of energy in perspective, it is about what 2,700 U.S. homes consume in a year, using the U.S. Environmental Protection Agency’s (EPA’s) equivalency model. So, there’s a tremendous amount of energy that goes into the training process. And remember, that’s only the first 95 days of training for that model. Then the model can be used for multiple years, with people running inference problems against it.

In the same way, we can look at the water consumption involved. There are often millions of gallons of water used for cooling during such a training run. Researchers have also predicted that running a single 5- to 20-variable inference problem results in water consumption of about 500 milliliters per inference run, roughly a 16-ounce bottle of water, as part of the needed cooling. If you have millions of users running millions of problems that each require an inference run, that’s a significant amount of water in a short period of time.
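Taking only the roughly 500 milliliters per inference run cited above, a quick back-of-the-envelope calculation (the run counts are hypothetical) shows how quickly the cooling-water total scales:

```python
# Back-of-the-envelope scaling of cooling water for inference, using the
# ~500 mL-per-run figure cited above. The run counts are hypothetical.
LITERS_PER_RUN = 0.5

for runs in (1_000, 1_000_000, 100_000_000):
    liters = runs * LITERS_PER_RUN
    print(f"{runs:>11,} inference runs -> {liters:>12,.0f} liters of cooling water")
# 1,000,000 runs already imply about 500,000 liters.
```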

 

Gardner: These impacts then are on some of our most precious and valuable commodities: carbon, water, and electricity. How is the infrastructure needed to support these massive AI undertakings different from past data centers? Do such things as energy density per server rack or going to water instead of air cooling need to be re-examined in the new era of AI?

 

Get a handle on AI workloads

 

Frey: It’s a great question. Part of the challenge, and why it comes up so much, is as we think about these new AI workloads, the question becomes, “Can our existing infrastructure and existing data centers handle that?”

 

Several issues are pushing us to consider either new facilities or new co-location sites, such as rack density going up. Global surveys look at rack densities, and the most commonly reported rack density today is about four to six kilowatts per rack. Yet we know that with AI training systems, and even inference systems, those rack densities may be 20, 30, or all the way up to 50 kilowatts per rack.

 

Many existing IT facilities aren’t made to handle that power density at all. The other thing we know is many of the existing facilities continue to be air-cooled. They’re taking in outside air, cooling it down and then providing that to the IT equipment to remove the heat. We know that when you start getting above 20 kilowatts per rack or so, air cooling is less effective against some of those high-heat-producing workloads. You really may need to make a shift to direct liquid cooling.
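A minimal sketch of the planning check described here, using the density figures from the discussion (roughly 4 to 6 kW per rack typical today, AI racks in the 20 to 50 kW range, and air cooling becoming less effective above about 20 kW); the rack names are invented:

```python
# Rough facility-planning check: flag racks whose density likely exceeds what
# air cooling handles well (~20 kW per rack, per the discussion above).
AIR_COOLING_LIMIT_KW = 20.0  # threshold mentioned in the conversation

racks_kw = {                 # hypothetical rack densities
    "general-purpose-row-A": 5.0,
    "ai-inference-row-B": 25.0,
    "ai-training-row-C": 48.0,
}

for rack, kw in racks_kw.items():
    method = ("direct liquid cooling recommended"
              if kw > AIR_COOLING_LIMIT_KW
              else "air cooling likely sufficient")
    print(f"{rack}: {kw:.0f} kW per rack -> {method}")
```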

 

And again, what we find is so many data centers that exist today, whether they’re privately owned or in a co-location space, don’t have the capability for the liquid cooling that’s required. So that’s going to be another needed change.

We have higher densities, higher heat generation, and so need more effective cooling. These are driving a need for our infrastructure to change in the future.

And the third thing here is that the workloads run both training and inference, so they often have accelerators in them. We’re seeing that the critical temperature those accelerators -- along with the central processing units (CPUs) -- have to be kept below to run most effectively is actually dropping.

 

At the same time, we have higher densities and higher heat generation, and therefore a need for more effective cooling. And the required critical temperature of the most critical devices is dropping. These three elements put together are really what’s driving so much of the demand for our infrastructure to change in the future.

 

Gardner: And this is not going to just impact your typical global 2000 enterprise’s on-premises data centers. This is going to impact co-location providers, various IT service providers, and the entire ecosystem of IT infrastructure-as-a-service (IaaS) providers.

 

Frey: Yes, absolutely. I will say that many of these providers have already started the transition for their normal, non-AI workloads as server efficiency has dramatically improved, particularly in terms of performance per watt and as rack densities have grown.

 

One of the ways that co-location providers charge their customers is by space used, and another way is by power consumption. So, if you’re trying to do as much work as possible for the same watt of power -- and you’re trying to do it in the smallest footprint possible -- you naturally will raise rack densities.

So, this trend has already started, but AI accelerates the trend dramatically.


Gardner: It occurs to me, John, that for 30 or more years, there was a vast amount of wind in the sails of IT and its evolution in the form of Moore’s law. That benefit of processor designs rapidly improving in efficiency, capability, and scale over time was often taken for granted in the economics of IT in general. And then, for the last 5 to 10 years, we’ve had advances in virtualization and soaring server utilization improvements. Then massive improvements in data storage capacities and management efficiencies were added.

 

But it now seems that, even with all of that efficiency and improved IT capability, we’re going in reverse. We face such high demands and higher costs because of AI workloads that cost relative to value is rising rapidly and demanding more of our most expensive and precious resources.

 

Do we kiss any notion of Moore’s law goodbye? How long can these escalating costs continue for the newer compute environments?

 

Time to move on from Moore’s Law?

 

Frey: Those of us who are technologists, of course, we love to find technology solutions to challenges. And as we’ve pushed on energy efficiency and performance per watt, we have seen and predicted in many cases an end to Moore’s law.

 

But then we find new ways to develop higher-functioning processors with even better performance. We haven’t hit thresholds there that have stopped us yet, and I think that’s going to continue; we will keep growing performance per watt. That’s what all of the processor vendors are pushing for: improving that performance-per-watt equation.


That trajectory is going to continue into the near future, at least. At the same time, though, when we think more broadly, we have to focus on energy efficiency, so we literally consume less power per device.

 

But as you look at human behavior over the past two decades, every time we’ve been able to save energy in one place, it doesn’t mean that overall demand drops. It means that people get another device that they can’t live without.


For example, we all now have cell phones in our pockets, which two decades ago we didn’t even know we needed. And now, we have tablets and laptop computers and the internet and all of the things that we have come to not be able to live without.

It’s gotten to the point that every time we drive these power efficiencies, there are new uses for technology -- many of which, by the way, decarbonize other processes. So, there’s a definite benefit there. But we always have to weigh that.

 

Is a technology solution always the right way to solve a challenge? And what are the societal and environmental impacts of that new technology solution so that we can factor and make the best decisions?

 

Gardner: In addition to this evolution of AI technology toward productivity and per-watt efficiency, there are also market factors involved. If the total costs are too high, then the marketplace won’t sustain the AI solution on a cost-benefit basis. And so, as a business, if reducing cost is the only way to make solutions viable, that’s what the market will demand, and what your competitors will force on you, too.

The second market-forces pillar is compliance and regulation. In fact, in May of 2024, the European Union Energy Efficiency Directive kicks in. So, there are powerful forces around the total costs of AI supply and consumption that we don’t have much choice over; they are compelling facts of life.

Frey: Absolutely. In fact, one of the things we’re seeing in the market is a tremendous amount of money being spent to develop some AI technologies. That comes with really hard questions about what’s a proper return on investment (ROI) for that initial money spent to build and train the models. And then, can we further prove out the ROI over the long-term?

 

Our customers are now wisely asking those very questions. We’re also, from an HPE perspective, making sure that customers think about the ethical and societal consequences of these AI solutions. We don’t want customers bringing AI solutions to market and having an unintended consequence from a bias that’s discovered, or some other aspect around privacy and cybersecurity that they had not considered when they built the solution.

 

And, to your point, there is also increasing interest in how to contend with regulatory constraints for AI solutions as well.

 

Gardner: So, one way or another, you’re going to be seeking a fit-for-purpose approach to AI implementations -- whether you want to or not. And so, you might as well start on that earlier than later.

 

Let’s move now toward ways that we can accomplish what we’ve been describing in terms of keeping the AI services costs down, the energy demand down, and making sure that the business benefits outweigh the total and real costs.

 

What are some ways that HPE -- through your research, product development, and customer experiences -- is driving toward general business sustainability and transformation? How can HPE be leveraged to improve and reduce risk specifically around the AI transformation journey?

 

Five levers for moving sustainably

 

Frey: One of the things we’ve learned in 22 years or so of working with customers on sustainable technology is that there are five levers. And we intentionally call them “levers” because we believe that all of them apply to every customer, whether they have their IT workloads in the public cloud, a hybrid or private cloud, a bare-metal environment, or whether they are on-premises, in co-location, or even out on the edge.

 

We know that customers can drive efficiencies if they consider these levers. The first of the five is data efficiency, which we've talked about a little bit already. From the AI context, it's first about making sure that the data sets you're using are optimized before running the training.

 

When we process a bit of data in a training environment, for example, do we avoid processing it again if we can? And how do we make sure that any data that we’re going to train for, or derive from an inference, actually has a use? Does that data provide a business value?

From the AI context, data efficiency is about optimizing the data sets you're using to make sure the data provides a business value. Making the right decisions on storage and data flows down through the other aspects of sustainability.

Next, if we're going to collect data, how do we make sure that we make an intentional decision on the front end about how long we're going to store that data, and how we're going to store it? What type of storage? Is it something we will need instantaneously? We can choose from high-availability storage all the way down to tape storage if it's more of an archival or regulatory requirement to keep that data for a long period of time. So, data efficiency is where we suggest you start, because making the right decisions there flows down through all of the other aspects.
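To make the data-efficiency lever concrete, here is a minimal Python sketch of the two decisions Frey describes -- avoiding reprocessing duplicate data and choosing a storage tier intentionally up front. The record format, tier names, and thresholds are illustrative assumptions, not HPE tooling.

```python
import hashlib

def dedupe_records(records):
    """Drop exact duplicates so the same bytes are never trained on twice."""
    seen, unique = set(), []
    for rec in records:
        digest = hashlib.sha256(rec.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(rec)
    return unique

def storage_tier(days_since_last_access, retention_only=False):
    """Pick a storage class intentionally instead of defaulting everything to hot storage."""
    if retention_only:
        return "archive/tape"   # regulatory or archival retention
    if days_since_last_access <= 30:
        return "hot"            # needed near-instantaneously
    if days_since_last_access <= 365:
        return "warm"
    return "cold"

corpus = ["sensor reading A", "sensor reading A", "sensor reading B"]
print(len(dedupe_records(corpus)))  # 2 -- the duplicate never reaches training
print(storage_tier(400))            # cold
```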

 

The second lever is software efficiency, and this, from a broader technology perspective, is focused on writing more efficient software applications. How do we reduce the carbon intensity of software applications? And how do we use software to drive efficiency?

 

From an AI perspective, this gets into model development. How do we develop more efficient models? How do we design these models, or leverage existing models, to be as efficient as possible and to use the least amount of compute capability, storage capability, and networking capability to operate most efficiently?

 

Software efficiency even includes things such as the efficiency of the coding itself. Can it be written in a compiled language rather than an interpreted one, so that it takes less power and CPU capability to run that software? HPE brings many tools to the market in that environment.
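As a rough illustration of the coding-efficiency point, the sketch below compares an interpreted Python loop with a vectorized call that pushes the same arithmetic into compiled code. The timings will vary by machine and are only meant to show the general effect, not to benchmark any particular product.

```python
import time
import numpy as np

values = np.random.rand(1_000_000)

start = time.perf_counter()
total_loop = 0.0
for v in values:                          # interpreted, element-by-element work
    total_loop += v * v
loop_secs = time.perf_counter() - start

start = time.perf_counter()
total_vec = float(np.dot(values, values))  # compiled, vectorized kernel doing the same sum of squares
vec_secs = time.perf_counter() - start

print(f"loop: {loop_secs:.3f}s  vectorized: {vec_secs:.4f}s  "
      f"results agree: {np.isclose(total_loop, total_vec)}")
```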

 

Next, how do we use software to drive efficiency? Some of the areas where we're seeing lots of interest with AI are predictive maintenance and digital twins, where we can use software tools to predict maintenance cycles or failures, or even infer operating and buying behaviors. We see these used in the design of data centers. How do we shift workloads for the most efficient and lowest-carbon operation? All of those aspects fall under software efficiency.
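A predictive-maintenance rule can start as simply as watching recent sensor trends. The thresholds, window size, and readings below are hypothetical placeholders for what a real model or vendor tool would learn from operational history.

```python
def maintenance_due(temps_c, vibration_mm_s, temp_limit=80.0, vib_limit=7.1, window=10):
    """Flag a device for service when its recent readings trend past safe thresholds."""
    recent_temp = sum(temps_c[-window:]) / len(temps_c[-window:])
    recent_vib = sum(vibration_mm_s[-window:]) / len(vibration_mm_s[-window:])
    return recent_temp > temp_limit or recent_vib > vib_limit

temps = [75, 78, 80, 82, 84, 86, 88, 89, 90, 92]
vibration = [2.1, 2.3, 2.2, 2.4, 2.6, 2.5, 2.8, 3.0, 3.1, 3.2]
print(maintenance_due(temps, vibration))  # True -- temperature is trending past the limit
```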

 

And then we move to the hardware stack, and that means equipment efficiency. When you have a piece of technology equipment, can you have it do the most amount of work? We know from global industry surveys that technology equipment is often very underutilized, in part because of the redundancy and resiliency built into the solutions.

But as we move more into AI, we tend to look at hardware and software solutions that deliver high levels of availability across the equipment infrastructure. By its very nature, AI is designed to run this equipment at higher levels of utilization, and there is huge demand, particularly in training, for single large workloads that run across a variety of devices. Equipment efficiency is all about attaining the highest levels of utilization.
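As a back-of-the-envelope illustration of the utilization point, the sketch below estimates how many servers the same work would need at a healthier utilization target. The utilization figures and the 65 percent target are invented for illustration, not survey data.

```python
def consolidation_estimate(utilizations, target=0.65):
    """Estimate how many servers the same work would need at a target utilization."""
    current_work = sum(utilizations)              # each value is a 0.0-1.0 average utilization
    needed = max(1, round(current_work / target))
    return len(utilizations), needed

servers = [0.08, 0.12, 0.10, 0.55, 0.07, 0.09, 0.11, 0.06]
have, need = consolidation_estimate(servers)
print(f"{have} servers averaging {sum(servers) / have:.0%} could shrink to ~{need} at 65% utilization")
```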

Then, we move to energy efficiency. And this is about how to do the most amount of work per input watt of power so that the devices are as high performing as possible. We tend to call that being energy effective. Can you do the most amount of work with the same input of energy?

 

And, from an AI perspective, it's so critical because these systems consume so much power that we're often able to easily demonstrate the benefits per input watt of power, or per volume of water used for cooling.
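Being "energy effective" can be expressed as work delivered per unit of energy. Here is a minimal sketch that compares inferences per watt-hour for two hypothetical runs of the same workload; the workload size, power draw, and durations are placeholders rather than measured results.

```python
def inferences_per_watt_hour(inferences, avg_power_watts, seconds):
    """Work delivered per unit of energy consumed."""
    watt_hours = avg_power_watts * seconds / 3600.0
    return inferences / watt_hours

# Same workload, same average power draw, finished faster after tuning
baseline = inferences_per_watt_hour(inferences=1_200_000, avg_power_watts=700, seconds=3600)
tuned = inferences_per_watt_hour(inferences=1_200_000, avg_power_watts=700, seconds=2400)

print(f"baseline: {baseline:,.0f} inferences/Wh  tuned: {tuned:,.0f} inferences/Wh")
```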

 

And finally, resource efficiency, which is about running technology solutions so that they need the fewest supporting resources. Those include auxiliary cooling, power conversions, or even the human resources that it takes to run these solutions.


So, from an AI context, again, we've talked about rising power densities and the shift directly from air to water. Cooling is going to be so critical. And it turns out that direct liquid cooling consumes a much lower percentage of total power than much of our air-cooled infrastructure. You can drop your power consumption dramatically by moving to direct liquid cooling.
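To show why the cooling overhead matters so much, here is a simplified, PUE-style comparison. The overhead fractions are illustrative assumptions, not measurements of any particular HPE or customer facility.

```python
def facility_power_kw(it_load_kw, cooling_fraction, other_fraction=0.07):
    """Approximate total facility power: IT load plus cooling and other overheads."""
    return it_load_kw * (1 + cooling_fraction + other_fraction)

it_kw = 1000
air = facility_power_kw(it_kw, cooling_fraction=0.35)     # assumed air-cooled overhead
liquid = facility_power_kw(it_kw, cooling_fraction=0.10)  # assumed direct-liquid-cooling overhead

print(f"air-cooled: {air:.0f} kW (PUE {air / it_kw:.2f})  "
      f"liquid-cooled: {liquid:.0f} kW (PUE {liquid / it_kw:.2f})  "
      f"difference: {air - liquid:.0f} kW")
```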


It's the same from a staffing perspective. As you begin having analytics that allow you to monitor all of these variables across your technology solutions -- which is so common in an AI solution -- you need fewer staff to run those solutions. You also gain higher levels of employee satisfaction, because staff can see how the infrastructure is doing and a lot of the mundane tasks, such as constant tuning, are being made more efficient.

Gardner: Well, this drive for sustainability is clearly a non-trivial undertaking. Obviously, planning out efficiencies across entire data centers is an effort that continues over many years, even decades.

 

It occurs to me, John, that smaller companies that may want to do AI deployments themselves -- to customize their models and their data sets for particular uses, and so develop proprietary, advantaged operations -- are going to be challenged when it comes to achieving AI efficiently.

 

At the same time, the large hyperscalers, which are very good at building out efficient data center complexes around the globe, may not have the capability to build AI models at the granular level needed for the vertical-industry customization required of smaller companies.

 

So, it seems to me that an ecosystem approach is going to shake out, because these efficiencies will need to manifest across many players. It's not all going to happen at the company-by-company level, and it can't necessarily happen at the cloud provider-by-cloud provider level either.

 

Do you have any sense of how an AI services ecosystem -- one that balances the needed customization with the needed efficiency at scale -- will emerge to take advantage of these essential efficiency levers you described?

 

An AI ecosystem evolves

 

Frey: Yes, exactly what you describe is what we see happening. We have some customers that want to make the investments in high-performance computing and in the development and training of their own AI solutions. But very few customers want to make that type of investment.

 

Other customers want to access an AI capability and either have some of that expertise themselves or want to leverage the expertise of a vendor such as HPE from a data science, model development, and data efficiency perspective. We certainly see a lot more customers that are interested in that.

 

And then there’s a level above that. Customers that want to take a pre-trained model and just tune it using their own specific data sets. And we think that segment of the population is even broader because so many highly valuable uses of AI still require training on task-specific or organization-specific data.

 

And finally, we see a large range of customers that want to take advantage of pre-trained, pre-tuned AI solutions that are applicable across an entire industry or segment of some kind. One of the things that HPE has found over the years, as we've built all portions of that stack and partnered with other companies, is that having that whole portfolio -- and having the expertise across it -- allows us to look both downstream and upstream from what the customer is looking at. It allows us to help them make the most efficient decisions, because we look across the hardware, the software, and that entire ecosystem of partners as well.

 

It does, in our mind, allow us to leverage decades worth of experience to help customers attain the most efficient and most effective solutions when they’re implementing AI.

 

Gardner: John, are there any leading use cases or examples that we can look to that illustrate how such efficiency makes an impactful difference?

 

Examples of increased AI productivity

 

Frey: Yes. I'll give you just a couple of examples. Obviously, some of the early adopters of these types of AI systems have been in healthcare. A great example is reading x-rays. It turns out that an ML system can do a pretty good job of looking at an x-ray, scanning it, and making a decision: "Is that a bone fracture or not?" And if it's unsure, it passes the image to a radiologist who can take a deeper look. You can tune the system very, very well.

 

There's a large population of x-ray imagery that provides very clear examples of what is a fracture and what is not. And there have been lots of studies looking at how these systems perform against a single radiologist reading the same x-rays.

We want to train tools that can answer basic customer questions or allow the customer to interact by voice. In some cases we can give the right answer in both spoken and typed form.

In particular, when a radiologist spends their day going from x-ray to x-ray to x-ray, there can be some fatigue associated with that. Their diagnostic capabilities get better when the system does a first-level screen and then passes the more difficult cases to the radiologist for deeper analysis. If there is something that's not really clear one way or the other, it lets the radiologist spend more time on it. So, that's a great one.
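The first-level screening Frey describes amounts to confidence-gated routing. Here is a minimal sketch of that idea; the probability thresholds and image IDs are hypothetical, and a production system would be clinically validated rather than hand-tuned like this.

```python
def triage(predictions, low=0.15, high=0.85):
    """Auto-label clear-cut cases; send ambiguous ones to a radiologist."""
    auto, review = [], []
    for image_id, fracture_prob in predictions:
        if fracture_prob >= high:
            auto.append((image_id, "fracture"))
        elif fracture_prob <= low:
            auto.append((image_id, "no fracture"))
        else:
            review.append(image_id)   # uncertain -- radiologist takes a deeper look
    return auto, review

scores = [("xr-001", 0.97), ("xr-002", 0.03), ("xr-003", 0.55)]
auto, review = triage(scores)
print(auto)    # [('xr-001', 'fracture'), ('xr-002', 'no fracture')]
print(review)  # ['xr-003']
```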

 

We're seeing a lot of interest in manufacturing processes as well. How do we use video and video analytics to examine parts or final assemblies coming off of an assembly line and ask, "Does this appear the way it's supposed to from a quality perspective? Are there any extra components, or are any components missing?" for example.

 

It turns out those use cases do a really good job from a power-performance perspective and from an ROI perspective. If you dive deeper into natural language processing (NLP), we want to train tools that can answer basic customer questions or allow the customer to interact by voice with a service tool that can provide low-level diagnostics or route the customer appropriately, for example. In some cases, it can even give them the right answer in both spoken and typed form.
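A real service tool would use a trained intent classifier, but a keyword sketch like the one below -- with made-up intents and queue names -- shows the routing idea in its simplest form.

```python
INTENTS = {
    "password": "self-service diagnostics",
    "billing": "billing queue",
    "outage": "priority support",
}

def route(utterance):
    """Very rough keyword routing; a production NLP tool would classify intent statistically."""
    text = utterance.lower()
    for keyword, destination in INTENTS.items():
        if keyword in text:
            return destination
    return "human agent"

print(route("I think there's an outage in my area"))  # priority support
print(route("Can someone explain this charge?"))      # human agent
```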

 

In fact, you're now seeing some of those come out in very popular software applications that a variety of people around the world use. We're seeing AI systems that predict the next couple of words in a sentence, for example, or that allow for a higher level of productivity. I think those are still proving their case.

 

In some cases, users see them as a barrier rather than an assistant, but I think the time will come, as they get more and more accurate, when they're going to be really useful tools.

 

Gardner: Well, it certainly seems, given the costs and the impacts on carbon load, on infrastructure, and on the demand for skills to support it, that it's incumbent on companies big and small to be quite choosy about which AI use cases and problems they seek to solve first. This just can't be a solution in search of a problem. You need a very good problem that will deliver very good business results.

 

It seems to me that businesses should carefully evaluate where they devote resources and use these intelligence capabilities to the fullest effect and pick those highly productive use cases and tasks earlier rather than later.

 

Define a sustainable IT strategy

 

Frey: Yes, absolutely. Let's not have a solution in search of a problem. Let's find the best business challenges and opportunities to solve, and then look at the right strategic approaches to solving them. What's the ROI for each of those solutions? What are some of the unintended consequences, like a privacy issue or a bias issue, that you want to prevent? And then, how do we learn from others that have implemented those tools, and partner with vendors that have deep historical competencies in those topics and have helped many customers bring those solutions to market?

 

So, it's really about finding the best solution for the business challenge and being able to quantify that benefit. One of the things that we did really early on, as we were developing our sustainable IT approach, was to recognize that so many customers didn't know how to get started.

We offered customers a free workbook called Six Steps for Developing a Sustainable IT Strategy. Well, one of the things that it says -- and this comes up in the majority of AI conversations as well -- is that customers couldn't measure the impact of what they had today because they didn't have a baseline. They implemented a technology solution and then said, "That must be much better, because we're using technology." But without measuring the baseline, they weren't able to quantify the financial, environmental, and carbon implications of the new solution.
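Establishing a baseline can start with a very small calculation. The sketch below turns measured energy into cost and carbon so a before-and-after comparison is possible; the electricity price and emissions factor are placeholder assumptions that would be replaced with local values.

```python
def impact(kwh_per_month, price_per_kwh=0.12, kg_co2e_per_kwh=0.4):
    """Translate measured energy into cost and carbon for before/after comparison."""
    return {
        "kwh": kwh_per_month,
        "cost_usd": kwh_per_month * price_per_kwh,
        "kg_co2e": kwh_per_month * kg_co2e_per_kwh,
    }

baseline = impact(kwh_per_month=42_000)  # measured before the new solution
proposed = impact(kwh_per_month=31_000)  # measured or modeled after

delta = {k: baseline[k] - proposed[k] for k in baseline}
print(f"Monthly difference: {delta['kwh']:,.0f} kWh, ${delta['cost_usd']:,.0f}, {delta['kg_co2e']:,.0f} kg CO2e")
```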

 

We help customers along this journey by helping them think about this strategically, and by bringing together all the appropriate organizations within their company that need to be part of the decision about these solutions. For example, if you're worried about cybersecurity implications, make sure the cybersecurity team is part of the project team. If you're worried about bias implications, make sure that your legal teams are involved, along with anyone else looking at employee or customer privacy. And if you're considering solutions that are going to decarbonize or save power, make sure you have your global workplace teams involved to help quantify that, and your sustainability teams if you're going to talk about carbon mitigation as part of all of this.

 

It's about having the right organizations involved, looking at all the issues that feed into the decision, and examining whether the solution really is sustainable. Does it have both a financial and an environmental ROI that makes sense?

 

Gardner: It sure seems that the emphasis on AI sustainability should come from the very top of any organization, and in loud and clear terms. Because as AI becomes a core competency -- whether you source it or do it in-house -- it is going to be an essential business differentiator. If you're going to do AI successfully, you're going to need to do it sustainably. And so, AI sustainability seems to be a pillar of getting to an AI outcome that works long-term for the organization.

 

As we move to the end of our very interesting discussion, John, what are some resources that people can go to? How can they start to consider what they've been doing around sustainability and extend that into AI, or examine what they're doing with AI and make sure that it conforms to the concepts around sustainability and the all-important objectives of efficiency?

 

Frey: The first one, which we've already talked about, is to make sure you have a sustainable IT strategy as part of your overarching technology strategy. And now that it includes AI, that strategy really gets accelerated by AI workloads.

 

Part of that strategy is getting stakeholders together so that folks can help look for the blind spots and help quantify the implications and the opportunities. And then, look across the entire environment -- from public cloud to edge, hybrid cloud, and private cloud in the middle -- and look to those five levers of efficiency that we talked about. In particular, emphasize data efficiency and software efficiency from an AI perspective.


And then, look at it all across the lifecycle, from the design of those products to the return and the end-of-life processes. Because when we think about IT lifecycles, we need to consider all of the aspects in the middle.

Six Steps for Developing a Sustainable IT Strategy

That drives such things as how you procure the most efficient hardware in the first place and provide the most efficient solutions. How do you think about tech refresh cycles, and why are those cycles different for compute, storage, networking, and AI? How do all those pieces interconnect to impact tech refresh cycles?

And from an HPE perspective, one of the things that we've done is publish a whole series of resources for customers. We mentioned the Six Steps for Developing a Sustainable IT Strategy workbook. But we also have specific white papers on software efficiency, data efficiency, energy efficiency, equipment efficiency, and resource efficiency.

 

We make those freely available on HPE's website. So, use the resources that exist, partner with vendors that have core capability and core expertise across all of these areas of efficiency, and spend a fair amount of time in the development process ensuring that the ROI, both financially and from a sustainability perspective, is as positive as possible when implementing these solutions.

Gardner: I’m afraid we’ll have to leave it there. We’ve been exploring how AI deployments will demand vast resources -- energy, water, skills, and upgraded or wholly new data center and electrical grid infrastructures.

 

And we've learned that not only can businesses determine the proper fit-for-purpose deployments of AI by calculating the true and total expected costs in the fullest sense, but that doing so in the most efficient manner might be the only way to go about successful AI deployments.

 

And so, please join me in thanking our guest, John Frey, Director and Chief Technologist for Sustainable Transformation at HPE. Thank you so much, John.

 

Frey: Thank you for letting me come on and share this expertise.

 

Gardner: You bet. And thanks as well to our audience for joining this sponsored BriefingsDirect discussion on the best path to sustainable AI.

 

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening. Please pass this along to your IT community and do come back next time.

 

Listen to the podcast. Subscribe to the podcast. Download the transcript. Sponsor: Hewlett Packard Enterprise.


Transcript of a discussion on how energy use and resources management have emerged as key ingredients of artificial intelligence adoption success -- or failure. Copyright Interarbor Solutions, LLC, 2005-2024. All rights reserved.

