Thursday, September 08, 2016

How Always-Available Data Forms the Digital Lifeblood for a University Medical Center

Transcript of a discussion on how adopting storage innovation protects a large hospital from data disruption and adds operational benefits.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on technology innovation -- and how it's making an impact on people's lives.

Our next digital business transformation case study examines how the Nebraska Medical Center in Omaha consolidated and unified its data-protection capacities. We'll learn how adopting storage innovation protects the state's largest hospital from data disruption and adds operational simplicity to complex data lifecycle management.

To describe how more than 150 terabytes of data remain safe and sound, we're joined by Jeff Bergholz, Manager of Technical Systems at The Nebraska Medical Center in Omaha. Welcome, Jeff.

Jeff Bergholz: Glad to be here.
Gardner: Tell us about the major drivers that led you to seek a new backup strategy as a way to keep your data sound and available no matter what.

Bergholz: At Nebraska Medicine, we consist of three hospitals with multiple data centers. We try to keep an active-active data center going. Epic is our electronic medical record (EMR) system, and with that, we have a challenge of making sure that we protect patient data as well as keeping it highly available and redundant.

We were on HPE storage for that, and with it, we were really only able to do a clone-type process between data centers and keep retention of that data, but it was a very traditional approach.

A couple of years ago, we did a beta program with HPE on the P6200 platform to create a tertiary replica of our patient data. With that, this past year, we augmented our data protection suite. We went from license-based to capacity-based licensing, and we introduced some new D2D dedupe devices, StoreOnce appliances, into the environment. What that affords us is the ability to easily replicate that data over to another StoreOnce appliance with minimal disruption.

Part of our goal is to keep backups available for potential recovery scenarios. With all the cyber threats in today's world, we've recently increased our retention cycle from 7 weeks to 52 weeks. We saw and heard from the analysts that the average vulnerability sits in your system for 205 to 210 days. So, we had to come up with a plan for what it would take to provide recovery in case something were to happen.
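To put that retention change in context, here is a quick back-of-envelope check -- purely illustrative, using only the 7-week and 52-week retention cycles and the 205-210 day dwell-time estimate quoted above -- of whether each retention window reaches back past an intrusion of that age:

```python
# Back-of-envelope check: does the retention window cover the assumed
# attacker dwell time? The 7- and 52-week windows and the 205-210 day
# dwell time come from the discussion; everything else is illustrative.

def covers_dwell_time(retention_weeks: int, dwell_days: int) -> bool:
    """Return True if the oldest retained backup predates the intrusion."""
    return retention_weeks * 7 >= dwell_days

for weeks in (7, 52):
    for dwell in (205, 210):
        ok = covers_dwell_time(weeks, dwell)
        print(f"{weeks:>2}-week retention vs {dwell}-day dwell time: "
              f"{'covered' if ok else 'NOT covered'}")

# A 7-week cycle (49 days) falls well short of a 205-210 day dwell time;
# 52 weeks (364 days) leaves roughly five months of margin.
```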

We came up with a long-term solution and we're enacting it now. Combining HPE 3PAR storage with StoreOnce, we're able to move data throughout our system much more easily. What's important there is that our backup windows have been greatly improved. What used to take us 24 hours now takes us 12 hours, and we're able to guarantee that we have multiple copies of the EMR in multiple locations.

We demonstrate it, because we're tested at least quarterly by Epic as to whether we can restore back to where we were before. Not only are we backing it up, we're also testing and ensuring that we're able to reproduce that data.

More intelligent approach

Gardner: So it sounds like a much more intelligent approach to backup and recovery with the dedupe, a lower cost in storage, and the ability to do more with that data now that it’s parsed in such a way that it’s available for the right reason at the right time.

Bergholz: Resource-wise, we always have to do more with less. With our main EMR, we're looking at potentially 150 terabytes of data that dedupe shrinks down greatly, and our overall storage footprint for all other systems was approaching 4 petabytes.

We've seen some 30:1 compression ratios within that, which has really allowed my staff and other engineers to be more efficient and frees up some of their time to do other things, as opposed to having to manage the normal backup and retention cycle.

We're always challenged to do more and more. We grow 20 to 30 percent annually, but we're not going to get 20 to 30 percent more resources every year. So, we have to work smarter with less and leverage the technologies that we have.
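Those figures -- roughly 150 terabytes of EMR data, an overall footprint approaching 4 petabytes, 30:1 dedupe ratios, and 20 to 30 percent annual growth -- can be turned into a rough capacity-planning sketch. The calculation below is a simplified illustration of the arithmetic; the planning horizon and the use of a single blended dedupe ratio across everything are assumptions, not part of Nebraska Medicine's actual methodology:

```python
# Hypothetical capacity-planning sketch using the figures mentioned above:
# ~150 TB of EMR data, ~4 PB overall footprint, ~30:1 dedupe, and 20-30
# percent annual growth. Applying one blended ratio and a fixed horizon
# are simplifications for illustration only.

def physical_tb(logical_tb: float, dedupe_ratio: float) -> float:
    """Physical capacity needed after deduplication."""
    return logical_tb / dedupe_ratio

def project_growth(logical_tb: float, annual_growth: float, years: int) -> float:
    """Logical capacity after compounding annual growth."""
    return logical_tb * (1 + annual_growth) ** years

emr_tb = 150.0
total_tb = 4000.0          # ~4 PB expressed in TB
print(f"EMR backup on disk at 30:1: ~{physical_tb(emr_tb, 30):.1f} TB")
print(f"Total footprint at 30:1:    ~{physical_tb(total_tb, 30):.0f} TB")

for years in (1, 3):
    low = project_growth(total_tb, 0.20, years)
    high = project_growth(total_tb, 0.30, years)
    print(f"Logical data after {years} year(s): {low:.0f}-{high:.0f} TB")
```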

Gardner: Many organizations these days are using hybrid media across their storage requirements. The old adage was that for backup and recovery, use the cheaper, slower media. Do you have a different approach to that and have you gone in a different direction?

Bergholz: We do, and backup is as important to us as the data itself. Time and time again, we've had to demonstrate the ability to restore in different scenarios, within the accepted time to restore and bring services back. Clinicians and caregivers aren't going to wait for that. When they're taking care of patients, they want that data as quickly as possible. While it may not be the EMR, it may be some ancillary documents that they need in order to provide better care.

We're able, upon request, to enact and restore in 5 to 10 minutes. In many cases, once we receive a ticket or a notification, we have full data restoration within 15 minutes.

Gardner: Is that to say that you're all flash, all SSD, or some combination? How did you accomplish that very impressive recovery rate?

Bergholz: We're pretty much all dedupe-type devices. It’s not necessarily SSD, but it's good spinning disk, and we have the technology in place to replicate that data and have it highly available on spinning disk, versus having to go to tape to do the restoration. We deal with bunches of restorations on a daily basis. It’s something we're accustomed to and our customers require quick restoration.

In a consolidated, strategic approach, we put the technology behind it. We didn't buy the cheapest option, but we did what was best, and having an active-active data center and backing up across both data centers enables us to do it. So, we did spend money on the backup portion, because it's important to our organization.

Gardner: You mentioned capacity-based pricing. For those of our listeners and readers who might not be familiar with that, what is that and why was that a benefit to you?

Bit of a struggle

Bergholz: It was a little bit of a struggle for us. We were always traditionally client-based or application-based in the backup. If we needed to back up Microsoft Exchange mailboxes, we had to have an Exchange plug-in. If we had Oracle, we had to have an Oracle plug-in, a SQL plug-in.

While that was great and enabled us to do a lot, we were always having to get another plug-in to do it. Given the dedupe compression ratios we were getting, going to a capacity-based license allowed us to strategically and tactically plan for any increase in our environment. So now, we can buy in chunklets and keep ahead of the game, making sure that we're effective there.
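To illustrate what buying "in chunklets" might look like in practice, here is a hypothetical sketch of planning capacity-based license purchases against that growth. The chunklet size, starting licensed capacity, and blended growth rate are invented for illustration and are not figures from the discussion:

```python
# Illustrative only: planning backup-license capacity bought in fixed
# "chunklets" against 20-30 percent annual data growth. Chunklet size,
# starting licensed capacity, and the 25 percent blended growth rate
# are assumptions, not figures from the discussion.

import math

def chunklets_needed(projected_tb: float, owned_tb: float, chunklet_tb: float) -> int:
    """How many fixed-size license increments cover the projected capacity."""
    shortfall = max(0.0, projected_tb - owned_tb)
    return math.ceil(shortfall / chunklet_tb)

owned = 200.0        # TB of front-end capacity currently licensed (assumed)
chunklet = 25.0      # TB per purchased increment (assumed)
data = 150.0         # TB protected today (from the discussion)

for year in range(1, 4):
    data *= 1.25     # assume mid-range 25 percent annual growth
    buy = chunklets_needed(data, owned, chunklet)
    owned += buy * chunklet
    print(f"Year {year}: ~{data:.0f} TB protected, buy {buy} chunklet(s), "
          f"licensed for {owned:.0f} TB")
```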

We're in the throes of enacting an archive-type solution through a product called QStar, which I believe HPE is OEM-ing, and we're looking at that as a long-term archive process. That's going to a linear tape file system (LTFS), utilizing the management tools that product brings us to support the long-term archiving of patient information.

Our biggest challenge is that we never delete anything. It’s always hard with any application. Because of the age of the patient, many cases are required to be kept for 21 years; some, 7 years; some, 9 years. And we're a teaching hospital and research is done on some of that data. So we delete almost nothing.
In the case of our radiology system, we're approaching 250 terabytes right now. Trying to back up and restore that amount of data with traditional tools is very ineffective, but we need to keep it forever.

By going to a tertiary-type copy, which this technology brings us, we have our source array, our replicated array, plus now a tertiary copy to move that data to, which is our LTFS solution.

Gardner: And with your backup and recovery infrastructure in place, and the sense of confidence that comes with it, has that translated back into the larger data lifecycle management equation? That is to say, does the assurance of quality backups allow people to do things they may not have done, or not worried about, before -- and therefore produce a better business transformation outcome for your patients and your clinicians?

Bergholz: From a leadership perspective, there's nothing real sexy about backup. It doesn't get oohs and ahs out of people, but when you need data to be restored, you get the oohs and ahs and the thank-yous and the praise. Being able to demonstrate solutions time and time again builds confidence with leadership throughout the organization, and it makes those people sleep safer at night.

Recently, we achieved HIMSS Stage 7. One of the remarks from that group was that, a) we hadn't had any production outages, and b) when they asked a physician on the floor what he does when things go down or when he loses something, he said the awesome part here is that we haven't gone down and, when we lose something, we're able to restore it in a very timely manner. That was noted on our award.

Gardner: Of course, many healthcare organizations have been using thin clients and keeping everything at the server level for a lot of reasons -- an edge-to-core integration benefit. Do you feel more enabled to go into mobile and virtualization knowing that everything kept on the data-center side is secure and backed up, without worrying about data residing on the client? Is that factored into any of your architectural decisions about client computing?

Desktop virtualization

Bergholz: We have been in the throes of desktop virtualization. We do a lot of Citrix XenApp presentation of applications, which keeps the data in the data center, and a lot of our desktop devices connect to that environment.

The next natural progression for us is desktop virtualization (VDI), ensuring that we're keeping that data safe in the data center, ensuring that we're backing it up, and protecting the patient information on it. It's an interesting thought and philosophy. We tried to sell it as an ROI-type initiative to start with, but by the time you put all the pieces of the puzzle together, the ROI really doesn't pan out. At least that's what we've seen in two different iterations.

Although it can be somewhat cheaper, it's not significant enough to make a huge launch in that route. The main play there, and the main support we have organizationally, is from a data-security perspective. Also, there's the ease of managing the virtual desktop environment. It frees up our desktop engineers from being feet on the ground, so to speak, to being application engineers, able to layer in the applications to be provisioned through the virtual desktop environment.

And one important thing in the healthcare industry is that when you have a workstation that has an issue and requires replacement or re-imaging, that’s an invasive step. If it’s in a patient room or in a clinical-care area, you actually have to go in, disrupt that flow, put a different system in, re-image, make sure you get everything you need. It can be anywhere from an hour to a three-hour process.

We do have a smattering of thin devices out there. When there are issues, it's merely a matter of reapplying a gold image. The great part about thin devices versus thick devices is that in a lot of cases, they're operating in a sterile environment. With traditional desktops, the fans are pulling air through -- an infection-control concern -- there's noise, and perhaps they're blowing dust around a room if it's not entirely clean. Solid-state devices are a perfect play there. It's really an unplug, re-plug, and drop-off sort of technology.

We're excited about that for what it will bring to the overall experience. Our guiding principle is that you have the same experience no matter where you're working. Getting there from Step A to Step Z is a journey. So, you do that a little bit a time and you learn as you go along, but we're going to get there and we'll see the benefit of that.

Gardner: And ensuring the recovery and veracity of that data is a huge part of being able to make those other improvements.

Bergholz: Absolutely. What we've seen from time to time is that users, while they're fairly knowledgeable, save their documents wherever they like. Policy is to keep them within the data center, but that may or may not always be adhered to. By going to desktop virtualization, they won't have any other choice.

A thin client takes that a step further and ensures that nothing gets saved back to a device, where that device could potentially disappear and cause a situation.

We do encrypt all of our stuff. Any device that's out there is covered by encryption, but still there's information on there. It’s well-protected, but this just takes away that potential.

Gardner: I'm afraid we'll have to leave it there. We've been learning how the Nebraska Medical Center in Omaha consolidated and unified its data protection capacities, and we've heard how adopting storage innovation protects the state's largest hospital from any data disruption and adds operational benefits along the transformational architecture for edge to core data protection.
So please join me in thanking our guest, Jeff Bergholz, Manager of Technical Systems at The Nebraska Medical Center in Omaha. Thank you, Jeff.

Bergholz: Thank you.

Gardner: And thanks as well to our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored technology innovation discussions. Thanks again for listening, and do please come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how adopting storage innovation protects a large hospital from data disruption and adds operational benefits. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Tuesday, August 30, 2016

Loyalty Management Innovator Aimia's Transformation Journey to Modernized and Standardized IT

Transcript of a discussion on how improving end user experiences and using big data analytics helps head off digital disruption and improve core operations.


Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT Innovation -- and how it's making an impact on people's lives.

Our next digital business transformation case study examines how loyalty management innovator Aimia is modernizing, consolidating, and standardizing its global IT infrastructure. As a result of rapid growth and myriad acquisitions, Montreal-based Aimia is in a leapfrog mode -- modernizing applications, consolidating data centers, and adopting industry standard platforms.

We'll now learn how improving end-user experiences and leveraging big data analytics helps IT organizations head off digital disruption and improve core operations and processes.

To describe how Aimia is entering a new era of strategic IT innovation, we're joined by André Hébert, Senior Vice President of Technology at Aimia in Montreal. Welcome, André.
Hébert: Thank you.

Gardner: What are some of the major drivers that have made you seek a common IT strategy? And tell us about your organization and why having a common approach is now so important.

Hébert: If you go back in time, Aimia grew through a series of acquisitions. We started as Aeroplan, Air Canada's frequent-flyer program, and decided to go into the loyalty space. That was the corporate strategy all along. We acquired two major companies, one in the UK and one that was US-based, which gave us a global footprint. As a result of these acquisitions, we ended up with quite a large IT footprint worldwide and wanted to look at ways of globalizing and consolidating it.

Gardner: For many people, when they think of a loyalty program, it's frequent flyer miles, perhaps points at a specific retail outlet, but this varies quite a bit market to market around the globe. How do you take something that's rather fractured as a business and make it a global enterprise?

Hébert: We've split the business into two different business units. The first one is around coalition loyalty. This is where Aimia actually runs the program. Good examples are Aeroplan in Canada or Nectar in the UK, where we own the currency, we operate the program, and basically manage all of the coalition partners. That's one side.

The other side is what we call our global loyalty solutions. This is where we run loyalty programs for other companies. Through our standard technology, we set up a technology footprint within the customer site or preferably in one of our data centers and we run the technology, but the program is often white-labeled, so Aimia's name doesn't appear anywhere. We run it for banks, retailers and many industry verticals.

Almost like money

Gardner: You mentioned the word currency, and as I think about it, loyalty points are almost like money -- it is currency -- it can be traded, and it can be put into other programs. Tell us about this idea. Are you operating almost like a bank or a virtual currency trader of some sort?

Hébert: You could say that the currency is like money. It is accumulated. If you look at our systems, they're very similar to bank-account systems. The debit and credit transactions mimic the accumulation and redemption transactions that our members do.
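As a way to picture that bank-account analogy, the minimal sketch below models a loyalty account as a ledger where accruals and redemptions are posted like credits and debits. It is purely illustrative and is not Aimia's actual data model:

```python
# A minimal sketch of the bank-ledger analogy: loyalty accumulation and
# redemption posted like credit and debit entries. Illustrative only;
# not Aimia's actual schema or system.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Transaction:
    kind: str      # "accrual" (credit) or "redemption" (debit)
    points: int

@dataclass
class LoyaltyAccount:
    member_id: str
    ledger: List[Transaction] = field(default_factory=list)

    def accrue(self, points: int) -> None:
        self.ledger.append(Transaction("accrual", points))

    def redeem(self, points: int) -> None:
        if points > self.balance():
            raise ValueError("insufficient points")
        self.ledger.append(Transaction("redemption", points))

    def balance(self) -> int:
        # Balance is the sum of credits minus debits, as in a bank account.
        return sum(t.points if t.kind == "accrual" else -t.points
                   for t in self.ledger)

acct = LoyaltyAccount("member-001")
acct.accrue(500)        # e.g., points earned on a flight
acct.redeem(200)        # e.g., points spent on a reward
print(acct.balance())   # 300
```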

Gardner: That's pretty important when it comes to transactions, making sure integration works among systems. Let's look at this from the perspective of your challenge. As you say, you came together through a lot of acquisitions. What's been your challenge from an IT perspective to allow your company to thrive in this digital economy, given that there is a transactional integrity issue, but also a lot of disparity in terms of the types of systems and heterogeneity in systems?

Hébert: Our biggest challenge was how large the technology footprint was. We still operate many dozens of data centers across the globe. The project with HPE is to consolidate all of our technology footprint into four Tier 3 data centers that are scattered across the globe to better serve our customers. Those will benefit from the best security standards and extremely robust data-center infrastructure. 

On the infrastructure side, it's all about simplifying, consolidating, virtualizing, and leveraging the cloud, but in a virtual private way, so that we also keep our data very secure. That's on the infra side.

On the application side, we probably have more applications than we have customers. One of the big drivers there is that we have a global product strategy. Several loyalty products have now been developed. We're slowly migrating all of our customers over to our new loyalty systems that we've created to simplify our application portfolios. We have a large number of applications today, and the plan is to try to consolidate all these applications into key products that we've been developing over the last few years.

Gardner: That’s quite a challenge. You're modernizing and consolidating applications. At the same time, you're consolidating and modernizing your infrastructure. It reminds me of what HPE did just a few years ago when it decided to split and to consolidate many data centers. Was that something that attracted you to HPE, that they have themselves gone through a similar activity?

Hébert: Yes, that is one of the reasons. We've shopped around for a partner that can help us in that space and we thought that HPE had the best credentials, the best offer for us to go forward. 

Virtual Private Cloud (VPC), a solution that they offer, is innovative, yet it is virtual and private. So, we feel that our customers' data will be significantly more secure than it would be just going to any public cloud.

Gardner: Other key issues for you are data privacy and security. Again, if this is like currency, if transactions are involved and you're also dealing with multiple markets, different regulatory agencies, and different regulatory environments, that's another complication. How is consolidating applications and modernizing infrastructure at the same time helping you to manage these compliance and data-protection issues?

Raising the bar

Hébert: The modernization and infrastructure consolidation is, in fact, helping greatly in continuing to secure data and meet ever more demanding security standards, such as PCI DSS 3.0. Through this process, we're going to raise the bar significantly on data privacy.

Gardner: André, a lot of organizations don't necessarily know how to start. There's so much to do when it comes to apps, data, infrastructure modernization and, in your case, moving to VPC. Do you have any thoughts about how to chunk that out, how to prioritize, or are you making this sort of a big bang approach, where you are going to do it all at once and try to do it as rapidly as possible? Do you have a philosophy about how to go about something so complex?

Hébert: We've actually scheduled the whole project. It's a three-year journey into the new HPE world. We decided to attack it by region, starting with Canada and the US -- North America. Then, we move on to Asia-Pacific, and the last phase of the project is Europe. We decided to go geographically.
The program is run centrally from Canada, but we have boots on the ground in all of those regions. HPE has taken the lead into the actual technical work. Aimia does the support work, providing documentation, helping with all of the intricacies of our systems and the infrastructure, but it's a co-led project, with HPE doing the heavy lifting.

Gardner: Something about costs comes to mind when you go standard. Sometimes, there are upfront costs -- you have to clear that hurdle -- but your long-term operating costs can be significantly lower. What is it about the cost structure? Is it the standardized infrastructure platforms, cheaper hardware, open source software, or all of the above? How do you factor this into a return on investment (ROI) type of equation?

Hébert: It’s all of the above. Because we're right in the middle of this project, it will allow us to standardize and evergreen a lot of our technology that was getting older. A lot of our servers were getting old. So, we're giving the infrastructure a shot in the arm as far as modernization goes.

From a VPC point of view, we're going to leverage this internal cloud much more significantly. From a CPU point of view, and from an infrastructure point of view, we're going to have significantly fewer physical servers than what we have today. It's all operated and run by HPE. So, all of the management, all of the ITO work is done by HPE, which means that we can focus on apps, because our secret sauce is in apps, not in infrastructure. Infrastructure is a necessary evil.

Gardner: That brings up another topic, DevOps. When you're developing, modernizing, or having a continuous-development process for your applications, if you have that cloud and infrastructure in place and it’s modern, that can allow you to do more with the development phase. Is that something you've been able to measure at all in terms of the ability to generate or update apps more rapidly?

Hébert: We're just dipping our toe into advanced DevOps, but definitely there are some benefits around that. We're currently focused on trying to get more value from that.

Gardner: When you think about ROI, there are, of course, those direct costs on infrastructure, but there are ancillary benefits in terms of agility, business innovation, and being able to come to market faster with new products and services. Is that something that is a big motivator for you and do you have anything to demonstrate yet in terms of how that could factor?

Relationship 2.0

Hébert: We're very much focused right now on what I would say is Relationship 1.0, but HPE was selected as a partner for their ability to innovate. They also are in a transition phase, as we all know, so while we're focused on getting the heavy lifting done, we're focusing on innovation and focusing on new projects with HPE. We actually call that Relationship 2.0.

Gardner: For others who are looking at similar issues -- consolidation, modernization, reducing costs over time, leveraging cloud models -- any words of advice now that you are into this journey as to how to best go about it or maybe things to avoid?

Hébert: When we first looked at this, we thought that we could do a lot of that consolidation work ourselves. Consolidating 42 data centers into 4 is a big job, and where HPE helps in that regard is that they bring the experience, they bring the teams, and they bring the focus to this. 

We probably could have done it ourselves. It probably would have cost more and it probably would have taken longer. One of the benefits that I also see is that HPE manages thousands and thousands of servers. With their ability to automate all of the server management, they've taken it to another level. As a small company, we couldn't afford to do all of the automation that they can afford to do across those thousands of servers.

Gardner: Before we close out, André, looking to the future -- two, three, four years out -- when you've gone through this process, when you've gotten those modern apps and they are running on virtual private clouds and you can take advantage of cloud models, where do you see this going next? 

Do you have some ideas about mobile applications, about different types of transactional capabilities, maybe getting more into the retail sector? How does this enable you to have even greater growth strategically as a company in a few years?

Hébert: If you start with the cloud, the world is about to see a very different cloud model. If you fast forward five years, there will be mega clouds, and everybody will be leveraging these clouds. Companies that actually purchase servers will be a thing of the past. 

When it comes to mobile, clearly Aimia's strategy around mobile is very focused. The world is going mobile. Most apps will require mobile support. If you look at analytics, we have a whole other business that focuses on analytics. Clearly, loyalty is all about making all this data make sense, and there's a ton of data out there. We have a business unit that specializes in big data and advanced analytics as it pertains to consumers, and clearly for us it is a very strategic area that we're investing in significantly.

Gardner: Getting all your i’s dotted and t's crossed in the infrastructure can pay huge dividends for years to come, especially, as you say, when you can focus more on the analytics, on the applications, on your business model, and less on server maintenance.

Hébert: That’s correct.

Gardner: I'm afraid we'll have to leave it there. We've been learning how loyalty management innovator Aimia is modernizing, consolidating, and standardizing its global IT infrastructure. We've heard how improving end-user experiences and using big data analytics helps head off digital disruption and improve core operations.

So please join me in thanking our guest, André Hébert, Senior Vice President of Technology at Aimia in Montreal. Thank you, André.
Hébert: Thank you very much.

Gardner: And I'd like to thank our audience as well for joining us for this Hewlett Packard Enterprise Voice of the Customer podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how improving end user experiences and using big data analytics helps head off digital disruption and improve core operations. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Tuesday, August 23, 2016

How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medicine and Entrepreneurship

Transcript of a discussion on how HudsonAlpha leverages modern IT infrastructure and big data analytics to power research projects as well as pioneering genomic medicine findings.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT Innovation -- and how it's making an impact on people's lives.

Our next IT infrastructure thought leadership case study explores how the HudsonAlpha Institute for Biotechnology engages in digital transformation for genomic research and healthcare paybacks.

We'll now hear how HudsonAlpha leverages modern IT infrastructure and big-data analytics to power a pioneering research project incubator and genomic medicine innovator.

To describe new possibilities for exploiting cutting-edge IT infrastructure and big data analytics for healthcare innovation, we're joined by Dr. Liz Worthey, Director of Software Development and Informatics at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. Welcome, Liz.

Dr. Liz Worthey: Thanks for inviting me.

Gardner: It seems to me that genomics research and IT have a lot in common. There's not much daylight between them -- two different types of technology, but highly interdependent. Have I got that right?
Worthey: Absolutely. It used to be that the IT infrastructure was fairly far away from the clinic or the research, but now they're so deeply intertwined that it necessitates many meetings a week between the leadership of both in order to make sure we get it right.

Gardner: And you have background in both. Maybe you can tell us a little bit about that.

Worthey: My background is primarily on the biology side, although I'm Director of Informatics and I've spent about 20 years working in the software-development and informatics side. I'm not IT Director, but I'm pretty IT savvy, because I've had to develop that skill set over the years. My undergraduate degree was in immunology, and since then, my focus has really been on genetics informatics and bioinformatics.

Gardner: Please describe what genetic informatics or genomic informatics is for our audience.

Worthey: Since 2003, when we received the first version of a human reference genome, there's been a large field involved in the task of extracting knowledge that can be used for society and health from genomic data.

A [human] genome is 3.2 billion nucleotides in length, and in there, there's a lot of really useful information. There's information about which diseases that individual may be more likely to get and which diseases they will get.

It’s also information about which drugs they should and shouldn't take, and information about which types of procedures and surveillance they should have, such as when they should have colonoscopies. And so, the clinical aspect of genomics is really about developing the analytical capabilities to extract that data in real time so that we can use it to help an individual patient.

On top of that, there's also a lot of research. A lot of that is in large-scale studies across hundreds of thousands of individuals to look for signals that are more difficult to extract from a single genome. Genomics, clinical genomics, is all of that together.

Parallel trajectory

Gardner: Where is the societal change potential in terms of what we can do with this information and these technologies?

Worthey: Genomics has existed for maybe 20 years, but the vast majority of that was the first step. Over the last six years, we've taken maybe the second or third step in a journey that’s thousands of steps long.

We're right on the edge. We didn’t used to be able to do this, because we didn't have any data. We didn't have the capability to sequence a genome cheaply enough to sequence lots. We also didn't have the storage capabilities to store that data, even if we could produce it, and we certainly didn't have enough compute to do the analysis, infrastructure-wise. On top of that, we didn’t actually have the analytical know-how or capabilities either. All of that is really coalescing at the same time.

As we've been doing genomics, and that technology and the sequencing side have come up, the compute and the computing technologies have come up at the same time. They're feeding each other, and genomics is now driving IT to think about things in a very different way.

Gardner: Let's dive into that a little bit. What are the hurdles technologically for getting to where you want to be, and how do you customize that or need to customize that, for your particular requirements?

Worthey: There are a number of hurdles. Certainly, there are simpler hurdles that we have to get past, like storage, and storage tied with compression. How do you compress that data so that you can store millions of genomes at a price that's affordable?
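Some rough arithmetic shows why compression is such a central hurdle. In the sketch below, the per-genome sizes and compression levels are broad assumptions chosen only to show the scale; real pipelines store aligned reads and variant calls rather than raw bases:

```python
# Back-of-envelope storage math for "millions of genomes". The 3.2 billion
# base pairs per genome is mentioned earlier in the discussion; the
# bytes-per-genome figures below are rough assumptions for scale only.

GENOMES = 1_000_000
RAW_BYTES_PER_GENOME = 100e9     # ~100 GB of raw sequence data (assumed)

for label, bytes_per_genome in [
    ("raw reads (~100 GB)",        RAW_BYTES_PER_GENOME),
    ("compressed reads (~30 GB)",  30e9),
    ("variant calls only (~1 GB)", 1e9),
]:
    total_pb = GENOMES * bytes_per_genome / 1e15
    print(f"{label:<28} -> ~{total_pb:,.0f} PB for {GENOMES:,} genomes")
```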

A bigger hurdle is the ability to query information across a lot of disparate sites. When we think about genomic medicine, one of the things that we really want to do is share data between institutions that are geographically diverse. And the data that we want to share is millions of data points, each of which has hundreds or thousands of annotations or curations.

Those are fairly complex queries, even when you're doing it in one site, but in order to really change the practice of medicine, we have to be able to do that regionally, nationally, and globally. So, the analytics questions there are large.

We have 3.2 billion data points for each individual. The data is quite broad, but it’s also pretty deep. One of the big problems is that we don’t have all the data that we need to do genomic medicine. There's going to be data mining -- generate the data, form a hypothesis, look at the data, see what you get, come back with a new hypothesis, and so on.

Finally, one of the problems that we have is that a lot of the algorithms you might use only exist in the brains of MDs, other clinical folks, or researchers. There is really a lot of human-computer interaction work to be done so that we can extract that knowledge.

There are lots of problems. Another big problem is that we really want to put this knowledge in the hands of the doctor while they have seven minutes to see the patient. So, it’s also delivery of answers at that point in time, and the ability to query the data by the person who is doing the analysis, which ideally will be an MD.

Cloud technology

Gardner: Interestingly, the emergence of cloud methods and technology over the past five or 10 years would address some of those issues about distributing the data effectively -- and also perhaps getting actionable intelligence to a physician in an actual critical-care environment. How important is cloud to this process and what sort of infrastructure would be optimal for the types of tasks that you have in mind?

Worthey: If you had asked me that question two years ago, on the genomic medicine side, I would have said that cloud isn't really part of the picture. It wasn't part of the picture for anything other than business reasons. There were a lot of questions around privacy and sharing of healthcare information, and hospitals didn’t like the idea.

They were very reluctant to move to the cloud. Over the last two years, that has started to change. Enough of them had to decide to do it before everybody would view it as something that was permissible.

Cloud is absolutely necessary in many ways, because we have periods where lots of data has to be computed and analytics have to be run. Then, we have periods where new information is coming off the sequencer. So, it's that perfect crest and trough.

If you don't have the ability to deal with that sort of fluctuation, if you buy a certain amount of hardware and you only have it available in-house, your pipeline becomes impacted by the crests and then often sits idle for a long time.
But it’s also important to have stuff in-house, because sometimes, you want to do things in a different way. Sometimes, you want to do things in a more secure manner.

Our workload is kind of a poster child for many of the new technologies that are coming out that address both of those needs -- they allow you to run things in-house and also allow you to run the same jobs on the same data in the cloud. So, it's key.

Gardner: That brings me to the next question about this concept of genomics as a service or a platform to support genomics as a service. How do you envision that and how might that come about?

Worthey: When we think about the infrastructure to support that, it has to be something flexible and it has to be provided by organizations that are able to move rapidly, because the field is moving really quickly.

It has to be infrastructure that supports this hypothesis-driven research, and it has to be infrastructure that can deal with these huge datasets. Much of the data is ordered, organized, and well-structured, but because it's healthcare, a lot of the information that we use as part of the interpretation phase of genomic medicine is completely unstructured. There needs to be support for extraction of data from silos.

My dream is that the people who provide these technologies will also help us deal with some of these boundaries, the policy boundaries, to sharing data, because that’s what we need to do for this to become routine.

Data and policy

Gardner: We've seen some of that when it comes to other forms of data, perhaps in the financial sector. More and more, we're seeing tokenization, authentication, and encryption, where data can exist for a period of time with a certain policy attached to it, and then something happens to the data as a result of that policy. Is that what you're referring to?

Worthey: Absolutely. It's really interesting to come to a meeting like HPE Discover because you get to see what everybody else is doing in different fields. Much of the things that people in my field have regarded as very difficult are actually not that hard at all; they happen all the time in other industries.

A lot of this -- the encryption, the encrypted data sharing, the ability to set access controls in a particular way that only lasts for a certain amount of time for a particular set of users -- seems complex, but it happens all the time in other fields. A big part of this is talking to people who have a lot of experience in a regulated environment -- just not this regulated environment -- learning the language they use to talk to the people who set policy there, transferring that to our policy makers, and ideally getting them together to talk to one another.

Gardner: Liz, you mentioned the interest in getting your requirements to the technology vendors, cloud providers, and network providers. Is that under way? Is that something that's yet to happen? Where is the synergy between the genomic research community and the technology vendor and platform provider community?

Worthey: This is happening fast. For genomics, there's been a shift in the volume of genomic data that we can produce with some new sequencing technology that's coming. If you're a provider of hardware or solutions to deal with big data, look at genomics, because we're probably going to overtake many of those other industries in terms of the volume and complexity of the data that we have.

The reason that that's really interesting is because then you get invited to come and talk at forums, where there's lots of technology companies and you make them aware of the work that has to be done in the field of medicine, and in genomic research, and then you can start having those discussions.

A lot of the things that those companies are already doing, the use cases, are similar and maybe need some refinement, but a lot of that capability is already there.

Gardner: It's interesting that you’ve become sort of the “New York” of use cases. If you can make it there, you can make it anywhere. In other words, if we can solve this genomic data issue and use the cloud fruitfully to distribute and gather -- and then control and monitor the data as to where it should be under what circumstances -- we can do just about anything.

Correct me if I am wrong, though. We're using data in the genomic sense for population groups. We're winnowing those groups down into particular diseases. How farfetched is it to think about individuals having their own genomic database that would follow them like an authenticated human design? Is that completely out of the bounds? How far would that possibly be?

Technology is there

Worthey: I’ve had my genome sequenced, and it’s accessible. I could pick it up and look at it on the tools that I developed through my phone sitting here on the table. In terms of the ability to do that, a lot of that technology is already here.

The number of people that are being sequenced is increasing rapidly. We're already using genomics to make diagnosis in patients and to understand their drug interactions. So, we are here.

One of the things that we are talking about just now is, at what point in a person’s life should you sequence their genome. I and a number of other people in the field believe that that is earlier, rather than later, before they get sick. Then, we have that information to use when they get those first symptoms. You are not waiting until they're really ill before you do that.

I can’t imagine a future where that's not what's going to happen, and I don’t think that future is too far away. We're going to see it in our lifetimes, and our children are definitely going to see it in theirs.

Gardner: The inhibitors, though, would be more of an ethical nature, not a technological nature.

Worthey: And policy, and society; the society impact of this is huge.

The data that we already have, clinical information, is really for that one person, but your genome is shared among your family, even distant relatives that you’ve never met. So, when we think about this, there are many very hard ethical questions that we have to think about. There are lots of experts that are working on that, but we can’t let that get in the way of progress. We have to do it. We just have to make sure we do it right.

Gardner: To come back down a little bit toward the technology side of things, seeing as so much progress has been made and that there is the tight relationship between information technology and some of the fantastic things that can happen with the proper knowledge around genomic information, can you describe the infrastructure you have in place? What’s working? What do you use for big-data infrastructure, and cloud or hybrid cloud as well?

Worthey: I'm not on the IT side, but I can tell you about the other side, and I can talk a little bit about the IT side as well. In terms of the technologies that we use to store all of that variant information, we're currently using Hadoop and MongoDB. We finished our proof of concept with HPE, looking at their Vertica solution.

We have to work out what the next steps might be after our proof of concept. Certainly, we're very interested in looking at the solutions that they have here. They fit with our needs. The issue being addressed on that side is lots of variants and complex queries that you need to answer really fast.
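To make the "lots of variants, complex queries" point concrete, here is a toy example of the kind of analytic query involved -- find rare, protein-altering variants in a single gene across samples. SQLite stands in so the example is self-contained and runnable; the table layout is an assumption for illustration, not HudsonAlpha's schema, and the production systems named above are Hadoop, MongoDB, and the Vertica POC:

```python
# Toy illustration of a variant query: which samples carry rare,
# protein-altering variants in a gene of interest? sqlite3 is used only
# to keep the example self-contained; the schema is hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE variants (
        sample_id TEXT, gene TEXT, position INTEGER,
        allele_frequency REAL, consequence TEXT
    )
""")
conn.executemany(
    "INSERT INTO variants VALUES (?, ?, ?, ?, ?)",
    [
        ("S1", "BRCA2", 32338000, 0.0002, "missense"),
        ("S1", "TP53",  7676000,  0.2100, "synonymous"),
        ("S2", "BRCA2", 32338000, 0.0002, "missense"),
        ("S3", "BRCA2", 32340500, 0.0001, "stop_gained"),
    ],
)

# "Which samples carry rare (AF < 0.1%), protein-altering BRCA2 variants?"
rows = conn.execute("""
    SELECT sample_id, position, consequence
    FROM variants
    WHERE gene = 'BRCA2'
      AND allele_frequency < 0.001
      AND consequence != 'synonymous'
    ORDER BY position
""").fetchall()
for row in rows:
    print(row)
```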

On the other side, one of the technological hurdles that we have to meet is the unstructured data. We have electronic health record (EHR) information that’s coming in. We want to hook up to those EHRs and we want to use systems to process that data to make it organized, so that we can use it for the interpretation part.

In-house solution

We developed in-house solutions that we're using right now that allow humans to come in, look at that data, and select the terms from it. So, you'd select disease terms. And then, we have in-house solutions to map them to the genomic side. We're looking at things like HPE's IDOL as a proof of concept (POC) on that side. We're talking to some EHR companies about how to hook up the EHR, through those solutions, to our software to make it a seamless product.

In terms of hardware, we do have HPE hardware in-house. I think we have 12 petabytes of their storage. We also have DataDirect Networks hardware with a general parallel file system solution. We even have graphics processors for some of the analysis that we do. We have a large bank of GPUs, because in some cases they're much faster for certain types of problems that we have to solve. So we're pretty IT-rich, with a lot of heavy investment on the IT side.

Gardner: And cloud -- any preference to the topology that works for you architecturally for cloud, or is that still something you are toying with?

Worthey: We're currently looking at three different solutions that are all cloud solutions. We not only do the research and the clinical, but we also have a lab that produces lots of data for other customers, a lab that produces genomic data as a service.

They have a challenge of getting that amount of data returned to customers in a timely fashion. So, there are solutions that we're looking at there. There are also, as we talked about at the start, solutions to help us with the inflow of data coming off the sequencers and the compute -- so we're looking at a number of different cloud-based solutions to solve some of those challenges.

Gardner: Before we close, we've talked about healthcare and population impacts, but I should think there's also a commercial aspect to this. That kind of information will lend itself to entrepreneurial activities -- products and services in great demand in the marketplace. Is that something you're involved with as well, and wouldn't that help foot the bill for some of these costly IT infrastructure investments?

Worthey: One of the ways that HudsonAlpha Institute was set up was just that model. We have a research, not-for-profit side, but we also have a number of affiliate companies that are for-profit, where intellectual property and ideas can go across to that site and be used to generate revenue that fund the research and keep us moving and be on the cutting-edge.

We do have a services lab that does genomic sequencing in analytics. You can order that from them. We also service a lot of people who have government contracts for this type of work. And then, we have an entity called Envision Genomics. For disclosure, I'm one of founders of that entity. It’s focused on empowering people to do genomic medicine and working with lots of different solution providers to get genomic medicine being done everywhere it’s applicable.

Gardner: Well, it's been a fascinating discussion. Thank you for sharing that, and I look forward to tracking the close relationship between IT and genomics, because as you say, they are self-supporting, reinforcing, and both very powerful in their impacts on society.
We've been learning how the HudsonAlpha Institute for Biotechnology engages in digital transformation for genomic research and for healthcare paybacks. We’ve heard how HudsonAlpha leverages modern IT and cloud infrastructures and big data analytics to power research projects as well as pioneering medicine uses and companies.

So please join me in thanking our guest, Dr. Liz Worthey, Director of Software Development and Informatics at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. Thank you so much, Liz.

Worthey: Thank you very much. Thanks for inviting me.

Gardner: And I will also thank our audience as well for joining us for this Hewlett Packard Enterprise Voice of the Customer Podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how HudsonAlpha leverages modern IT infrastructure and big data analytics to power research projects as well as pioneering genomic medicine findings. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.
