Editor’s note: This is part two of a two-part interview with David Heaney of Mass General Brigham about AI and cybersecurity. To read part one, click here.
In part one of this in-depth interview, David Heaney, chief information security officer at Mass General Brigham, discussed the defensive and offensive uses of artificial intelligence in healthcare. He said that understanding the environment, knowing where controls are deployed, and getting the basics right will be crucial wherever AI is involved.
Today, Heaney discusses best practices healthcare CISOs and CIOs can adopt to secure the use of AI and how his team applies them, how he keeps his team up to date on securing AI and using AI for security, the human element of AI and cybersecurity, and the types of AI his team uses to combat cyberattacks.
Q. What are some best practices that CISOs and CIOs in the healthcare industry can adopt to secure the use of AI, and how are you and your team leveraging them at Mass General Brigham?
A. It’s important to frame this question with the understanding that these AI capabilities will bring incredible change to many things in the industry, including how we care for patients and how we discover new approaches.
The key is: how do we support it, and how do we protect it? As I said in part one, it’s really important to get the basics right. So if we have AI-driven services that use our data or run in our environment, the same requirements apply for risk assessments, business associate agreements and other legal contracts as for non-AI services.
Because at one level you’re talking about just another app, and it needs to be controlled like any other app in your environment, including limiting the use of unapproved applications. That said, there are AI-specific considerations. A few come to mind. There are certainly additional considerations around data usage, beyond the standard legal agreements I mentioned earlier.
For example, do you want your organization’s data to be used for downstream training of a vendor’s AI model? The security of the AI model itself also is important. Organizations should consider options for continuous validation of the model to ensure accurate output in all scenarios. This becomes part of the AI governance discussed in part one.
And there’s also adversarial testing of models: if you feed a model manipulated or malicious input, does the output change?
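To make that concrete, here is a minimal Python sketch of what a perturbation-style consistency check could look like. The `classify` function is a hypothetical stand-in for whatever model is under review; a real validation suite would be far more thorough.

```python
# Minimal sketch of adversarial/consistency testing for a text classifier.
# `classify` is a placeholder for the model actually being validated.
import random

def classify(text: str) -> str:
    """Placeholder for the model under review."""
    return "benign" if "normal" in text else "suspicious"

def perturb(text: str) -> str:
    """Apply a trivial perturbation: swap two adjacent characters."""
    if len(text) < 2:
        return text
    i = random.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def consistency_check(samples: list[str], trials: int = 20) -> list[str]:
    """Flag inputs whose label flips under small input perturbations."""
    unstable = []
    for text in samples:
        baseline = classify(text)
        if any(classify(perturb(text)) != baseline for _ in range(trials)):
            unstable.append(text)
    return unstable

if __name__ == "__main__":
    flagged = consistency_check(["normal login from known host",
                                 "powershell -enc JAB..."])
    print("Unstable under perturbation:", flagged)
```

Checks like this could feed the continuous-validation process Heaney describes as part of AI governance.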
And what really changes in terms of importance in this environment is how easy many of these tools are to deploy. For example, there are note-taking services like Otter AI and Read AI, as well as many others. These services are incentivized to make adoption simple and smooth, and they’ve done a great job at that.
The concerns around the use of these services, what data they have access to and so on remain. But the ease of end-user adoption, combined with, frankly, how good these applications are, makes it important to focus on how you onboard applications, especially AI-driven ones.
Q. How have you been strengthening your team around security with AI and security for AI? What’s the human element here?
A. That’s a big thing. One of the most important values for my security team is curiosity. I would say it’s the single skill behind everything we do in cybersecurity. Curiosity is when you see something that’s a little bit off and you ask, “Why did that happen?” and you start investigating.
This is where almost every improvement we make in this industry begins. So a big part of the answer is having inquisitive team members who get excited about this, want to learn for themselves, and then go out and really try some of these tools.
I try to lead by example here by sharing how different tools can make your job easier. But there is no substitute for curiosity. On the digital team at MGB, we try to dedicate one day per month to learning, and we provide access to a range of training services with relevant content. But the challenge is that technology changes so fast that training can’t keep up.
There’s nothing better than getting out and playing with the technology. And one of my favorite uses of generative AI is for learning. One of the things I do is use prompts like, “Create a table of contents for a book titled X,” where X is the topic you want to study, and I also provide a bit of information about who the author is and what the purpose of the book is.
This creates a great outline for learning about that topic. You can then ask your AI friend, “Hey, can you explain chapter one in detail? What does that mean?” or go to other sources or forums to find related content.
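For readers who want to try the technique, here is a minimal sketch of that prompt pattern in Python. The title, author persona and purpose values are invented placeholders; the rendered prompt can be pasted into any generative AI assistant.

```python
# Sketch of the "table of contents" learning prompt Heaney describes.
# All placeholder values below are illustrative, not from the interview.

PROMPT_TEMPLATE = (
    "Create a table of contents for a book titled '{title}'. "
    "The author is {author_persona}. "
    "The purpose of the book is {purpose}."
)

prompt = PROMPT_TEMPLATE.format(
    title="Securing AI in Healthcare",                        # the topic to study
    author_persona="a hospital CISO with 20 years in the field",
    purpose="to teach security leaders how to govern AI tools",
)

print(prompt)
# Follow-up, per the interview: "Can you explain chapter 1 in detail?"
```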
Q. Without giving away any secrets, what types of AI are you using to combat cyberattacks? Can you explain in broad terms how these types of AI work and why you like them?
A. MGB’s overall digital strategy is focused on leveraging our technology vendors’ platforms. Picking up a bit from the vendor question in part one, our focus is working with these companies to develop the most valuable capabilities, many of which will be AI-driven.
So as not to give away the secret sauce, so to speak, here’s what it looks like, at least in a general sense: Our endpoint protection tools use a variety of AI algorithms to identify potentially malicious behavior. Then the logs from all those endpoints are sent to a central collection point, where we combine rule-based and AI-based analytics to look for broader trends: Are there trends that indicate elevated risk, not just on one system, but across the entire environment?
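As a hedged, generic illustration of that rule-plus-model combination – not MGB’s actual pipeline – here is a Python sketch that scores centralized log events with deterministic rules first and a simple anomaly model second. The features and thresholds are invented.

```python
# Sketch of combining rule-based and AI-based analytics on centralized
# endpoint logs. Generic illustration only; requires scikit-learn.
from sklearn.ensemble import IsolationForest

# Each event: (failed_logins_last_hour, bytes_out_mb, distinct_hosts_touched)
events = [
    (0, 1.2, 1), (1, 0.8, 1), (0, 2.0, 2), (2, 1.5, 1),
    (0, 0.5, 1), (1, 1.0, 2), (45, 300.0, 30),  # last event looks bad
]

def rule_hits(event) -> list[str]:
    """Deterministic rules catch known-bad patterns."""
    failed, mb_out, hosts = event
    hits = []
    if failed > 10:
        hits.append("excessive failed logins")
    if mb_out > 100:
        hits.append("large outbound transfer")
    if hosts > 20:
        hits.append("touching unusually many hosts")
    return hits

# The anomaly model catches broader trends the rules don't encode.
model = IsolationForest(contamination=0.15, random_state=42).fit(events)
flags = model.predict(events)  # -1 = anomaly, 1 = normal

for event, flag in zip(events, flags):
    reasons = rule_hits(event)
    if flag == -1:
        reasons.append("statistical outlier across the environment")
    if reasons:
        print(event, "->", reasons)
```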
We also have an identity governance suite, the tool we use to grant and remove access within the environment. It has a number of built-in capabilities to identify potential risk: looking at combinations of access that may already be in place, or at access requests as they come in, so that risky access isn’t granted in the first place.
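As a similarly hedged sketch of the access-risk checks such a suite might run – the entitlement names and “toxic combinations” below are invented for illustration – consider:

```python
# Sketch of "toxic combination" checks an identity governance tool
# might perform. Entitlements and rules are invented examples.

TOXIC_COMBINATIONS = [
    {"create_vendor", "approve_payment"},            # separation-of-duties risk
    {"prescribe_medication", "dispense_medication"},
    {"modify_audit_log", "security_admin"},
]

def access_risks(current: set[str], requested: str) -> list[str]:
    """Flag a request that would complete a risky combination of access."""
    proposed = current | {requested}
    return [
        " + ".join(sorted(combo))
        for combo in TOXIC_COMBINATIONS
        if combo <= proposed and not combo <= current
    ]

if __name__ == "__main__":
    user_access = {"create_vendor", "read_reports"}
    risks = access_risks(user_access, "approve_payment")
    if risks:
        print("Deny or escalate; request completes toxic combination:", risks)
```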
So that’s the platform itself and the technology built into it. But going beyond that, and back to how we can use generative AI in some of these areas, we use it to speed up all kinds of tasks we used to do manually.
The team has saved an enormous amount of time, more than we can quantify. Generative AI is used to create custom scripts for triage, forensics and system remediation. It’s not perfect – the AI gets you about 80% of the way there – but then the analyst finalizes the script and gets the work done much faster than writing it from scratch.
Similarly, we use some of these AI tools to write queries that feed into other tools. Providing access to them helps junior analysts use a range of our other technologies more effectively and helps them gain skills more quickly.
Our senior analysts get the same efficiency boost. They already know how to do a lot of these things, but starting from 80% is always better than starting from scratch.
In general, I call this the Overly Enthusiastic Intern: you can ask this tool anything, and it will give you answers ranging from a very good starting point to maybe a great, complete answer – but you should never use that answer until you’ve checked and completed it yourself.
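To make that check-and-complete workflow concrete, here is a minimal Python sketch of the “AI drafts, analyst finishes” pattern. The `draft_query` function is a placeholder for a real generative AI call, and the canned query it returns is purely illustrative.

```python
# Sketch of the "AI drafts, analyst finishes" workflow for query writing.
# `draft_query` stands in for a generative AI call; nothing runs against
# production systems without human sign-off.

def draft_query(request: str) -> str:
    """Placeholder for an LLM call that drafts a search query."""
    # A real implementation would send `request` to a generative AI API.
    return ("index=endpoint sourcetype=auth action=failure "
            "| stats count by user, src_ip | where count > 10")

def analyst_review(draft: str) -> str:
    """The human-in-the-loop step: show the draft, accept edits."""
    print("AI draft (roughly the 80% starting point):\n", draft)
    edited = input("Edit the query, or press Enter to accept: ").strip()
    return edited or draft

if __name__ == "__main__":
    draft = draft_query("find accounts with repeated failed logins")
    final = analyst_review(draft)
    print("Final query ready to run:", final)
```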
To see the video of this interview, which includes bonus content not included in this story, click here.
Editor’s note: This is the tenth and final installment in our feature series in which healthcare IT industry leaders discuss the use of artificial intelligence. Check out our other articles.
Follow Bill’s HIT articles on LinkedIn: Bill Siwicki
Email: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media.