Stan and Clarence chat with Dr. Jigar Patel about the growing use of artificial intelligence (AI) in healthcare. Dr. Patel serves as Senior Director of Product Management for Healthcare at Oracle. Throughout his career, he has developed a deep understanding of the technical needs of healthcare, including AI-based products. Dr. Patel is especially passionate about using electronic health records (EHR) and associated technologies to benchmark and improve outcomes across all medical specialties and venues of care. Listen along as Dr. Patel shares how AI is shaping modern healthcare.
Join the conversation at healthchatterpodcast.com
Brought to you in support of Hue-MAN, who is Creating Healthy Communities through Innovative Partnerships. More about their work can be found at http://huemanpartnership.org/
Hello, everybody. Welcome to Health Chatter. Today's episode is on artificial intelligence, which is becoming quite a complicated issue in a variety of venues, but certainly has strong implications in the healthcare arena. We have a great guest with us. We'll get to that in just a second.
I want to, first of all, thank our illustrious crew, Maddie Levine-Wolfe, Aaron Collins, Deandra Howard, and Sheridan Nygaard, who do wonderful behind-the-scenes work for us, providing Clarence and me with some good background research and ideas to talk about on all our shows. So thank you. Thank you a lot to you guys. Sheridan also helps us with our marketing.
And then Matthew Campbell is our production manager. He makes sure that these shows get out to you, the listening audience, in a crisp, clear way. So thank you to everybody. Then, of course, there's Clarence, where we do this hand-in-hand. And we've realized that, boy, we know a lot of people in the healthcare arena. So we've had a lot of guests on our shows.
And it's been a wonderful, wonderful experience. So thank you, Clarence. You're a good voice to the healthcare arena, and thanks for being with us. Then in addition, Hue-MAN Partnership is our sponsor for these shows, a great community engagement group in Minnesota. And they are involved in a lot of great health-related issues at the community level.
We thank them dearly for being our sponsor for these shows. You can check them out at huemanpartnership.org. So with that, we're going to get into artificial intelligence. And we've got a great guest with us today who actually came to my attention through a colleague of ours, Arkel Georgiou, and connections through Oracle.
And apparently, Dr. Patel's team owns Oracle's AI strategy, and he can talk about that a little bit. But I'll let him introduce himself, and then we'll get going. Dr. Patel.
Yes, thanks, Stan. Thanks, Clarence. Thanks, team, for having me. My name's Dr. Jigar Patel, as Stan said. I've been at Cerner, now Oracle, 16 plus years. And I started as a pathologist. Transfusion medicine was my subspecialty in clinical practice. I was at the University of Kansas Medical Center before joining Cerner. Kind of lucky happenstance in my career. I was in Kansas City.
Cerner was based in Kansas City. I have an engineering undergraduate, so I was kind of, and as a pathologist, always involved in informatics. In most of my time at Cerner, I was on the client side. I did sales, implementations, the whole nine yards of client interactions, led various groups from a chief medical officer responsibility
And then eight years ago, I finally dove into product management and joined a team that is composed entirely, except for myself, of legacy Oracle people. So I was a stranger in a strange land. I was the only clinician in the group. And I was the only legacy Cerner guy in the group.
So that had a bunch of different implications, but then got to dive in and understand artificial intelligence and our strategies going forward from a cloud delivery perspective, and then how we're going to bring it to healthcare specifically. So very excited. I talk about this topic all day, every day. So I love talking about it and happy to be here and talking to you folks.
Thank you. We really greatly appreciate you being on Health Chatter with us today. So let's kick this off by first starting out. There's been a lot of chatter about artificial intelligence. And frankly, I don't think many people even know what it is or the logistics behind it overall. So let's start there. So we all have kind of a common denominator as we talk about this.
What exactly is artificial intelligence?
Yeah, artificial intelligence is a machine mimicking human capabilities. One simple example of how to think about that: how do I understand someone, say you, Stan, talking to me? That's speech AI. How do I take a conversation or voice and turn it into digital text? That's one example. Once I've got that digital text, I can apply a language service: how do I understand that text in some way?
Other things I would refer to as pure AI services are things like vision. How do I take an image or a video and perceive it? How does a computer perceive it in a way that's useful and interprets like a human would?
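The chaining Dr. Patel describes, where speech becomes text and the text is then handed to a language service, can be sketched as a simple pipeline. This is a toy illustration only: `transcribe` and `interpret` are hypothetical stand-ins for a real speech-to-text service and a real language service, not any vendor's API.

```python
# Toy sketch of composing "pure" AI services into a pipeline.
# transcribe() and interpret() are hypothetical stand-ins, not real APIs.

def transcribe(audio: str) -> str:
    """Stand-in for a speech-to-text service (here: identity on a string)."""
    return audio

def interpret(text: str) -> dict:
    """Stand-in for a language service: naive keyword 'understanding'."""
    intents = {"refill": "medication_refill", "appointment": "scheduling"}
    for keyword, intent in intents.items():
        if keyword in text.lower():
            return {"text": text, "intent": intent}
    return {"text": text, "intent": "unknown"}

def pipeline(audio: str) -> dict:
    # The power comes from chaining services: speech -> text -> interpretation.
    return interpret(transcribe(audio))

result = pipeline("I need a refill of my blood pressure medication")
print(result["intent"])  # medication_refill
```

The point of the sketch is the composition: each stage alone is a "pure" service, and the value appears when the output of one feeds the next.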
So is it useful at the individual level or is it more useful or as useful at the professional level?
It's both, I think. There are lots of people that are starting to organize their lives with it. And the AI services I refer to are, like I said, pure. The real power is when you start to aggregate them together. If you take speech and language and generative AI, and I say, okay, I understand what you're saying to me, I interpret that thing.
And then I create content from it using a generative service. That's when you start to get real power. Another simple example is document understanding. Using a vision service, I understand what's on a page. I take the text off it, turn it into language, and then interpret and codify it in a way that can be reused. Now, individually, I was thinking about this this morning.
I was thinking, hey, why am I not using AI services in my own organization of myself and augmenting me and automating some of me in a way that's useful? And so I have to sort through that from a personal philosophical level first.
But then on the professional side, it's useful to a professional, but then it can be useful to larger and larger groups as you think about the density of data and the amount of data, and how you look at those things and understand those things. We've talked about this in analytics a lot. How do I roll up dashboards and those sorts of things?
The same concepts apply to AI. So I think, to your question, Stan, it can apply at a very human, individual level, all the way through to massive organizations and how they organize operations and other aspects of their business as well. All right, Clarence.
Yeah, Dr. Patel, thank you for that. You know, Stan, you started off the conversation. Many people, when they think about AI, they think about Westworld or they think about some other kind of movie that they've seen. It's not necessarily a positive one because, you know, the AI robot, whatever, does something really interesting. So my question to you is this.
How do we explain this in a way into the community where they see the real value of AI? We're going to use it, okay? But there's always that underlying fear that this is something that's going to take over. So how do you address that?
Yeah, the joke I have is Skynet is here and it's coming. So watch out for your Terminators. They're around the corner. I have a friend who, when she talks to her Alexa, says please and thank you, because when the AI overlords come, she wants to be thought of in a good way. She wants to be polite now so they like her later.
So it's definitely there. There's a lot to think about from a safety perspective, putting guardrails in place. I think some of what you've heard from the Elon Musks of the world and others is that government needs to step in and organize this and keep it in the box, so to speak, because unfettered, humans will be humans and push the limits on all of it.
In healthcare specifically, there's real jeopardy when you think about hallucinations, bias, and other things that can be introduced, not necessarily intentionally, but unintentionally, right? That could lead to misdiagnoses, could lead to the wrong treatment plan. So I, in particular, when I talk to clinicians about this, I say, let's think about the automation first.
How do I make things useful to you today that you validate and know, right? Creating a note out of other text that may be in the chart is an example. The clinician is still responsible for the validity of that information. As soon as I start making recommendations on treatment, that's where we get into funnier territory, right?
That's when you start to get into jeopardy, when you think about safety, when you think about guidance. The FDA does this for medical devices, right? And so it feels like an inevitability that government, the FDA in the healthcare space, will have some oversight like they do for medical devices or biologics, et cetera, in making sure they're valid and useful.
Now, on the counter to that: a colleague of mine that I've known for a long time, who was the chief medical information officer of a very large health system in America, says 70% of what we do in primary care is known. Let the computer help automate that so that I can use my physician brain on the other, harder 30%.
So the advantage to the clinician there is that they can unburden themselves of the stuff they don't necessarily need to recall, which makes their day easier, and then really apply their intellect and that knowledge and those years and years of training to those other things. This is going to be a totally roundabout answer, I apologize.
But when you think about the evolution from the end point being safety and those sorts of things, it's going to be a gradual evolution to that problem. And we've got to be very cognizant along the way of how we're doing that and make sure the human stays in the loop.
And their intellect actually applies. I'm most fearful of the human just wiping their hands of it and letting the AI just do things for them. And there are some that do that already, and so that makes me fearful for us.
But let me ask a couple. You know, it's kind of like, I'm trying to put my head around the idea: if someone knows nothing about
artificial intelligence, and then all of a sudden, we're kind of thrust into this thematic chaos of it all and complexity of it all. So let's start out with an individual from a healthcare perspective, all right? So let's just say you're diagnosed, an individual is diagnosed with a particular chronic disease. How is it in that moment
that they might be able to utilize artificial intelligence to help them?
Yep. One of the first ways: artificial intelligence, and generative AI in particular, is good at doing summarizations. It takes a lot of different sources and pulls them together in a way that provides a breadth and depth of information that a static source might not have.
Now, the other thing that's interesting about generative AI, and ChatGPT in particular is very good at this, is that it can actually tune up or tune down the literacy level of the output. So take somebody that's graduate school educated, yourself as well, and say, okay, normal patient education comes at a fifth to eighth grade reading level.
For me, that's like, I don't even bother looking at it, right? Because I know way more than that. But if I can tune it up and have it give me more information at the level I'm going to understand, that's really quite powerful. And similarly, on the other side, one of the things doctors do well or badly is explaining to patients, right?
And can you tune it down to the patient's level? One of my colleagues here calls me the chief explaining officer, because I do a lot of explanations around this stuff. And I break it down, not to be condescending, obviously, but because I want to make sure people understand the basic building blocks of the concepts we're talking about.
And that's hard for physicians. We're trained in the fancy words we've learned, right? But when you're talking to a patient and you start throwing fancy words, they don't get it, right? You lose them quickly. And the good clinicians figure out a way to make it simpler, getting to the level the patient is at. AI can do that on the fly, right?
In a way that's unique and fast, that we can't imagine doing, and it can do it more uniformly. Similarly, there have been studies showing people actually think the responses out of AI are more empathetic than their providers'. So it gives you literacy, it gives you empathy, it gives you different things where you're like, wow, I didn't think about that at all.
Yeah. So it's... If I'm hearing you right, it's almost like an easy access tool in order to communicate at higher levels or at lower levels if you need to, depending upon who you're interacting with.
It can, absolutely. So it's very powerful that way. And there's ways to have it do things that take work from a human mind perspective. It can do instantaneously.
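The reading-level tuning just discussed can at least be measured. The Flesch-Kincaid grade formula below is the standard published one; the syllable counter is a crude vowel-group heuristic, so treat the output as a rough estimate, not a clinical readability score, and the two sample sentences are invented for illustration.

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of vowels (good enough for a sketch)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

plain = "Your heart pumps blood. High blood pressure makes it work too hard."
clinical = ("Hypertension increases myocardial workload and, if untreated, "
            "contributes to progressive cardiovascular deterioration.")
print(fk_grade(plain) < fk_grade(clinical))  # True
```

A generative model asked to "rewrite at a fifth-grade level" is, in effect, being asked to push this score down; the formula gives a way to check whether it did.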
So, you know, one of the things, and I appreciate this, one of the things, and again, you know, I read some things, and as a community member, I recently was looking at an article, and you use the term hallucination, okay? There was recently an article that showed some computer-generated bodies. And one of the concerns was that, you know, using...
AI, we were going to create a false perception for younger women about their bodies. And we're already working with body shaming. We're already talking about health and those kinds of things. What kind of, as a clinician,
What kind of conversations would you have with people about the use of AI and utilizing it for those types of things and not allowing it to create a false hallucination for you about how life really is?
Yeah, I mean, we're already, even without AI, we're already there, right? Social media, which is, there's a fair amount of AI in social media algorithms, machine learning sort of thing. We're already there. And so it behooves the public to understand the technology and how it can manipulate you without you realizing it, right? And there are companies that are using it to sell you things.
There are companies that are using it to grab your attention to things. It's the proverbial rabbit hole. AI can only make the rabbit hole worse. And if you're in a certain mindset, going and following the rabbit down the hole becomes easier and easier. We're already in a place where that's easy, right? It's going to get worse. So we have to educate people on the implications of this.
And this goes to the training of medical students, right, as a core example. I'm worried about giving AI to medical students, because they don't learn to think like doctors, which is really, really important, first and foremost. As an example, we have a technology where we can record a doctor-patient conversation and then create a note for the doctor.
I told my product management team: do not put this in the hands of medical students, because you're synthesizing something that their brain needs to hardwire before they become physicians. So it becomes problematic that way too, in that the hard work of becoming a doctor, or the hard work of being an expert in anything, can go away.
And that's detrimental to us as a society and as individuals. Sure. So let me
kind of follow up. There are a couple of themes that you brought out here. Training and education. Now, let me separate that a little bit. Let's talk about, you alluded to it, training for professionals. In this case, let's call it healthcare professionals. How is it that we do that? How is it that we really get the existing healthcare professionals up to speed?
How is it that we get newly trained healthcare professionals? And I don't care whether they're physicians, whether they're public health professionals, whether they're allied health professionals, all of us, how is it that we get them
trained in their schooling? Or how is it that we get them integrated if they haven't used it at all? I'm sure you can touch on this.
Yeah, it's a hard problem, right? Let's start with the seasoned people and work our way backwards. Okay. In, you know,
being an EMR electronic health record guy for most of my career, we had this problem with EMRs in understanding the technology. When I started, the joke was, there are some docs you're going to have to train to use a mouse. Exactly. That still exists. Right. Exactly. Not as much now, but 15 years ago? Absolutely. We had to worry about that. Yeah.
So it's going to be, I think it behooves every professional organization that's out there certifying their physicians on continuing medical education to have informatics and AI type conversations and training for those professionals. So it impacts them specifically from a trusted source that's understanding what they are.
It's more than just understanding the evolution of disease and the new tests and the new medications and those things. It's got to be this also. So on that front end, that has to be it. It has to be colleagues that are knowledgeable, CMIOs and others like myself, also talking to folks on shows like Health Chatter to give them the viewpoint, because I'm steeped in it every day.
So it's got to be multimodal in its approach. It's got to be, we got to blanket that across the board. Now, as we move down the spectrum from seasoned to younger individuals who have their MD, there's an advantage to helping them with things with AI, but they have to know AI is helping them. In a way that is different.
So the thing I tell people from a design perspective, which is really hard, is how do I let you know the validity of the thing I'm suggesting to you? Or how do I let you know this was created by AI? How do I give you indicators so you realize this is not another human, this is not something that was already there.
It was something that was created out of thin air, so to speak, and that's not entirely true. The knowledge and the exposure from a usability user experience perspective has to be there kind of in the workflow as well. Then as you think back into medical education, we still haven't cracked the nut on basic informatics education from a medical education perspective.
I, as a pathologist, it's part of pathology. Pathology was the first set of professionals along with radiologists that were using computers because of volume, because of those technologies. We had to be there. So I was taught that in my training. And I actually, in training residents in pathology, that was my job as well. I was the informatics guy. I was teaching them about informatics.
So we have to get back all the way into medical school and say, okay, here's your informatics course. Because Medicine is information, right? Treatment of patients is information. It has to be more than information. It has to go from data to information to knowledge in a way that's clear and open and clear cut to that person that's learning it. And they got to have the underpinning.
Now, going back even further, into, you know, before college, into high school and before. It needs to be baked in there too, right? We do have this thing in America where people don't like STEM and STEAM, right? STEM is core to this. You have to have a basic understanding of that way back when. So we have to push it all the way back to the very early years.
And I was in an airport last night and invariably you're walking through the airport and people are stuck on their phones and they're consumed by it. But they also, many of them are quite sophisticated and understand the technology. Many don't. And so it goes all the way back to that. So it's a societal problem. It's not just professional. It's not just educational.
It's the whole thing that we need to keep front and center.
You know, it's interesting, you know, my wife and I have, you know, we've lost both of our parents. But, you know, I catch myself from time to time saying, oh, my God, if my mom or dad were alive today, this idea of simply streaming a television show would be, frankly, like a foreign language at their age. And so now you think about all these things. At one point, think about it.
Even for us, just a computer, just a mere computer. But this is coming at us very, very quickly. Clarence.
Yeah, I think that this is a great, great opportunity for us. I want to go back to the question about technology, about being concerned about AI, okay? And how do we help people to understand it's going to happen? So it is happening. It's here. People just don't see it. How do we help people to understand the importance of it, and also how they can utilize it more effectively for themselves?
Yeah. It's got to be a societal goal to inform more on it, holistically, I think. And it gets back to, we've got to educate on STEM, right? And understanding the technology and not just taking it at face value. Without that knowledge, there comes a blindness to what it's doing to you individually.
And when we start to accept the inputs without any questions, that's when we may have lost. Right. And lost is probably a strong word here, but it is something that we have to be very, very cognizant of. I mean, I read an article that said at some point, 50 percent or more of the Internet may have been generated by AI. And it's not even a human query. Right. Right.
And so the information we get is AI generated. And that scares the bejesus out of me, frankly. What becomes truth then? Is it some human behind the scenes, manipulating the truth that's out there, potentially in an adverse way? Truth becomes a non-concept, and it gets back to, I saw it here, so that's the truth.
Well, yeah. So let me ask this. One of the other themes that you
alluded to, Jigar, was the idea of empathy and sympathy. Help me to figure out, I don't know how a machine can do that, okay? I don't know how artificial intelligence can do that. But as human beings, we can do that, okay? So do we use artificial intelligence then to help us as clinicians, as public health people, to be more empathetic, to be more sympathetic?
Do we use that as a professional tool, right, to do that? Or what?
Yeah. The concept around a large language model is it, depending on how you've trained, what corpora of text you've loaded into it.
Yeah.
That corpora of text understands the relationship of words to one another. And when you say to it, that you want it to act more like, say, a specific author or a specific somebody that does a good job of conveying empathy through words, then it can take on that characteristic.
So it's all about the language and the use of language and the right language and how those relate to one another, which it can do better than a human, because it has billions, trillions of words, and the relationships of those words to one another, and examples of different things, and the probabilities of those things, right? So it can understand how
the basic concept of language can be more empathetic or less. So it goes back to the language, right? What language expresses empathy? What language expresses sympathy?
Regardless of what language it is, right? Correct. It could be a foreign language and it, AI, can connect to the empathetic words for that different language?
And it's not just the words. It's the construct of the words in relation to one another.
Got it.
Got it. Right. From a large language model perspective, the underpinnings of generative AI, the vast majority are in English right now. So we have a translation problem that has to get solved over time. The default language of the internet is English. And there's a lot of, go to your Google Translate and it translates into any language, right?
So there is a loss of that in translation, but AI will catch up there as well.
Yeah.
So it's really saying the right words in the right order at the right time that conveys that empathy and sympathy in a way that's unique and different. It can be programmed, frankly.
Yeah, yeah, yeah. Yes. Clarence? Yeah, let me ask this question. I have only heard stories about this. What is this ChatGPT? What is it?
Yeah, so for ChatGPT, the foundation is a company called OpenAI. OpenAI has loaded huge corpora of internet-based text, the biggest sources being Wikipedia, GitHub, et cetera. So it's taking the world's knowledge and basically understanding the relationships of the words.
It's a large language model in that now it can predict based on that corpora of text, the next word given any of the words before it. So that's the underpinning of this. Now you put a transformer on top of it and a chat interface to interpret the input and then predict or create out of that understanding a response to the input, the chat, right? So that is the concept of a chat.
We're used to a search, right? And a chat is the next evolution of that in some ways, right? We're used to it now on the internet, right? When we go to customer service, the first thing you hit is the chat bot, right? Same thing. It's taking the input from a type or words perspective, understanding it, and then turning it back into something useful for you.
That evolution has gotten to ChatGPT and its capability to do things well beyond that simple interaction.
So it's really probabilistic. It's understanding the likelihood of these things relative to one another, and then programming to accomplish the end points. OpenAI has one large language model. Google has a number of them that, because of the text they've loaded, have different probabilities.
And then there are other companies that have open sources and other things they've loaded into various large language models that interact differently, because of those different probabilities in those different corpora of text. As an example, if you loaded the National Library of Medicine's content into something, it's going to be very different than looking at Wikipedia, right?
It's not going to know about Napoleon, or things around Napoleon, or the context of Napoleon, or those sorts of things. But it will know about gallbladder disease and other things in a more complete way than, say, a general purpose large language model. So it's also going to depend on those things. Interesting. Meta, Facebook, has its own large language model based on Facebook. Right.
So it's using these very different corpora of text to create and then layer on top of other technologies to transform those things to take an input and provide an output.
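The "predict the next word" idea, and the point that different corpora give different answers, can be illustrated with a deliberately tiny bigram model. The two one-line "corpora" below are invented stand-ins for, say, general-encyclopedia versus medical-literature training text; real large language models are transformer networks over trillions of words, not lookup tables, but the corpus-dependence works the same way.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count, for each word, which words follow it in the training text."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def predict_next(model, word: str) -> str:
    """Return the most probable next word, like a (very) tiny language model."""
    if word.lower() not in model:
        return "<unknown>"
    return model[word.lower()].most_common(1)[0][0]

# Two tiny invented "corpora": the same kind of question gets different
# answers depending on what text the model was trained on.
general = "napoleon lost the battle and napoleon lost his empire"
medical = "gallbladder disease causes pain and gallbladder disease causes nausea"

print(predict_next(train_bigrams(general), "napoleon"))     # lost
print(predict_next(train_bigrams(medical), "gallbladder"))  # disease
```

The general-text model knows nothing about gallbladders and the medical one nothing about Napoleon, which is exactly the corpus effect described above.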
So let me ask you, I'm trying to circle us back into the health arena here in just a second. But one of the things that kind of maybe disturbs me on the front end a little bit is, are we compromising human intellect? Let me give you a for instance. If I wanted to give a speech, okay, on whatever, to whoever, theoretically I could look it up and have it created for me, and I might change a thought here and there, and off I go. All right. So what's your sense? I mean, you've been in the field. Do you think we're compromising human intellect, or could you say we're complementing it? That would be the hopeful view, but on the other hand, are we? Are we truly compromising our intellect?
I don't know if compromising is the right word. Are we making it easier to be perceived as intellectual? Absolutely. Because, you know, as people trained in medicine, we took a lot of time in our careers to learn to synthesize. This makes the act of synthesis almost trivial. It doesn't have to be complicated. It can be a simple input, and out spits a
500-word essay on the thing you want or the speech or whatever. The storytelling that goes with that is a synthesis act. It's correlating personal experiences and things that you think might be relevant to the topic that drive a compelling speaker. But you can shortcut it. You absolutely can with these things.
And you can take someone who's, uninformed isn't the right word, but who's put no work or effort into it, and then they can regurgitate. Now, is that person going to be on stage somebody that's as compelling as somebody that synthesized it and can tell it? They're just reading cue cards at that point. It won't be as compelling.
People will not necessarily be drawn to that, because there is that human element. Now, are there other examples of people creating avatars that are as compelling? Potentially, yes. So it can shortcut almost the human existence, right? Yeah. And the knowledge and the thoughtfulness of our race that has taken millennia to create, could it be shortcut too? Yeah, it is a real fear.
Yeah, yeah, yeah. All right, so let's circle back. Clarence and I have been involved in the healthcare arena for a long, long time. Based on your experience, are there particular, let's just take chronic disease arenas. You know, from your perspective, do you think that there are particular chronic disease arenas that can really utilize AI, I guess, much more than perhaps others?
Or are we all on the same playing field right now with no matter what condition?
Yeah, I think there are two aspects to that. One is the longevity, how long something has been around. And then secondarily, the new knowledge sources that go to inform those things. So if we take someone that's had a chronic condition for 30 years, the summarization of that course over 30 years would take an hour of digging through a chart to figure out and piece together.
AI can do that almost instantaneously, right? In a way that a human cannot. And there are things it can figure out that a human may have missed, because it took the human an hour, as opposed to AI generating a page for me to read and consume, where correlations may become more clear in that process. Now, it can also say there are correlations here that were missed, right?
In a way that's different. So it's time-saving. Well, it's time-saving, but it's also, what's the length of time the chronic condition has been around? Correct. Now, as a pathologist, even in my time since training, our knowledge of the genetics, the markers, and other things around cancer specifically and other pathological conditions is new.
And we can use AI to look back on those things and make correlations as well. Now, when you think about how do you then incorporate a whole genome to the condition that is a chronic condition as well, it doesn't have to be linear or this snippet means that thing. It could be this constellation of snippets means this thing. AI and machine learning in general is good at finding patterns.
And so those patterns that may have eluded a human, in this large volume of information about this individual can be made easier.
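The chart-wide correlations described above, the kind a clinician skimming 30 years of notes could miss, can be sketched as a toy co-occurrence count over a flattened timeline. The events and the 30-day window below are invented for illustration; real pattern mining over EHR data is far more involved.

```python
from collections import Counter
from datetime import date

# Hypothetical 30-year chart, flattened to (date, finding) events.
events = [
    (date(1995, 3, 1), "elevated glucose"),
    (date(1995, 3, 20), "blurred vision"),
    (date(2004, 7, 2), "elevated glucose"),
    (date(2004, 7, 15), "blurred vision"),
    (date(2010, 1, 5), "knee pain"),
    (date(2019, 6, 1), "elevated glucose"),
    (date(2019, 6, 9), "blurred vision"),
]

def cooccurring(events, window_days=30):
    """Count pairs of distinct findings recorded within window_days of each
    other -- the kind of repeated correlation that is easy to miss when the
    occurrences are decades apart in the chart."""
    pairs = Counter()
    for i, (d1, f1) in enumerate(events):
        for d2, f2 in events[i + 1:]:
            if f1 != f2 and abs((d2 - d1).days) <= window_days:
                pairs[tuple(sorted((f1, f2)))] += 1
    return pairs

print(cooccurring(events).most_common(1)[0])
# (('blurred vision', 'elevated glucose'), 3)
```

Three widely separated glucose/vision pairings jump out of the count immediately, while the isolated knee-pain entry does not.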
So it's a function of, like you said, if there's more history for the actual disease itself, or there's more history of a patient having a particular condition over an extended period of time, AI can be an incredibly useful tool to synthesize the information quickly and efficiently. Again, time-saving, for sure.
It could also go beyond that. It could go generational. Oh, generational. Yeah, yeah, yeah.
So you think about family history also. Right, right, right.
Now, we're going to be in an era where many of our records have been digitized. Now our kids' records are digitized. Right. And you correlate those things together.
Yeah.
That a human might not do, but an artificial intelligence could do.
Yeah, so let me... You know, we've all lived, and frankly are still living, with COVID, okay? All right, so take COVID as a public health issue, all right? Tell me how perhaps AI could have been more of a useful tool for us if we had maybe utilized it more or engaged with it more in order to respond, in this case, to a public health emergency.
Yeah, I mean... There have been studies that have shown Google searches predict epidemics or the seasonality of flu, those sorts of things. So looking at various data streams and correlating them together in a way that is forward-looking, to say, is this anomalous behavior relative to the normal state? Right.
And correlating a more varied constellation of symptoms, a grouping of symptoms, that says, wait, this might be unique. That can be done more readily. Now, public health infrastructure in general, I think people would largely agree, needs an uplift. It's behind many other industries in how we think about data acquisition and data sharing.
There's state and local and federal restrictions and all those problems that come with the data that you want to have that we have to battle past. But absolutely, the capability of AI to look at large data streams and say, wait a minute, where are the patterns in here? Could help.
So it could be for predictability.
Could help. And then that could lead to time savings in an action perspective.
Correct. Got it. Got it. Yeah.
Yeah. On the drug discovery side, there have already been some striking examples of drug discovery being done through AI, right? Creating novel compounds with different properties that could potentially be brought to bear sooner. So it doesn't take a human chemist to really sort through those things, to understand those things.
So there's implications from a public health treatment perspective also on that.
And things going forward. Yeah, Clarence.
So Dr. Patel, how quickly is AI going to grow? I mean, it seems like it's been growing at a pretty quick rate. And so even for those people who are resistant to AI, it appears that it's going to overtake us pretty soon. How rapidly is this growing on a yearly basis? What are some of your projections for the years 2025 and 2030?
Yeah, I don't have any numbers for it, Clarence, I'll be honest with you, but... Ask AI. They don't give me the answer. I've got to open that up.
We're going to have to call chat up and see what they're going to say.
I mean, it probably correlates to Moore's law, which is that computing power will double every two years, or every year, or something. I don't remember the exact figure, but the capability of computing to accelerate and get faster and more performant... AI is going to have a similar kind of hockey stick to that Moore's law, in that the computing is going to push it in that direction. Now,
I think if we look back, in terms of awareness of it, we're at a hockey stick moment, right? And that was with ChatGPT, I think 3.5 and 4, and really the acknowledgement that this thing does amazing things. More so than, say, Watson in the past, or others in the past, right? So the awareness is at an inflection point, and I think the awareness will continue to grow very, very fast.
Now, the use, I think, will also accelerate. I think big companies such as my own and others are, how do I put it, everywhere, right? The surfacing of it to humans is going to get accelerated. And I think the goal of embedding it in computer systems and human interactions is that we can potentially, as a human race... We've always talked about how we can be more productive.
And that productivity continues to increase, and AI can only help there as well. Now, that has implications around people losing jobs and all that, but that's happened throughout history with various technologies. Computers and AI are just the latest example. We forget, not necessarily forget, but we don't have to do certain things because we have this new technology. The car is an example.
We don't have to walk now, right? We can bike. Those things have accelerated our productivity and getting from one place to another, making the world smaller as an example of how this could change us going forward as well. So the timescales of things are going to be very different. I don't have good statistics to support any of that, but it feels like it's accelerating at a breakneck pace.
Well, I think so. Let me ask this question. Yeah, I'll follow real quick. With all of this breakneck technology, it also increases the possibility of scams, of people being tricked, those kinds of things. And so my question again to you is, what are some of the things that we should be watching out for or thinking about as we are embracing or engaging with this new technology?
We have to be increasingly skeptical of everything, in one way or another, in my view of the world, right? Any technology will be used for
ill and ill-gotten means, for crime and scams and those sorts of things. Yeah. That's been true of every technology since the beginning of time, right? Someone will use it for something that is not, you know, societally accepted. In a way, that is going to be there. So it will accelerate also. It'll become... "more dangerous" isn't the right word, but it'll be a more...
We'll have to tread more carefully in all of our interactions. I mean, even today, I don't answer phone calls if the number's not in my contacts, because it's generated by some technology. It was pushed to some human, or a robot or something, to give me a call, to have an interaction. And so I'm skeptical of any phone call I get, and increasingly texts as well.
You know, slowly but surely, people are creeping into all those things. Social media, it's everywhere, and it will only accelerate in that way. You know, one of the frightening things from a computer security perspective is that AI can write code, and it can be set to write malicious code, right? That's going to accelerate too. So it's going to be this constant arms race.
If we think about cybersecurity, AI protecting and then harming, it's going to be interesting to see how that evolves over time too. So it's this... It is, an arms race is the best way to describe it, right? The forces of good and evil. That sounds like a superhero movie, right? That are going to be in constant conflict going forward. So let me ask this. In the news,
I was just reading, a day or so ago, that President Biden is looking at, and I'm reflecting on the politics of it all now, some kind of potential federal legislation, I guess, in order for us to be better protected, at least theoretically, around AI. So let me get your thoughts about, I guess, the politics behind all of this.
Oh, wow. That's a land mine. I don't think I'm going to step on it. But I'm going to step on it anyway. Carefully. It inevitably will have political implications, right? But it goes back to that social media thing too, right? And then your belief or disbelief that the government is here to help. Yeah. But I do fundamentally believe... I did some travel this week.
And before I traveled, I filled up my car. And the one thing that struck me filling up my car was the little badge on the pump that says this was assessed for accuracy. You pick up a box of crackers, there's a nutrition facts label on the side. People ask, why don't we have nutrition facts for AI or computers or things like that? Those are all government-mandated things.
Now, so many of them have faded away into, "I forgot the government did that." But that sort of thing is omnipresent in our lives, right? EHRs, in their way, from an informatics perspective, have become the arm, the long arm of the government, into clinical practice, right? Because in the United States, 50% of payment for health care is still from the government.
So they have an overweighted interest in this from a budget perspective, right? So government will have its hands in it for various reasons: for good, for financial and fiduciary responsibility, for business, all of those things. And you will fall on different sides of the spectrum of that.
Yeah.
Let's all be at liberty and free, or let's all lock it down. And that's true all over the world. You know, one thing I've had the pleasure of in my career is seeing how various societies think very differently about healthcare. The most recent one that is very interesting, from an EU perspective, is that
Sweden has very, very restrictive privacy laws on health care data. And from that example, in bringing it to the United States, you can learn in both directions, good and bad. One of the things in the United States, or the way I think about health care, is: say a patient sees a psychiatrist. That can affect their medical health, and I should know about that, and vice versa.
But in countries like Sweden, and I don't know the details, a patient could say, I don't want my medical provider to see my psychiatric records. So as a provider, I could say, wait a minute, you've unintentionally harmed your own care for the sake of privacy. That's mandated by law, which is political. So those sorts of things, it's inevitable.
And there has to be, there will be, a great debate on less or more, as has always happened with politics, laws, and the societal approach to privacy, to being monitored or not monitored.
So it's very philosophical. I think it's a function of the concept of regulation. How is it that this AI should or shouldn't be regulated, in the gestalt of it all? Or maybe, as you go down the funnel, as you become more specific, how is it that it might need to be regulated in some way? And I'll tell you, talk about a complex question.
You could probably ask AI that question in and of itself.
I think it's an inevitability it will be regulated.
Yeah, yeah, yeah.
It's inevitable.
Just as we go down the line. Just to what degree? Clarence, last words.
You know, this has really been interesting. I would love to have another hour with you. Or more. Or more. But I would actually love to have that background you got. You know, I mean, those clouds remind me of our world right about now. Everything is rumbling. But I do thank you. I thank you for this. I thank you for answering my questions.
And I really believe that our listeners will have an opportunity to learn something from you and to enter into this conversation in a much more informed way.
Thank you for having me. I hope so. I hope it was informative.
Yeah. Let me just ask this one last thing. What do you want to tell the public? What's a takeaway that the Health Chatter audience here should know? Like, you know, don't be afraid of it, or, you know, it's here to stay, so embrace it. What is it that they need to have?
Yeah. Learn about it.
Learn.
Know about it. And then you can form your own approach to it. Yeah. Your own understanding of it. Without that basic understanding of it and its implications... I think that would be my biggest advice: learn about it. You have to learn.
Yeah. Well, I so greatly appreciate your insights. I learned a lot just listening to you and hearing your AI responses. So it's still human. Yes, it absolutely is. So thank you very much. So for our listening audience, keep health chatting away. Our next show will be on spirituality and health, which will also be a very interesting subject. So everybody, so long.