When we talk about advances in artificial intelligence (AI) applied to diabetes, we cannot fail to mention the work of Peter G. Jacobs, a visionary who is transforming the way we manage this condition. During the recent IFAC Workshop on Engineering Diabetes Technology, Jacobs shared his vision of the present and future of AI in diabetes and made one essential point clear: a patient's confidence in the technology is key to its success.
From his AIMS Lab (Artificial Intelligence for Medical Systems), Jacobs has led research demonstrating significant improvements in time in range for patients who use advanced insulin-management algorithms. One example is his KNN-DS project, a clinical decision support system that has allowed patients to increase their time in range by up to 6.3%, which, for those of us who deal with constant highs and lows, is a real achievement.
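For readers curious what "time in range" actually measures: it is simply the fraction of continuous glucose monitor (CGM) readings that fall inside a target band, usually 70–180 mg/dL. A minimal sketch (the sample readings below are invented for illustration, not real patient data):

```python
def time_in_range(readings_mg_dl, low=70, high=180):
    """Return the fraction of CGM readings inside [low, high] mg/dL."""
    if not readings_mg_dl:
        return 0.0
    in_range = sum(1 for g in readings_mg_dl if low <= g <= high)
    return in_range / len(readings_mg_dl)

# Example: ten made-up readings from a CGM sampling every 5 minutes
readings = [95, 110, 150, 185, 200, 170, 140, 120, 65, 80]
print(f"TIR: {time_in_range(readings):.0%}")  # prints "TIR: 70%"
```

So when Jacobs reports a 6.3% improvement, that means roughly an extra hour and a half per day spent inside the target band.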
What I find most interesting is his human and ethical approach to technology. For Jacobs, the challenge is not only technical but also human. He insists that people need to trust algorithms in order to use them correctly. And that makes me reflect, because when I started on the insulin pump, at first I had doubts about how well it would work. Over time, though, trust and good results made me believe in the system. I imagine the same can happen with these new advances in AI.
Another aspect that caught my attention is the use of digital twins. According to Jacobs, these virtual models make it possible to precisely replicate the behavior of a person with diabetes, which makes it easier to adjust treatments in a fully personalized way. That sounds incredible, because every one of us is different, and having a technology that understands how we respond at any given moment would be a turning point.
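To give an idea of what "personalized" means here, this is a toy sketch of the digital-twin concept: the same simulation, but with parameters (carb factor, insulin sensitivity) fitted to each person. This is not Jacobs's actual model, just an illustrative assumption-laden simplification:

```python
def simulate_glucose(g0, carbs_g, insulin_u, *, carb_factor=4.0,
                     insulin_sensitivity=40.0):
    """Very rough estimate of glucose (mg/dL) after a meal and a bolus.

    carb_factor: mg/dL rise per gram of carbohydrate (personal parameter)
    insulin_sensitivity: mg/dL drop per unit of insulin (personal parameter)
    """
    return g0 + carbs_g * carb_factor - insulin_u * insulin_sensitivity

# Two "twins" with different personal parameters react differently
# to the same meal (60 g of carbs) and the same 6 U bolus:
print(simulate_glucose(100, 60, 6))                            # prints 100.0
print(simulate_glucose(100, 60, 6, insulin_sensitivity=30.0))  # prints 160.0
```

The real models are far richer, of course, but the principle is the same: once the twin's parameters match you, treatments can be tested virtually before they reach your body.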
Jacobs also touched on a delicate topic: data privacy. In his talk, he warned that many health professionals are using AI tools without considering the privacy risks. This is something we should demand: transparency and clear protocols to protect our medical information.
And the issue of bias in AI could not be left out. According to Jacobs, an algorithm is only as fair as the data it is trained on. To combat this, his laboratory has created a training program so that new generations learn to identify and mitigate these biases.
💡 The future looks promising, but it will only be possible if we trust and adapt to these new technologies.
I would love to know what you think:
Would you trust an algorithm to help control your diabetes?
Would you like to try a predictive and personalized system?
See you in the comments! 💬💙