Too Much Data for a Human

I've been collecting a lot of health data via wearables, starting with a cumbersome setup between a Nokia phone and a Garmin HR meter, and a silly-looking MyZeo sleep monitoring band. Eventually, I moved to frictionless recording with an Apple Watch coupled to a Polar HR meter and real-time glucose monitoring with the FreeStyle Libre.
As data collection became easier, I accumulated more and more data. I realized I had so much data that I couldn't make sense of it (ICT&Health, "Voedingsadvies loopt in de soep" ["Nutrition advice goes down the drain"], Feb 16, 2022, in Dutch).
In November 2022, ChatGPT launched, and I thought it would be great to train a system on my data to help make sense of it. Ideally, I could interact with such a system, and it would coach me on my health-related questions.
Enter the PLLM (Personal Large Language Model).
Part 1: The PLLM
With only basic knowledge of LLMs, machine learning, transformers, and vector databases, I decided the best way to learn was by building something. I aimed to develop a system for personal health coaching, integrating both static medical information (evidence-based & best practices) and dynamic personal health metrics (wearables data, nutrition, and lab results). The system needed to run on my laptop for privacy, not require state-of-the-art hardware, and be open source.
After months of study, failed attempts, and trial and error, I achieved reasonable results using GPT4All loaded with Llama 3 Instruct, combined with my local data. I could ask questions and receive mostly usable answers.
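To give a flavor of the approach, here is a minimal sketch of the retrieval-augmented setup described above. The details are assumptions, not my actual implementation: the personal data is represented as plain-text snippets, and retrieval is a toy keyword-overlap score rather than a proper vector database.

```python
# Toy retrieval-augmented prompting: rank personal-data snippets by keyword
# overlap with the question, then prepend the best matches to the prompt.
# (Illustrative only; a real setup would use embeddings and a vector store.)

def score(query: str, doc: str) -> int:
    """Count how many words from the query appear in the document."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in doc.lower())

def build_prompt(query: str, docs: list[str], top_k: int = 2) -> str:
    """Prepend the most relevant snippets as context for the local LLM."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context from my health data:\n{context}\n\nQuestion: {query}"

# Hypothetical example snippets, standing in for real wearable/lab data:
docs = [
    "Glucose spiked to 9.8 mmol/L after pasta on Tuesday evening.",
    "Average resting heart rate this week: 52 bpm.",
    "Slept 6h12m with 40 minutes of deep sleep.",
]
prompt = build_prompt("Why did my glucose spike after eating pasta?", docs)
print(prompt)
```

The resulting prompt would then be passed to the local model for generation, e.g. via GPT4All's Python bindings (`GPT4All("<model file>").generate(prompt)`), so no data ever leaves the laptop.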
I then started sharing this pet project with others.
Part 2: The multi-applicable PLLM
This part developed in unexpected ways. As I spoke to people, some had developed similar PLLMs for personal development or mental health. Someone noted that the system's multi-modality makes voice input ideal for food journaling. Another suggested incorporating feedback loops in the PLLM.
Others pointed out that I was thinking patient-facing, whereas HCPs (Health Care Practitioners), who are also confronted with too much data, could use and interact with the PLLM. This made me realize my PLLM was limited, offering advice from a single perspective, while I wanted expert advice from several specialized PLLMs.
Part 3: The multi-agent PLLM
What if my PLLM consisted of various agents with specialized knowledge interacting to give the best advice? Multi-agent LLM systems generally outperform single-agent ones, enhancing problem-solving, efficiency, scalability, flexibility, and modularity. They use specialized agents for different tasks, optimizing execution and simplifying development.
Imagine a nutritional agent, a physician agent, a sleep coach agent, and a physiotherapist agent discussing my questions, questioning each other's advice, and then providing a recommendation. This approach would enhance the quality and accuracy of recommendations.
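The agent discussion above can be sketched in a few lines. This is an illustrative toy, not a real implementation: each specialist is stubbed as a plain function returning canned advice, where a real system would make an LLM call per agent and run multiple critique rounds.

```python
# Toy multi-agent round: each specialist agent drafts advice on the same
# question, and the drafts are combined into a joint recommendation.
# In a real system each agent would be a specialized LLM, and agents would
# critique each other's drafts before the final answer is assembled.
from typing import Callable

Agent = Callable[[str], str]

def nutritionist(question: str) -> str:
    return "Reduce fast carbohydrates in the evening meal."

def sleep_coach(question: str) -> str:
    return "Late heavy meals can fragment deep sleep; eat earlier."

def debate(question: str, agents: dict[str, Agent]) -> str:
    """One discussion round: collect each specialist's draft, then
    assemble them into a single panel recommendation."""
    drafts = {name: agent(question) for name, agent in agents.items()}
    lines = [f"{name}: {advice}" for name, advice in drafts.items()]
    return "Panel recommendation:\n" + "\n".join(lines)

panel = {"nutritionist": nutritionist, "sleep coach": sleep_coach}
print(debate("Why do I sleep badly after a pasta dinner?", panel))
```

Adding a specialist here is just adding an entry to the panel dictionary, which mirrors the idea below of a doctor prescribing a specific set of agents.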
Your doctor could prescribe specific agents for you, such as a physician and a dermatology agent with expertise in dark skin, or a physiotherapist, sports coach, and nutritional agent for 50+ amateur cyclists. You get the point.
Imagine how such a system could increase access to healthcare, address diversity and inclusivity, and provide everyone with access to quality advice.
I'm hoping to finally get insights and advice from my HUGE pile of data and am excited to discuss possibilities with you!
PS: I will disregard any remarks about endangering HCP employment. Even in the best scenario, this multi-agent system would alleviate only 5-10% of the current burden on the system.