Faces of digital health


Why Should We Care About Open AI in Healthcare? (Bart De Witte, Hippo AI Foundation)

In healthcare, AI development is still in its early stages, and many regulation-related questions still need to be addressed. Creating regulation is not easy, because it needs to take into account many aspects: safety, trust, and the values of the environment it is designed for.

Bart De Witte is the founder of the Hippo AI Foundation, a non-profit organization that fights to make medical knowledge openly available and AI-based healthcare a common good. This is a diametrically opposed approach to the direction of current medical AI development, the majority of which focuses on the privatization of medical knowledge. Below is a discussion about what exactly it means to have open AI models, how we can create an environment to support them, the state of AI regulation in Europe, and more.

Listen to the podcast on iTunes or Spotify.


TRANSCRIPT:

[00:02:01] Tjasa: Bart, most people know you for fighting against knowledge privatization in healthcare. You are a strong proponent of open data and open algorithms as a way to beat inequality and inequity in care. For those that do not know you, can you elaborate on what exactly you and the Hippo AI Foundation are fighting for?

[00:02:25] Bart: I used to work for big tech and I completely changed direction. I founded a nonprofit because I wanted to try to solve the problem of information asymmetry. I believe that lifesaving knowledge should be open and accessible to all; there should be no asymmetry in access to that knowledge. So I've set up an organization named after Hippocrates, who was the founding father of modern medicine, but [00:03:00] also the founding father of medical ethics.

[00:03:02] One of his principles was that physicians needed to share knowledge with their peers without any economic interest. And that's what I'm trying to uphold as we develop such an important technology as AI.

[00:03:16] Tjasa: What caused that shift in your thinking, going from, you know, the industry to a non-profit organization?

[00:03:23] Bart: I'm an early adopter of technology, and in 2009 I think I was tracking my sleep with a device from a startup called Zeo. I have been visiting Silicon Valley since 2004, where all these health innovations using internet technology came up, and Zeo had this sleep tracker.

[00:03:41] I was measuring my sleep for one and a half years, giving away really sensitive information. And then, when the company went bankrupt, I started to read the terms and conditions and discovered that all that sensitive data was an asset that could be sold in bankruptcy. Then I found research showing that in such cases data typically gets [00:04:00] sold to 50 different data brokers.

[00:04:02] That was the first time I experienced that I was no longer able to control my sensitive data. The second one was my experience with 23andMe. I joined in 2009, which was really early, and I paid quite a lot of money back then to use that service, so I wasn't supposed to be the product. And then in 2014, they changed their business model by selling that data to the largest pharmaceutical companies for a three-digit number of millions.

[00:04:28] And I said, whoa. So I went from being a customer to being a product. That led me to think that we need a different way of building businesses, and that the normal internet business model, where you monetize data and create information asymmetry, is perhaps not the right basis for healthcare, because it's a system designed for inequality.

[00:04:53] Now, of course, GDPR helped us to overcome part of these things. A few years ago, Google bought Fitbit. I used Fitbit for two years, and I never received any email telling me that my data had been transferred to Google or asking me for approval, which is what should normally happen according to GDPR. There's a lot of data accumulation going on that follows capital concentration. And my biggest fear is that we end up with a monopoly on our life-saving knowledge in five to ten years.

[00:05:20] Tjasa: Can you talk a little bit more about a term that you often use, data colonialism? How does it fit into the story that you have explained so far?

[00:05:30] Bart: Yeah, data colonialism. I discovered it in the book The Costs of Connection by one of our advisory board members, Professor Nick Couldry from the London School of Economics and Political Science. He coined that term because he compared the ongoing invisible war for digital territory with what happened in colonialism. It means there is an appropriation of an asset, in this case data, by capital. Just as in colonial times, organisations are able to access our resources, which now means data, and that data is being appropriated and privatized. [00:06:03] This causes further asymmetries and dependencies. And that's what is happening when, for example, Google goes to India and offers an AI service to people who have no other choice than to use it. Ophthalmologists are using Google's AI service, but the data itself is extracted out of India, [00:06:23] and the value creation happens in Mountain View, California, not in India. If you compare these two systems, the historical colonial system and what is happening here, there is an analogy that helps people understand that if we want to design digital systems, we should do it in a decolonialized way and in a much more sustainable manner.

[00:06:46] Tjasa: How do you see that being possible in the future? At the moment, even when we are asked for consent about our data, and I speak broadly when saying that, it's not like you really have a choice: you either comply with the terms or you can't use the service. How do you see that translating to healthcare, where digitalization efforts and applications are still at a relatively early stage?

[00:07:24] Bart: Yeah, I agree that it's still an early stage, and I agree that people tend to accept terms and conditions thinking there are no alternatives. I disagree that there are no alternatives, because if you look at the communication tools that we use, there are WhatsApp, Signal, and Telegram, and although people do not agree with the ethical views of Meta, the mother company [00:07:46] of WhatsApp, and know that the company is using metadata, connecting WhatsApp data and metadata with other platforms, they still don't want to switch. The main thing here is convenience. And I think what we need is more literacy.

[00:08:03] We need data literacy, we need digital literacy, but not in the sense of creating better consumers. We will need to educate children and even political decision makers, because that's where perhaps the biggest knowledge gaps are, so that they understand there are differentiations to be made here.

[00:08:22] We need to really watch what kind of consent we give. And that is a very wide discussion, because consent is given when you download an app and accept the terms and conditions, part of which is your data consent. But consent is also given, for example, in healthcare, where you are asked 'Do you want to give your data for research?'. Research is such a broad term, because all AI is research; it's data science.

[00:08:49] Whether that data science leads to IP or to private services is a differentiation that is not made. And mostly these consent documents also state that [00:09:00] patients should not participate in the financial gains from the outcomes of the data. Which is okay, but still, there are alternatives out there, and I'm trying to build up such an alternative.

[00:09:12] And I believe that the further we progress, the more demand there will be for that alternative, for the simple reason that if you equate data to capital, data gets scarce, because the really good, golden data sets are being sold, sometimes really exclusively.

[00:09:30] We know that the Fitbit data is gone, and most of the deans of medical universities in Europe that I know have confirmed that last year they got offers from Google: very lucrative contracts where Google was asking for access to pathology data, but the contracts would not have allowed the data to be used for other AI modeling.

[00:09:51] So it was an exclusive contract. And the more this happens, the scarcer data is going to be, so people are going to look for alternatives. There was [00:10:00] also a recent research report from PricewaterhouseCoopers last year: their startup monitor for Germany confirmed that two thirds of startups don't have access to data. That is a problem that will continue to grow if we continue down that path of data monetization.

[00:10:17] Tjasa: And what is the alternative that you are trying to build? Recently the Hippo AI Foundation formed a partnership with the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE), with the aim to facilitate the creation of a world-class open medical AI ecosystem in Europe. So let's try to explore that a little bit. What could an open medical AI ecosystem look like?

[00:10:46] Bart: Yeah, that's the problem that I am trying to solve: how do you build an alternative system? The problem in AI is that you have two layers: you have the content layer, which is the data, [00:10:57] and then you have the logical layer, which is the software, the [00:11:00] algorithm. And most open source licenses that are already on the market - it's not really a market, more like out in the public domain - don't connect those two. There are open source database licenses, and they are open source, [00:11:13] but there is no open source license that covers both layers. So that's what we did: we said, if we want to create radically open systems, data needs to flow. And everybody in Europe talks about data flowing and interoperability, but they forget part of the economics. You can have super interoperable systems, but if data has value, nobody will share that data, because keeping it gives them an advantage.

[00:11:37] So to do this we created a new license model - the Hippo AI license. That is the license that we put on the data that we publish, and it means that all derivatives of the data we publish need to be published under that same license. That's a copyleft license, which is well known, well used, and successful. But what [00:12:00] we added is that all AI models trained on our data need to be openly published, and the learnings need to be shared. That means that those who use our data are obliged by the license to always share what they gain from that specific data. And that means you create an open ecosystem where perhaps hundreds of companies are not competing for that life-saving knowledge, [00:12:29] kind of following an approach called shared R&D. Open source is nothing else than sharing your R&D costs, putting your resources together in a pool, in a project where you reduce the R&D cost and share the findings. That's the technical way we want to approach this.
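To make that copyleft mechanism concrete, here is a minimal sketch in Python of how such a share-alike data license could be modeled as machine-readable metadata that propagates to derived models. All names and fields are illustrative assumptions, not the actual Hippo AI license terms:

```python
# Illustrative sketch only: the field names and rules are hypothetical,
# not the actual Hippo AI license. It models the copyleft idea described
# above: derivatives of the data, including trained models, inherit the
# same open license and the obligation to publish.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataLicense:
    name: str
    share_alike: bool          # derivatives must carry this same license
    models_must_be_open: bool  # models trained on the data must be published

HIPPO_STYLE = DataLicense(
    name="hippo-ai-style-copyleft",  # hypothetical identifier
    share_alike=True,
    models_must_be_open=True,
)

def license_for_derivative(source: DataLicense) -> DataLicense:
    """A derived dataset or trained model inherits a share-alike source
    license; a permissive source imposes no such obligation."""
    if source.share_alike:
        return source
    return DataLicense(name="unrestricted", share_alike=False,
                       models_must_be_open=False)

# A model trained on HIPPO_STYLE data inherits the obligation to be open:
model_license = license_for_derivative(HIPPO_STYLE)
assert model_license.models_must_be_open
```

The design point is the same one copyleft software licenses make: the obligation travels with every derivative, so openness compounds instead of evaporating at the first commercial reuse.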

[00:12:46] And there are many, many more dimensions, because we need consent, we need patient awareness, we need industry support, and I could list a lot more. [00:13:10] We convinced AstraZeneca to join our breast cancer project. That will be communicated more loudly later, as we first want to achieve results, but they are supporting our project and they will open up data that they never opened up before. It is actually a reinvention of a principle that our European health systems are based on.

[00:13:31] The principle is solidarity. Without mutual sharing there is no solidarity. And if a company asks for data donations but doesn't share the same kind of data back, that has nothing to do with donations or solidarity; it has to do with appropriation, and that's a very different term. What we are actually doing is recognizing that the cultural [00:14:00] system we are moving in, health care in Europe, is based on solidarity. Physicians have always published their research openly, prior to these publication licenses coming up. [00:14:12] We are trying to build a system that is much more embedded within the value system of the European health care system, as opposed to a value system that comes from a very different region and is based on, let's say, laissez-faire capitalism, which is not combinable with the healthcare-for-all principle.

[00:14:32] Tjasa: How do you envision that AI experts, practitioners, researchers, and entrepreneurs will join your foundation and the open ecosystem you are building with CLAIRE? And how can you even surveil that everything is shared back, in terms of the algorithms produced from the available data?

[00:14:52] Bart: We don't want to become a surveillance company whose main task is chasing license abuse. There is forensic technology that we have been [00:15:00] looking at, like putting an invisible watermark in the data, so that traces of it in certain algorithms, or data leaks, can be detected.

[00:15:09] Then we can see that somebody used that data to train their models. I've been a product manager at big tech companies myself; I worked 10 years for IBM. And I can tell you one thing: if you want to bring a product to market, these companies have a lot of internal compliance processes, and no product will be brought to market that abuses any licenses. [00:15:35] So the idea that these companies will abuse an open source license is absurd, because of the risk they would take: if they abuse licenses and you can prove it, you could win quite a lot of money by bringing them to court. That's too much of a risk, and these companies are not crooks in that sense.
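As an illustration of the kind of forensic watermarking mentioned above, here is a minimal sketch of embedding a keyed, statistically invisible perturbation into numeric data and later testing for it. The scheme and every name in it are illustrative assumptions; the foundation has not published the actual technique it is evaluating:

```python
# Hypothetical keyed-perturbation watermark, for illustration only.
# Without the secret key the +/-EPS pattern looks like noise; with the
# key, a simple correlation test reveals whether data carries the mark.
import numpy as np

EPS = 0.01  # perturbation size, small relative to the data scale

def keyed_pattern(key: int, shape) -> np.ndarray:
    rng = np.random.default_rng(key)  # the key acts as a secret seed
    return rng.choice([-EPS, EPS], size=shape)

def embed(data: np.ndarray, key: int) -> np.ndarray:
    """Add an imperceptible keyed pattern to the data."""
    return data + keyed_pattern(key, data.shape)

def detect(data: np.ndarray, key: int) -> float:
    """Correlation score: ~1.0 if the keyed mark is present, ~0.0 if not."""
    pattern = keyed_pattern(key, data.shape)
    return float(np.mean(data * pattern) / EPS**2)

rng = np.random.default_rng(0)
original = rng.normal(size=1_000_000)  # stand-in for a sensitive dataset
marked = embed(original, key=42)

print(detect(marked, key=42))    # close to 1.0: watermark detected
print(detect(original, key=42))  # close to 0.0: no watermark
```

Detecting the mark in a leaked copy of the data is the easy case; proving that a trained model saw the data is a harder, related problem (membership inference or backdoor triggers), but it rests on the same keyed-signal idea.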

[00:16:00] The second part of your question is: how do you convince people to join? It was very interesting to see during COVID that there was so much open source development. Even the sequenced COVID virus was open data at the time, [00:16:14] and without that open publication, we would not have seen such fast development of vaccines. There was a lot of collaboration, and a research paper suggested that this open collaboration was accelerated because we had a shared purpose. We had a shared enemy, and people tend to collaborate better when there's a shared enemy. [00:16:35] During COVID the virus was the enemy; the virus was attacking our freedom, our lifestyle. And that's what united people. Now, to bring a movement like Hippo AI forward, it's a lot about really creating narratives and creating a joint vision of what that joint enemy is and what drives us together.

[00:16:54] And for me, that joint enemy is this: we don't like to see [00:17:00] health inequalities being the standard. We have seen with the mRNA vaccines how unequal this world is in healthcare. But when we go digital, I can have a Zoom call with anybody in Africa or in Japan who has an internet connection.

[00:17:15] As we progress in digitalization, we should redefine how we create value in that sense. And I think that's what will unite people who have that focus. I'm not doing this for the generation that is mostly 60 years old plus; there is a generation called the purpose generation, the millennial generation, the same one that has been pushing for climate action.

[00:17:37] They are much more purpose-driven, they look much more at sustainability, and I think that generation will also shift and push healthcare in a very different direction.

[00:17:49] Tjasa: I actually intentionally used the term surveillance earlier, because one of the topics discussed a lot in [00:18:00] recent years is surveillance capitalism: the tracking of everything and everyone for profit. [00:18:07] And I wanted to transfer that to the field of healthcare. There's the expectation that with AI we're going to have more optimized systems; you can optimize processes, and there are so many inefficiencies that there is no doubt these kinds of approaches are needed. But from the doctor's perspective, being a doctor or a nurse now means that you are continuously monitored. [00:18:33] You get reports about your performance, and that is making a profession that should be very humane very unattractive. It's very burdensome, and we already see clinicians leaving medical practice because of this, on top of all the other difficulties the job presents. At the same time, we are dealing with healthcare workforce shortages that are becoming [00:19:00] a greater problem every day. So I wonder: to what extent can adding more unfriendliness, in the surveillance sense, worsen the challenges that healthcare currently has?

[00:19:14] Bart: Surveillance in that sense is not bad in healthcare, because public health systems are being monitored, surveilled, all the time. It's the combination of surveillance and capitalism that is perhaps the term we are discussing here. And if surveillance leads

[00:19:29] to, again, information asymmetries - a term that comes from Joseph Stiglitz, who won the Nobel prize for economics - that is how companies create economic value: by knowing more than the others. That's how you bargain, [00:19:45] that's how you deal. But if you use what somebody has called god-like technology - today's technology is so powerful - you can abuse any kind of human weakness.

[00:19:57] And with social media, this has been [00:20:00] attention. You want to hack a human's attention so you can bring them to your app as much as possible, so you can sell more advertisements. That means these information asymmetries are being used to serve the business model and to create power asymmetries between the client and the provider of the services. That has completely gone out of hand in social media, and I think it will go a factor of ten higher once we enter Facebook's metaverse, because patterns are being monitored and they are looking at all kinds of biometric data; even voice analytics and face analytics will be used to monitor your weaknesses.

[00:20:46] And that gives a company a lot of power to do whatever they want in order to serve, in the end, their first purpose, which is increasing shareholder value. And there is surveillance in healthcare [00:21:00] for doctors as well; take implantable defibrillators. There was a patient, I think ten years ago, who was the first person fighting to get access to the data that his implantable defibrillator produced, but Medtronic, the company that brought it to market, closed that information off and shared it only with the physician. You could only get access to the translated information the physician was telling you.

[00:21:29] Surveillance systems are not new in healthcare. And I was a quantified selfer; we surveilled ourselves, we monitored our own data. When we started quantified self in 2009, we did everything in Excel sheets and wrote out the numbers. So surveillance is about learning from numbers and optimizing yourself. [00:21:48] There is a huge promise in that, but as soon as you connect it to economic value, that's where things might go wrong. Even if the founders have really idealistic ideas, in the end the [00:22:00] investors will push you to bring the tool to much more profit.

[00:22:04] And I know this term is being used in healthcare: the Uberification of healthcare. For a while everybody was excited: we are going to Uberify healthcare, because it's much needed. Now my friends in the valley, in San Francisco, tell me that Uber rides that cost $15 at the beginning, when Uber started, now cost $75 for the same route.

[00:22:25] And that's because Uber has a lot of information. With that information asymmetry they can control supply and demand, and they are just using that power to increase prices. If we transfer that to healthcare, I think this can go absolutely wrong, because if you look at the US, for example, there is an information asymmetry protected by patents, even when it comes to producing insulin.

[00:22:48] The price of insulin went up 1200% in the last 20 years, and it's based on one of the old patents. So there are things in healthcare that should not be brought to that same [00:23:00] model, where the information asymmetry that you create by surveillance leads to more capital gains, because if you do this, hell breaks loose.

[00:23:10] Healthcare is not a market that regulates itself in that sense. We see this in the US: the total cost of healthcare is 18% of GDP, 40 million people don't have insurance, and 50% of personal bankruptcies are caused by healthcare costs. So the market doesn't self-regulate, and I think that is specific to healthcare. We really need to watch out that we don't replicate that business model, where, again, data accumulation follows capital concentration, [00:23:43] which leads again to information asymmetries, which over time lead to power asymmetries and abuse.

[00:23:49] Tjasa: A lot of that can be addressed through regulation. The example you gave with drug pricing is evidently very different in Europe than it [00:24:00] is in the US, and that's because the regulation is so different here. But without going into details, I do want to pick your brain on what you see happening in the field of AI regulation in Europe in the last few years. [00:24:17] Obviously Europe, like other markets, would like to be a leader in AI development, and obviously we are far from that. So how do you see the regulation that's currently in place? What do you see as its strong points and its weaknesses? The thing that's often mentioned is that regulation in Europe is much too rigid for innovators.

[00:24:42] Bart: Yeah, I think it's true, if you look at the state of AI in Europe and at, for example, research in AI and the number of citations of papers. The last AI Index published by Stanford showed that last year China overtook the US, and Europe [00:25:00] lags behind in the number of [00:25:01] research papers cited. That tells you something about the quality of the research. Europe is not catching up; China is accelerating in that area, and most of the papers using novel techniques that I read come from China. So there is this race going on, and that's the first characteristic: research.

[00:25:22] Secondly, the other number, or characteristic, that is always used is the volume of investments, and there they say that Europe is falling short by 10 billion in AI investments. But I find it extremely difficult to look only at these two criteria when we are talking about such an important technology that will shape our future.

[00:25:47] And that's what Europe has been doing: looking at a lot of other domains, connecting fundamental rights to our values. And yeah, nobody likes that, but do we really want to make the same [00:26:00] mistake as we did with social media, where GDPR came ten years too late? I think we even need to accelerate in standardizing it and setting a framework.

[00:26:09] The European Union has been working on that. Is it perfect? No. Will it be when it's done? Probably not. There are two domains of regulation: one is data regulation and the other is AI regulation. I've contacted both teams in the European Commission, and [00:26:25] surprisingly, these teams don't really talk a lot to each other, so the data strategy and the AI strategy are not really aligned. Let me focus a bit on the data strategy regulation: there is the Data Governance Act, which was released in November last year.

[00:26:40] And there are some concepts in there that are really promising. For example, there is the concept of data trusts: creating organizations that collect data and act as trustees for citizens, for users, so that the data is not being abused. [00:27:00] Within these data trusts, there are different models: [00:27:02] there are data trusts as data marketplaces, and there are data trusts for data altruism. That's where my Hippo AI Foundation fits in. We are a data trust that collects data; we receive it for free, we unite the data, we clean it, we put our license on it, and that is purely altruistically driven - a data trust for data altruism.

[00:27:26] These concepts are all out there, but on the regulatory side it is still not quite clear what is going to happen in healthcare. There is still quite some uncertainty, and uncertainty is definitely not a positive factor for further investment. If things are not yet regulated but are going to be regulated, and there are no standards yet but there are going to be standards, [00:27:47] you tend to put your investments on hold. So I think we really need to start working on standards. And what we should avoid is trying to catch up with the US and China and framing it as [00:28:00] Europe versus China versus the US, because we are not comparable. When it comes to healthcare, we have different fundamental rights. [00:28:09] Data protection rights are one part, but we also have Article 25: everybody in Europe has the fundamental right to access healthcare, which is a unique right that we as European citizens enjoy. Not everybody else has that. So that's a right that perhaps needs to be embedded in that regulatory framework.

[00:28:27] There are other values, like solidarity, which I mentioned before, which is the principle our systems are based on. The problem is that there is huge influence on that agenda from big tech. There was a research paper published two years ago that looked at the funding of ethics research in AI, [00:28:52] and it found that 58% of all ethics research in AI is directly funded by big tech. In Germany we can [00:29:00] see this: the Technical University of Munich has a research center for AI ethics that is 100% financed by Facebook. For me, coming from healthcare, that sounds like American tobacco financing public health research. That doesn't make any sense. [00:29:14] So there is a huge big tech lobby exerting influence. So it is not perfect; it is a compromise. What I'm missing is physicians and citizens being much more awake in demanding our own rights in this. I've been very active wherever I could, but there need to be many more NGOs and other organizations active in that field to perhaps define a very different strategy for Europe.

[00:29:41] And my hope is that if we are enforcing standards, we enforce open standards. That means that if there is an RFP for an AI solution in a hospital, within the RFP you can say: we want this open standard, we want your algorithms to be open. We can [00:30:00] define this; nobody can forbid us from demanding transparency.

[00:30:03] And most of all, in science we want the ability to replicate the science, which happens through peer review, and you can only peer review something if it's open. If we set this as a standard, we will advance much faster, because openness accelerates. At the same time, we protect our market from providers [00:30:24] who have made a business model out of these information asymmetries, out of what I call this model of artificial scarcity. So there are still a lot of moving elements in Europe, but I'm quite positive that there is a strong will to go a different way. When it comes to the development of AI, that different way will be unique and embedded in the value system that we have.

[00:30:49] Tjasa: It's a very complex problem, because as you mentioned yourself, it all starts with data standards, and across Europe a desire, or not even a desire, a realization of the positive effect of open standards is visible. The former health minister in the UK, Matt Hancock, said last year that data should be separate from applications. We see that Catalonia is taking an approach with open data standards, and there are projects in Germany and all across Europe. In that sense, things seem to be improving. The basic component for AI development is good data, and if you have the same standards, the same structure of the data, then you can develop those algorithms. But just because somebody is using open standards, that still doesn't mean the data is easily accessible. So how do you see access to healthcare data for research purposes changing in the future, given that we need large amounts of data?

[00:31:56] Bart: Yeah. Interoperability and standardization of data alone are not going to solve anything, as I mentioned, because data will only flow if you demonetize it. If you get a capital gain out of keeping the data, [00:32:11] why should a company share it? There is no benefit; on the contrary, you will only lose. And if you hope that companies will become altruistic, then you didn't understand anything about markets; markets don't function that way. So it's really about economics here if we want to solve it, and that's what I'm trying to do. If we publish data sets under the Hippo AI license, which will be released this year, then we get an ecosystem which is different.

And this sounds crazy to a lot of people: why should we do this? Well, I have been working 20 years in the software industry, and I've seen this all happen before. [00:32:52] In the 1990s, Microsoft had a huge monopoly, [00:33:00] creating software that only ran on their own Microsoft web servers. So my former employer could not sell their hardware anymore, because there was no open standard from Microsoft; it was all proprietary. And since Microsoft and IBM used to partner, IBM had never looked at developing their own software there. [00:33:18] So suddenly they were detached, and Microsoft had over 60% market share in web servers. Then IBM asked: how are we going to solve this, how can we sell more servers? And they started to support the Apache Foundation; they had a hundred developers supporting that open source movement. [00:33:35] And that's where open source started to really become an important tool for creating accepted open standards, because there are different ways of standardization, but this is a real standard: the Apache Foundation and Linux exist, and over 90% of the web servers on the internet already run on Linux. [00:33:55] So these kinds of things became standards. So I think we really need to [00:34:00] de-economize these principles and use these open source learnings from the last 20 years, because open source is not about destroying economic value. Open source, as I see it, is a smart way of doing R&D, and it's called shared R&D, [00:34:17] and economic value creation should not be about owning life-saving knowledge in that sense. Anybody who has built a medical product and brought it to market knows how much effort you need to bring that product to market as a certified product. There is so much more than meets the eye to bringing a solution to market, and AI is just one component of that. So if you can share that piece of software with others, and you don't make it your whole differentiation like Google is doing - it's the only USP they have, their algorithms - then you will probably reduce your R&D costs, accelerate your innovation, and set up [00:35:00] reference standards that people will use, because in the end it's the best model that will win. And that's something that is hard for a lot of people to understand, because they think open source is all about destroying economic opportunities. We have seen this discussion at the vaccine level, where governments or some people were asking [00:35:21] to open the patents of the vaccines so this could have been solved from a global perspective. It didn't happen, but I'm really happy that a team from Texas has now created the first open source vaccine, which is now being tested in India. So these open source movements are coming, because it's all about collaborations of people that share a purpose, [00:35:46] globally co-collaborating in an altruistic mode and creating what we call digital goods or digital commons. Just like Wikipedia: it's the same thing. It's a way we can collaborate and [00:36:00] create assets that are owned by the people, for the people, and created by the people. It's a democratisation process.

[00:36:09] Tjasa: Do you think that there's a higher chance for this to succeed in Europe, where we still - that's at least my impression - as you said, talk more about solidarity and about access to healthcare, and where getting a patent is not the first thing you would do with anything you come up with?

[00:36:31] Bart: I don't have anything against patents per se. But if patents are based on simple extractions out of data, which is not something that needs a lot of effort, and the data is siloed, and it's only because somebody has a huge amount of capital and can buy the data that he is able to create these patents, [00:36:50] then the patent system is failing, because patents are really about human creation, not about putting an algorithm on data that we produce. [00:37:00] So to be successful in Europe and follow these principles of solidarity and Article 25, we need much more awareness. We need [00:37:11] much more literacy, even within the political leadership. That's what I try to do, and other people are trying to do that too. I think there are some really good developments progressing. There's also more research that can be used for policy-making and give a different direction, because at the moment we are just copying a model that comes from Silicon Valley, a region that I appreciate a lot when it comes to innovation, [00:37:39] but Silicon Valley is not really well known for being an equitable society. Look at how many homeless people there are and how healthcare is distributed. These are not the values we need to import. As long as we remember who we are, where we come from, and what makes us different, start building on top of that, [00:38:00] and are very conscious and self-confident [00:38:02] about our own way, we can emancipate ourselves. I have a really good feeling that we can do something much more sustainable, much more innovative, and different here from other markets. In the end, you need products that people love. And who are the buyers of tools like AI? Mostly it's physicians, and I've talked to many of them. If it's open, if the algorithms are open source, if there are prospective studies out there and there is open data, so it's peer reviewed, then there will be a higher level of trust in these applications, and then there will be higher adoption. And that is not unique to Europe: physicians everywhere globally have been telling me that they prefer to use algorithms that are open, peer reviewed, and documented with prospective studies, because that's just what they have been doing all their lives. [00:38:56] And on the patient side, you need the currency of trust. [00:39:01] There is a strong need for transparency. You have probably asked yourself many times, when you open up Facebook and see advertisements about things you just talked about: oh, how did that happen? I never even Googled it. And Facebook calls it a conspiracy theory. [00:39:20] There is no answer, because you cannot understand it, because the algorithms are closed, and they are closed because they want to create these information asymmetries. Even if you want to go to court, it is impossible to open up these algorithms. If that happens in healthcare, we will go back to the Middle Ages. [00:39:38] What we have been fighting for since the Enlightenment is open access to information and knowledge, and the free flow of information is a fundamental right in our democracy. If you strip these things out, we get feudalism, what we had in the Middle Ages. That's kind of what is happening in the US. [00:40:00]

[00:40:01] Tjasa: You started the Hippo AI Foundation and the efforts for open data and open algorithms with an initiative called Victoria 1.0. It's basically an initiative to collect breast cancer data and create as much research as possible to make care available to everyone. Where is that project today? How much data have you managed to gather, and how many stakeholders have you convinced to contribute?

[00:40:29] Bart: It's a project that we started as one of our test projects for a hypothesis. We named the first project after a real person: Victoria is a 24-year-old breast cancer patient who came to me when she had cancer. And she said: you know, everybody talks about how, when you get the diagnosis at a young age, you are overwhelmed by fear and all these feelings.

[00:40:54] She said: the only sentiment I had, and it was the strongest, was that I felt [00:41:00] connected to all these other women who got the same diagnosis that I had. But I asked myself the question: what if I had been born in Africa, or in any country that didn't have access to these diagnostic capacities and treatments? [00:41:16] I would probably not be here anymore. And so she said, can I join your mission? And we said, okay, let's go for this. I had already played with the idea to anthropomorphize the project, meaning that instead of talking about a database, we talk about Victoria 1.0. So the database carries the name of Victoria. [00:41:37] And that's really important, because we want to bring it close to the people. Victoria now gives a lot of talks and spreads that knowledge. She became a co-founder, and I let her join the Hippo AI Foundation. So she is calling for help so that anybody, any woman on this planet, can get access to the same [00:42:00] first-step diagnostic that she had. She had an [00:42:03] HER2 gene amplification diagnosis, which can be done by pathology diagnostics, and that's the focus we have. We are building a pathology database that will lead to open AI reference models that can be used in Ethiopia and other regions for replicating the diagnosis. That is part of the whole test procedure: [00:42:30] how do you set up campaigns, how do you communicate, how do you scale communications through social media? All these things that you need to question are not a technical issue. We humans differentiate ourselves from animals because we can go to war based on fictional stories. You can create really strong narratives about open developments written by patients - that is the goal. [00:43:00] Then we try to build a community, and that has been working well. We are approached by leading researchers with gold standard data out of clinical studies, which is the highest level of data you can get. [00:43:14] And now the project will be accumulating the data. There are a lot of discussions between these physicians, because they tell each other that the data coming from this or that institute is not really high quality, so you start feeling the battles between these kingdoms. But that's part of the thing that we need to solve.

[00:43:35] How do you get these people to collaborate? How do we agree on what the gold standard data is? How do you redistribute that data? How do you get the bias out of the data? How do you make it more transparent? These are the things we are solving today.