
DeepMind’s proposed solution is an indelible data log: a record that can’t be tampered with, showing when a piece of data was used and for what purpose. Crucially, DeepMind itself wouldn’t be able to alter the log to cover up misuse of the data. The approach resembles the “distributed ledger technologies” or “private blockchains” that the financial world has been trying to build in recent years. DeepMind is loath to call it a “blockchain,” preferring the term “verifiable append-only ledger” for its health data system, but it is interested in one property the technology can confer upon its users: trust.
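DeepMind hasn’t published the internals of its ledger, but the core “append-only” property can be illustrated with a simple hash chain, in which each log entry commits to the hash of the entry before it. The Python sketch below is a minimal illustration of that general idea, not DeepMind’s actual system; the record fields (data_id, purpose) are hypothetical.

```python
import hashlib
import json
import time


class AppendOnlyLedger:
    """Toy tamper-evident log: each entry commits to the hash of the
    previous entry, so rewriting any past entry breaks verification.
    An illustrative sketch only, not DeepMind's implementation."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {
            "timestamp": time.time(),
            "record": record,        # hypothetical fields, e.g. data_id, purpose
            "prev_hash": prev_hash,  # links this entry to the one before it
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute every hash; editing an earlier entry changes its
        digest and breaks the chain that later entries committed to."""
        prev_hash = self.GENESIS
        for entry in self.entries:
            body = {k: entry[k] for k in ("timestamp", "record", "prev_hash")}
            if entry["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


ledger = AppendOnlyLedger()
ledger.append({"data_id": "patient-123", "purpose": "acute kidney injury alert"})
ledger.append({"data_id": "patient-456", "purpose": "clinician review"})
assert ledger.verify()

# Tampering with an earlier entry is detectable:
ledger.entries[0]["record"]["purpose"] = "model training"
assert not ledger.verify()
```

Because each entry’s hash covers its predecessor, changing any past entry invalidates everything appended after it. A production system would typically layer Merkle trees and independent auditors on top of this idea so that outside parties can verify the log efficiently.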
Banks want blockchains to slash back-office costs while staying compliant; DeepMind needs them to shore up public trust. Last year, DeepMind’s work with the UK’s health service was thrust into public view by a New Scientist investigation. The publication found that records on 1.6 million patients, including names, addresses, and other information, had been shared by three London hospitals with Google’s artificial intelligence subsidiary. The findings triggered an investigation by the UK’s privacy regulator that is still ongoing. DeepMind and the hospitals say they followed the rules.
Huge data sets are what make artificial intelligence work. For DeepMind, access to a trove of national health data could give it a significant advantage in the race to develop AI techniques for healthcare (although it says the Streams app it’s developing with the three London hospitals doesn’t involve AI). Either way, DeepMind needs to assure the hospitals, and the public, that it’s handling sensitive medical data safely. “We hope that by building tools like this in the open, we’ll improve the level of trust that patients have with respect to this data access,” DeepMind co-founder Mustafa Suleyman says.
Read the full article on Nextgov