According to CNBC, the company was as recently as last month talking to the likes of Stanford Medical School about setting up a data-sharing agreement for a research project with a focus on heart disease.
“This work has not progressed past the planning phase, and we have not received, shared, or analyzed anyone’s data,” Facebook told CNBC, which reported that the plan was put on hold following the Cambridge Analytica data privacy scandal, and the subsequent revelations about Facebook’s data-sharing practices.
The plan would have seen health organizations hand over patient information with key details, such as the patient’s name, obscured. This information would then have been matched against the patient’s Facebook records to see whether anything there could inform treatment—for example, an elderly patient who doesn’t seem to have many friends might require more at-home care following surgery.
According to the report, the matching would have been achieved through a technique called “hashing.” This essentially means using an algorithm to scramble a piece of data, such as a name, so that it cannot be directly unscrambled—but if another piece of data run through the same algorithm produces the same garbled code, then the two original values must match. This is a “pseudonymization” rather than an anonymization technique, because it still leaves the data open to re-identification: anyone who can guess the original values can hash those guesses and rebuild the mapping.
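The report does not say which hash function or matching scheme Facebook considered, so the following is only a minimal sketch of the general technique, using SHA-256 and made-up names. It shows both halves of the idea: two parties hashing the same name independently get the same digest (so records can be matched without exchanging names), yet the digests remain re-identifiable by anyone with a list of candidate names.

```python
import hashlib

def pseudonymize(value: str) -> str:
    """Scramble an identifier with a one-way hash (SHA-256 in this sketch)."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# A hospital and a social network hash the same name independently...
hospital_record = pseudonymize("Jane Doe")
social_record = pseudonymize("Jane Doe")
assert hospital_record == social_record  # identical input -> identical digest

# ...so the two datasets can be joined on the digest without ever
# exchanging the name itself. But this is pseudonymization, not
# anonymization: anyone holding a list of candidate names can hash
# each guess and recover who a digest belongs to.
candidates = ["John Smith", "Jane Doe", "Alice Jones"]
reidentified = {pseudonymize(name): name for name in candidates}
print(reidentified[hospital_record])  # -> Jane Doe
```

In practice this guessing attack is why regulators treat hashed identifiers as personal data: names, emails, and phone numbers come from small enough spaces that exhaustive hashing is cheap.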
It’s not difficult to see why Facebook has paused this plan. The company is still reeling from the aftermath of the Cambridge Analytica scandal, and the blows keep on coming.
On Wednesday, it admitted that the political consultancy may have gotten its hands on the data of 87 million Facebook users, rather than 50 million as previously thought. It also revealed that most of Facebook’s users probably had their public profile information scraped by malicious actors who used a tool that the social network has now removed.
“What we didn’t do until recently and what we are doing now is just take a broader view looking to be more restrictive in ways data could be misused,” Chief Operating Officer Sheryl Sandberg said Thursday, in one of her first appearances since the Cambridge Analytica affair blew up.
Meanwhile, Facebook Chief Technology Officer Mike Schroepfer told the Financial Times that the social network was now “being much more diligent about trying to understand upfront all the misuse and bad [use] cases” before it launches new products.
On top of all that, TechCrunch reported Friday that Facebook had removed messages sent by CEO Mark Zuckerberg and other top executives from the Facebook inboxes of their recipients, without telling the people whose inboxes were affected. And civil society groups in Myanmar have hit back at Zuckerberg’s claim that the company is able to use monitoring to stop hate speech messages spreading like wildfire through its services—they say such messages spread for days, leading to violence.
Even setting aside Facebook’s current predicament, big consumer tech companies have a blemished history with medical data. Google’s DeepMind “artificial intelligence” operation made a deal with U.K. National Health Service (NHS) hospitals in 2015 that gave it access to the medical data of 1.6 million patients, in order to figure out ways of better monitoring kidney disease.
The British privacy regulator last year found the agreement was illegal, because patients wouldn’t have expected their information to be shared and used in this way.