This sure is a lot of text, but I have no idea what the context is and haven't heard about any controversy (though this seems related to _some_ controversy). I'm a little surprised the intro paragraphs don't lead with an explanation of what's going on here for those who aren't privy to the tech controversy of the week.
Crisis Text Line (a service where people having mental health crises can text for support) had, for years, shared chat data with a private company (which it partially owns and shares executives with) for use in developing customer service chatbots. They were always very open about this on their website. The company provided a little financial support to CTL in return. There have been no reported incidents of the data being leaked or otherwise misused. They stopped sharing the data in 2020.
Politico ran an article a few days ago where they interviewed some people employed in privacy and ethics who all said that it seemed pretty weird to do that, and some volunteers for CTL agreed. CTL pretty much right away decided to ask the private company to delete the data (it's not clear if that's happened yet because there is some separation between the two entities) and promised they wouldn't share the data with private companies in the future. CTL continues to use internal data analysis tools to triage incoming texts. They will also continue to share data with academic researchers and other non-profits on a limited basis.
My two cents: this was a really weird arrangement between a non-profit and a private company. Stopping the data sharing was the right call on CTL's part, and it should be a lesson for others, but in the end it's a "no harm, no foul" situation. CTL has been and continues to be one of the best avenues for crisis support.
I find this defense pretty uninspiring. It spends a lot of time addressing only partially relevant questions about data custody, but then very quickly glosses over the decision to attempt to commercialize their work through a for-profit entity. There are numerous potential concerns around a nonprofit doing this, with what looks a lot like self-dealing being one of the most obvious, and the article just does not satisfy me that these issues were considered at all.
danah (whom I truly respect and like as a person) practices something Watzlawick (1921-2007) called "Drowning 'duh' in data."
It's an old sociologist's trick to make it seem as if you're talking about much more than you are, a way to turn a paragraph into a PhD thesis and thus obscure the paragraph's content and meaning.
Not quite as extensive and a little more to the point are the two blog entries posted by CTL:
> We understand that you don’t want Crisis Text Line to share any data with Loris, even though the data is handled securely, anonymized and scrubbed of personally identifiable information. As a result, we have ended our data-sharing relationship with Loris.
Which is a little bit of a "stop whining, it's not identifying" backhand. That skirts the issue of "we don't want you to make money off someone else's suffering" and the question of ethics review.
If I want to share my data, with other academics or with for-profits, I am required to go through an ethics review. danah boyd knows this; she's been in academia long enough. Others at CTL knew this as well. A good example is the non-identifying data we gather from MRI scans, which is used by the makers of MRI software to train a jitter-dejitter algorithm. The checkbox for sharing this non-image, non-personally-identifiable data with the company (Siemens, Bruker, etc.) is hidden behind an "I have ethics committee review, and here's the decision's file number and date" request.
CTL didn't get into trouble for sharing "data [that] is handled securely, anonymized and scrubbed of personally identifiable information" but for doing so without independent ethics review. And that's a huge no-no with mental or physical health data.
Side note, but the site's styles seem broken over HTTPS for me. Somehow, when people talk about privacy / security / health data and HTTPS is not enabled by default, I become skeptical.
tl;dr: danah boyd believes consent is optional, and that those in a crisis can have their worst moments preserved for eternity in service of a nebulous and ever-evolving greater good. oh, also, she struggled a lot with the decisions she repeatedly made that allowed this trainwreck to emerge.
Since they continue to share the data with external researchers, it appears they still fail to see their true failing, which wasn't choosing to share the data with a commercial entity but choosing to share it with anyone without explicit consent. Getting explicit consent to share data is difficult and will result in less data being available for research, but failing to get it because you think it serves a greater purpose is just paving the road to hell with good intentions.
After the crisis moment has passed, send a message requesting explicit permission to share the data, and unless that permission is given within a defined period of time, delete the data.
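To make that proposal concrete, here is a minimal sketch of the consent-or-delete logic in Python. Everything in it (the function name, the 30-day window, the outcome labels) is an assumption for illustration; it is not how CTL or anyone else actually handles this.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch only: the field names, the 30-day window, and the
# outcome labels are assumptions, not part of any real CTL system.
CONSENT_WINDOW = timedelta(days=30)

def post_crisis_retention(crisis_ended_at: datetime,
                          consent_given_at: Optional[datetime],
                          now: datetime) -> str:
    """Decide what happens to a conversation once the crisis moment has passed."""
    if consent_given_at is not None:
        return "retain"         # texter explicitly opted in to sharing
    if now - crisis_ended_at >= CONSENT_WINDOW:
        return "delete"         # no explicit consent within the window
    return "await_consent"      # still inside the window; consent request pending

# Example: crisis ended 45 days ago, no consent given -> data is deleted.
if __name__ == "__main__":
    now = datetime.now()
    print(post_crisis_retention(now - timedelta(days=45), None, now))  # "delete"
```

The point of the design is simply that deletion is the default and retention requires an affirmative opt-in from the texter.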
I would say the author believes consent is more complicated than we sometimes pretend it is, and yes, they struggled with the ethical implications not only of this decision but of other decisions along the way.