Many people, myself included, had a lot of concerns about the products the company was building and their effects on the world. When you work on a team whose mission is to help other teams make better decisions at lower cost, the aim is to look at the whole system and improve the whole thing.
Let me give you an example. Most "this content doesn't belong on FB" decisions are made by ML, but a great many go to human review. Imagine what that job is like. It's emotionally exhausting, it's poorly compensated, burnout is high.
My team had a model in production that used Bayesian reasoning to automatically estimate how likely a particular human reviewer was to have made the correct decision about content classification, and therefore, when two humans disagreed, how to resolve that impasse without getting a third involved. (And in addition we got a lot more information out of the model, including bounds on the true prevalence of bad content, and so on.)
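To make the idea concrete, here is a minimal sketch of how that kind of disagreement resolution could work. This is my own illustration, not the actual production model: I'm assuming a simple Beta posterior over each reviewer's accuracy (learned from audited decisions) and assuming that when two reviewers disagree on a binary label, exactly one of them is right. All names and the threshold are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    """Beta posterior over a reviewer's accuracy, updated from audited decisions."""
    correct: int = 0  # audited decisions this reviewer got right
    wrong: int = 0    # audited decisions this reviewer got wrong

    @property
    def accuracy(self) -> float:
        # Posterior mean under a Beta(1, 1) prior.
        return (self.correct + 1) / (self.correct + self.wrong + 2)

def resolve(a: Reviewer, b: Reviewer, threshold: float = 0.9) -> str:
    """Two reviewers disagree on a binary label, so exactly one is right.

    Returns 'a' or 'b' if one reviewer's posterior probability of being
    correct clears the threshold, else 'escalate' to a third reviewer.
    """
    p_a = a.accuracy * (1 - b.accuracy)  # A right, B wrong
    p_b = (1 - a.accuracy) * b.accuracy  # B right, A wrong
    p_a_given_disagreement = p_a / (p_a + p_b)
    if p_a_given_disagreement >= threshold:
        return "a"
    if p_a_given_disagreement <= 1 - threshold:
        return "b"
    return "escalate"

# A veteran with a strong track record disagreeing with a newer reviewer
# usually resolves automatically; two equally reliable reviewers escalate.
print(resolve(Reviewer(correct=95, wrong=5), Reviewer(correct=6, wrong=4)))   # "a"
print(resolve(Reviewer(correct=50, wrong=50), Reviewer(correct=50, wrong=50)))  # "escalate"
```

The payoff is exactly the one described above: the third human review only happens when the posterior is genuinely ambiguous, rather than on every disagreement.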
Does that save the company money? Sure. Millions of dollars a month. (And for the amateur bean counters elsewhere on this page: the data scientist who developed this model is NOT PAID MILLIONS OF DOLLARS A MONTH.) But it also (1) helps keep bad content off of the platform, so users aren't exposed to it, (2) lowers the number of human reviewers who come into contact with it, which improves their jobs, and (3) frees up budget for whatever improvements need to be made to this whole workflow.
That's just one example; everything that we did was with an eye towards not merely saving the company money, but improving the ability to make good decisions about the products.
I think you've avoided the original commenter's point completely. Facebook is a net negative to society. There is nothing you can do to improve FB products when the primary mission is to be an addictive ad machine.
>But it also (1) helps keep bad content off of the platform, so users aren't exposed to it, (2) lowers the number of human reviewers who come into contact with it, which improves their jobs, and (3) frees up budget for whatever improvements need to be made to this whole workflow.
Reading that this type of solution was created, and that the person who worked on it was laid off, makes me very sad as a Data Scientist.
I enjoy working as a Data Scientist, but I struggle a lot with the field. Lots of jobs are mostly about grabbing eyeballs or selling something. Some jobs are just total bullshit. Even the ones where you're doing something concrete (e.g. keeping a machine running), some days you still wonder if it really matters in the long run.
But with some of these social media safety topics, it can feel like a job has some meaning beyond just shuffling numbers around on a spreadsheet.
So it's disappointing to hear that people with the skills to create something like that are fired.