I think it's clear from nearly a decade of Facebook's run-ins with privacy scandals that it is a company with no fundamental stance in favor of user privacy. Over and over it has intentionally built features that disregard privacy, shipped the wrong defaults, or simply brushed off criticism when others have misused Facebook. At this point we should be asking ourselves why we expect Facebook to change on its own.
There isn't a good alternative to Facebook (given the network effect and the legitimate features that Facebook provides), but it's probably important that there be public oversight of Facebook (and any other social media company that becomes as large and powerful as Facebook).
We should also be asking ourselves whether an organization that has the power to be so influential in the destinies of nations should be run by executives who see every issue as a PR issue to be dealt with, rather than ones that should teach them fundamental lessons. If the leadership is so reluctant to tackle these issues head-on, maybe they need to be replaced with a humbler set of leaders.
I agree that FB has shown time and time again that they consider user privacy a mere speed bump. Pervasive dark patterns. I disagree that specific regulatory oversight is needed. Advertising-supported websites will always remain loyal to their real customers: advertisers. Users are not willing to pay money, and requiring a cover charge hurts adoption. What used to be called spyware is now "analytics." People proudly proclaim targeted advertising as a benefit for users. If we want privacy to be respected we need a right-to-privacy law. It's hard to be a sustainable business if everyone else is racing to the bottom. I'm waiting for FB's market share to tank.
> I disagree that specific regulatory oversight is needed.
- and -
> If we want privacy to be respected we need a right to privacy law.
You're almost disagreeing with yourself unless you see a significant difference between regulatory oversight and privacy laws. Both reflect government authority over Facebook (etc.) to protect its citizens.
Myself, I would be happy with either option. We have neither now.
I don't always want privacy though. Is privacy at odds with accountability?
We were talking about how disclosing salary and compensation information for everyone would help everyone in the long run and some people objected saying they want privacy.
We have a lot of information that some people know but most do not. We have to think carefully about what is private and what isn't. For example, your employer knows how much they pay you. I say it isn't private.
There are varying degrees of privacy. Just because you share some information with some people doesn't mean you want to share all information with everyone.
That said, I agree that opaque salaries hurt only the employee as the other side of the market (employer) has full knowledge.
> We were talking about how disclosing salary and compensation information for everyone would help everyone in the long run and some people objected saying they want privacy.
> For example, your employer knows how much they pay you. I say it isn't private.
It's also likely that your employer has shared your salary with other employers, so when you interview for a new job they know how much you're currently making without having to ask. For an example, see:

https://en.wikipedia.org/wiki/The_Work_Number

http://acceptance.theworknumber.com/Employees/DataReport/

I requested my report recently, and it had the last 5 years of my salary history, updated monthly. Salary data isn't private, because everyone who can use it against you already has access.
To be clearer, I was contrasting regulations on just Facebook, or just social media companies, with an informed-consent, opt-in privacy law covering all personal information. Sector-specific privacy laws already prohibit your phone company and bank from using your logs for targeted advertising or selling them outright to the highest bidder. When there is criminal or civil liability for hoarding personal information that can be abused internally or leaked, companies will think twice before setting up such databases. Of course there will still be the Ubers and Equifaxes of the world, but it will shift the burden to the data brokers, who will no longer be able to fall back on noting that compiling scarily detailed dossiers is legal.
I think you answered your own question. The reason people expected Facebook to change is that people don't seem to understand the general rule: when a large company makes a statement, it is invariably public relations. They will say whatever they think will receive the most positive response from the population, regardless of the truthfulness of that statement, at least so long as it cannot open them up to legal counteraction.
Facebook's behavior is not going to change with a humbler set of leaders. Their entire business model is based around collecting sensitive information. And they're starting to reach the absolute extremes of market saturation, meaning that the only way they can continue to grow is to offer more effective advertising, which relies on harvesting even more data from their users. If you want a social media service that respects users' privacy and rights, it would need to be either a non-profit decentralized service or a paid service. The former would lack the funds to reach critical mass, and the latter would fail because nobody's going to pay to join a small social media service, let alone the subscription model that would be necessary.
All I need is a better chat app that Facebook won't buy. I can now text, record voice messages, and send photos/selfies and video messages easily to my social circles. Connecting to second-degree connections is fast, and everyone I know has an account.
I want to leave Facebook but I can't give up such an amazing chat experience that keeps me connected to my family and close friends. Even if I got my immediate family to use a different app, I'd have to keep Facebook for old friends :( .
Using only WhatsApp, without other Facebook accounts, should still be fine. WhatsApp has end-to-end encryption (unlike FB Messenger), and if you don't use other FB products they can't use your data for ads. Obviously not as good as an independent messenger, but it causes the same revenue loss for FB.
I believe that’s only true for text. Try sending someone a video: it takes a while. Try forwarding that video to someone else moments later: it happens pretty quickly.
The same applies to photos in my experience. I’d assume voice notes, location pins, etc. are also subject to this.

WhatsApp reduces the quality of your media and saves the result in a second folder of your gallery. If you forward already-converted media, it is taken from the gallery in its compressed, substantially smaller form.
On iPhones it shows a splash screen for compression and then a separate spinner in the chat for the upload progress of videos. That second upload spinner doesn’t occur for immediately forwarded videos.

That is interesting and different from what I have experienced on Android; I've no explanation for it.
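The forward-is-fast behaviour described here is consistent with a compress-once cache: the first send pays the transcoding cost, and a forward reuses the cached file. A minimal sketch, assuming nothing about WhatsApp's actual implementation; the function name and the `compress` callable are hypothetical stand-ins:

```python
import hashlib
from pathlib import Path


def send_video(src: Path, cache_dir: Path, compress) -> Path:
    """Return a compressed copy of `src`, reusing a cached one if present.

    Hypothetical sketch: the first send compresses and caches the result;
    forwarding the same file later is a cache hit and skips compression.
    `compress` stands in for a real transcoder.
    """
    cache_dir.mkdir(parents=True, exist_ok=True)
    # Key the cache on the content hash, so the same file maps to one entry.
    key = hashlib.sha256(src.read_bytes()).hexdigest()
    cached = cache_dir / f"{key}.mp4"
    if not cached.exists():
        # First send: pay the compression cost once and store the result.
        cached.write_bytes(compress(src.read_bytes()))
    # Forward: the cached, already-compressed file is reused as-is.
    return cached
```

Under this design the second upload spinner (compression) would naturally vanish for forwards, matching the observed behaviour.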
It's all encrypted. Encryption in group chats is a bit weaker to provide better usability (there was an article about this recently, I believe), but the messages themselves are encrypted. And since we're talking here about the use of data for ad tracking, any encryption is likely good enough.

They can of course still track metadata, e.g. tailor your ads based on the interests of your contacts. But if you only use WhatsApp (which is ad-free), they won't make any money with it.
> if you don't use other FB products they can't use your data for ads
They're still siphoning your data. Just refuse WhatsApp access to your phone's address book and see how it outright punishes you for it. Most of my WhatsApp contacts maintain their own names in their profiles, yet WhatsApp stubbornly shows their phone number as the contact name. I usually identify contacts by looking at the profile picture (or by the conversation history if they changed that recently). Then again, I only have 3-4 contacts on WhatsApp. I would be less willing to put up with this if there were dozens of active contacts in there.
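The complaint boils down to a fallback order. How you'd expect contact names to resolve can be sketched as below; this is purely illustrative (the function and field names are invented), and the point is that WhatsApp reportedly skips the middle step when address-book access is denied:

```python
def display_name(contact: dict, address_book: dict) -> str:
    """Resolve a contact's display name with a friendly fallback chain:
    1. the name you gave them in your address book,
    2. the profile name they set themselves,
    3. the raw phone number only as a last resort.
    Hypothetical sketch; not WhatsApp's actual logic."""
    phone = contact["phone"]
    if phone in address_book:
        return address_book[phone]
    if contact.get("profile_name"):
        return contact["profile_name"]
    return phone
```

With address-book access denied, step 1 always fails, but step 2 would still show the self-chosen profile name rather than a bare number.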
> that Facebook won't buy

Facebook will buy, because it’s profitable to buy. Monopoly power is a known mode of market failure. (Note: you don’t need 100% of a market for the deleterious effects of monopoly power to manifest.)
The only solution is blocking future acquisitions and breaking up Facebook.
Facebook won't buy IRC or any distributed, open source, decentralized protocol. Chats, newsgroups, email... The net used to be a lot more open and decentralized than it is now; maybe it's time to go back a couple of decades?
Won't they do with decentralized chat what they did with XMPP: integrate support for a while to keep users from getting away, then drop that support again when the time is right?
How about we stop suggesting a messaging app that styles itself as privacy-oriented and encrypted despite not being end-to-end encrypted by default, and that uses questionable encryption even for the optional E2E functionality that nobody uses?
> We should also be asking ourselves whether an organization that has the power to be so influential in the destinies of nations should be run by executives who see every issue as a PR issue to be dealt with, rather than ones that should teach them fundamental lessons.
That's an extremely insightful statement but unfortunately also describes the DNC and RNC, which are already in control of the US.
A tangential sidenote on who is actually in control: this 5-minute snippet from the new documentary 'Do You Trust This Computer?' presents an interesting perspective on how easily FB can be used to influence entire nations. It poses an interesting question about who is driving our electorate.
It seems like if FB is acting in its own best interest to promote user engagement, then it'll be very hard for it to prevent bad actors from weaponizing those very algorithms to influence large swaths of people.
The DNC/RNC really only matter for Presidential politics. The State and county committees are what matter for lower elections, including House and Senate elections.
I would note that the DCCC and DSCC primarily support (or don't) Democratic candidates after the primary. Unlike the DNC, they do not usually play a role in candidate selection. They also don't really try to influence policy; they usually support candidates where they believe that their extra help will make a difference, and provide less support in extremely safe or totally dead races, and not based on how "good" of a Democrat the candidate is.
Yes, as citizens we should do as much as possible to diminish their power.
I've posted lots of comments about removing parties from the ballot process. Let them endorse a candidate, whatever, but private organizations shouldn't have any direct role in determining which names appear on ballots.
We the people should stand up and say, "I will never again vote for someone who doesn't run their campaign solely on public funding."
Who gets public funding? Anyone with X signatures, where X is relatively small.
Having our politicians spend almost all of their time fund-raising, and designing their campaigns and message around improving their fund-raising, is just destroying us.
This is NOT really that big of an issue in itself; we make it one because the USA does not have any serious regulations on data privacy. In most European countries, if someone sells your info, such as your DOB, credit card balance, or property ownership, a prosecutor shows up at their door with handcuffs, even if only temporarily. And if you run a major bulk operation, you go behind bars for many years.
No scandal in the USA will make any difference if we don't implement draconian laws. Look at what's going on around GDPR: it's not even in force YET, but everyone knows about it and every email provider is shaking in its boots. Every company I do business with has already sent me some sort of acknowledgment or its own policy to accommodate GDPR, and these aren't even European companies!
I am all for small government, but we clearly need a brand-new department to regulate behemoths like YouTube (which has the power to squash your First Amendment rights) and Google/Facebook et al., who continuously make money off their immoral handling of your information.
I'm no fan of Facebook and deleted my account years ago, but the flood of articles like this, each barely embarrassing on its own since the major Cambridge Analytica scandal, feels more like an agenda or piling-on than newsworthy reporting.
There are many reasons to reduce or eliminate your time on FB, but this isn't one of them.
The behaviour described in this article implies a culture willing to look one way for users and another way for themselves. That’s pertinent less to users and more to lawyers and lawmakers.
I dislike FB as much as the next dev here, but surely it's a widely accepted (and beneficial) practice that employees have special privileges in the apps they build. Imagine using this argument to say that Facebook couldn't shut down a page/group/account.
Yes, it is technically the same abilities being used. The reaction comes because of how they're used: to indiscriminately delete old messages that may potentially be harmful to themselves while not offering the same options to its users.
On the contrary, everyone's willingness to accept this kind of behavior as normal indicates how far the Overton window has moved when it comes to the trustworthiness of our digital services. We need a collective effort to shift things back where they belong.
I disagree, privacy-conscious people have been warning that Facebook has been doing these things since forever, so people (especially here on HN) can't claim that they didn't know these things were going on. This is just the same old story: people being upset about the consequences of what they were warned about many many times.
I, too, wish for a better quality of collective decision-making on the part of humanity, but this seems to be how "we" do these things. "What, me worry?" is probably the central recurring theme of history, if you're inclined to look at things a certain way.
Saying. And what was said was obviously not heard. Tangibility helps any pitch. We shouldn’t fault people for agreeing later versus sooner; at the end of the day, agreement trumps the alternative.
Either Overton window does not mean what I think it means, or you misunderstood me. Sure, privacy-conscious people have been warning others about a whole bunch of things, but the general discourse has shifted, so those people were frequently treated almost like conspiracy theorists.
Besides, I'm not only talking about privacy, I'm talking about general loss of trustworthiness that comes from moving away from the "confederate Internet" towards the "feudal Internet". Back in the POP3 days, if the message got delivered to your mail client, the only way it got deleted would be because of something that you did (even if it's "fail to have backups when your machine caught fire"). Nowadays you have people acting like Facebook deleting a message from your "inbox" (not an e-mail inbox, but same terminology is being used) is not a big deal.
The rationale is: "Well of course they can do that, it's their server and their service." And that's precisely my point.
Building products is all about finding the right balance. An engineer's mind often wants to expose every possible option to users, just because it can.

But I have never seen that work with users at scale; you need to curate, and give people only the controls they absolutely need.
Facebook, however, has a massive codebase. Their app is easily one of the biggest Android apps; in fact it's so huge they repeatedly (?) had to hack the Android runtime environment just to get it to run.
Their backend and frontend code is likely also a huge monster. In addition they're doing ML/AI stuff which is even harder to debug.
> You’re implying their ML/AI interfered with Mark’s dislike button?
I don't imply it, but given how many spam filters these days are realized as more-or-less-black boxes, it is a plausible theory.
In addition, at least for Twitter and FB, there has long been suspicion that reports/blocks have faster consequences when many people report/block, as automated filters take over before a human can look at it. I can imagine a filter that gets triggered when a user receives N reports/blocks in M minutes, with e.g. N=100 and M=30, to automatically flag users as potential spammers, unsolicited-nude senders, or whatever. This hit Zuckerberg's account, the spam filter blocked it... and then someone was tasked with preventing that, by disabling the block function based on certain criteria.
Also, the bug seems to have affected only users blocked by thousands of people. Not exactly your usual testcase, and I don't know if I'd have thought of this scenario when writing the test specifications.
You are assuming that there are test specifications; in many organizations, thorough testing before shipping is a thing of the past. A/B testing and testing in production are the new norm, unfortunately. That's probably part of the problem. Since the users are the product being tested, it even makes sense.
Ha! That is actually the kind of technical quirk I can believe. Maybe it's because I so rarely use the block feature myself that I assume others must also use it very rarely, and that few users are ever blocked (since the previous paradigm was opt-in following).
It seems like FB employees do not need a particularly good reason to go play around with admin tools editing user accounts. If they can edit user accounts this lightly, what does it take for them to look through your private posts or messages?
Your screenshots show no evidence of "facebook staff" doing anything. Looks to me like an automated notification triggered by a username change on your account. I don't use Instagram so maybe I'm missing something, but you should have done as it said and changed your password after clicking "revert this change".
>Your screenshots show no evidence of "facebook staff" doing anything.
Are you suggesting that Mat Henley from Uber hacked me (or hired someone to hack me?), or that I myself gave up the name and am telling lies now? I think both of those sound tremendously unlikely.
I am 100% sure that neither of these accounts were compromised.
>but you should have done as it said and changed your password after clicking "revert this change".
I did, that's how I found out that the button does not actually revert the change, it just takes you to a prompt which makes you reset your password.
I'm not suggesting anything. All I'm going off is the screenshots you provided which you described as "Screenshots of Facebook staff forcefully resetting the account names".
I don't know what you're suggesting either... that Henley picked up the phone to Instagram and said "I want @Mat, give it to me"?? Maybe ask Henley directly: send him a message and ask for your name back. If I were you I'd be pissed off and wouldn't let it slide.
I got Mat Henley's cell number from a friend and contacted him via Signal very shortly after this happened. A day later I got a reply from him:
“knock it off, Adrian”
I don’t know any Adrians, he didn’t reply after that.
Then it looks like it was a mistake in the name registration process, because @Mat is listed in Google's cache going back to mid last year. You shouldn't have been able to register that name.
Whoa!!! This sounds nefarious and also highly problematic. If employees can arbitrarily change names, how can anyone trust that they couldn’t change entire conversations, posts, media, etc., and shame someone (at best) or implicate someone in a heinous crime or in plotting one? It’s bizarre if such changes are possible with no oversight, policies, or punishments.
Facebook exists for one reason only, Mark's desire to be rich. Everything else is completely opportunistic. If you see it through that lens, everything they've ever done makes complete sense.
If that were the case, he'd find a replacement CEO, sell his shares, and be done with it. He's already rich enough to never have to lift another finger for the next 10 lifetimes.
I went from a salary of a few thousand per month to my first sale for $1MM in 2004. I thought I was rich. Then I started meeting people much richer. My last exit came with a $17MM check. Guess what: I still feel poor.
The point being: rich people don't look down, they look up at people who are richer or truly wealthy (few people in America are, such as Buffett, Gates, etc.). I assure you Zuck doesn't sell his stock to be rich and "done with it"; he's looking at Bezos's fortune, and that's what he's aiming at. Greed never stops, and there is always something or someone bigger to look at.
Fun fact: I have a buddy in Bezos's close circle, and he told me Bezos is not looking up at anyone but keeps growing like crazy NOT to lose his position. So there you have it: always a reason to get up just another morning and make just another buck.
> Bezos is not looking up at anyone but keeps growing like crazy NOT TO lose his position
And what are the consequences if he does lose his position? He still has a literally incredible amount of power. Is the drive to be at the top of the heap a necessary trait of amassing wealth?
The way you write about competition amongst the rich makes it look like a pissing contest in a community pool.
I don't understand your question. What are the consequences of losing a game of chess? Or of anything, for that matter...
> The way you write about competition amongst the rich makes it look like a pissing contest in a community pool.
Bingo! It really is, and one day, when you have a chance to spend some time with wealthy individuals, maybe at a party when they drink and open up, you will see for yourself how shallow and simple the game really is.
Thank you! I feel like I'm watching this trainwreck in slow motion by myself, as we used to inch - but now barrel - towards regulating the internet into oblivion. This is ironic, of course, because the internet is meant to be decentralized and deregulated, and the very people pushing for regulation are the same ones who oppose legislation in other areas (e.g. encryption). It's a very short hop from wanting Facebook to comply with some law, to compelling Facebook to turn over user data to verify the law has not been broken.
The basic endpoint of Facebook and Google, whatever their stated objectives, is to build detailed long-term profiles of every individual, like the Stasi or any totalitarian state.
This appetite for user data is so great that simply tracking location and content across devices is not enough. Here is Facebook trying to do deals with hospitals for access to all their patient data [1].
It's easy to see where this goes and how encompassing it becomes. People in the ad ecosystem will hand wave, diminish and deny but that's just self interest. For wider society the effect is sinister.
This is another example of Facebook valuing its own privacy while continuing to disregard the privacy of its users. I always gave them the benefit of the doubt, but it's getting ridiculous.
This seems like FB expects to get hit with a discovery process and injunction against deleting anything. It's housecleaning before they're forced to open the curtains.
The continuing saga of people's realizations that raccoons have been digging through the collated data of 2 billion "dumb fucks" as Zuckerberg would call them.
The folks in Myanmar apparently have taken issue with how Zuckerberg referred to FB's "tools" and "systems" for detection of hate speech in his Vox interview. They sent him a letter:
Blaming Facebook for not stopping chain letters spread on messenger is disingenuous, especially in the context of the privacy debate.
If you believe people have the right to private communication, there's nothing a technology company can do to fix social problems like this, you need social measures, international pressure and so on.
Delete buttons cut both ways. You can yank content but so can others. Of course you don't have assurance it's been truly deleted and someone could save a copy. Even before I quit it was hard to tell if the submitter wanted it gone or if it was a technical glitch, censorship, or displaced by ads.
If you want to be cynical it's kind of like the memory hole in 1984. More charitably, it's not that different from broken links (except that broken links leave a trace).
The article specifies that messages from before 2014 were deleted. I read once (though I can't find the source now) that Microsoft deletes all chat and email messages older than a certain date.
I think the reason is legal. If ever subpoenaed, providing too much chat info could be potentially damaging. I feel like the insinuation is that something specifically nefarious is going on. They can delete private messages in their own chat program for their executives as much as they want to. I also don't understand why this is considered bad behavior or why people somehow think something nefarious had to occur for them to want to do so. It may just be standard practice at many giant corporations.
1) Users are explicitly not allowed to do this. You can delete things in your inbox, you can’t delete things in other people’s inboxes.
2) It conveniently starts happening right when FB finds itself in a shitshow, and with a CEO who famously said, “They trust me — dumb fucks”. There is no evidence that this is anything except something very, very new.
3) Document retention policies only work for your company. Not someone else’s. I have no right to go into your documents and shred things I no longer want you to have.
WhatsApp reduces the quality of your media and saves it in a second folder of your gallery.
If you forward already converted media, it will be taken from the gallery in its compressed, substantially smaller version.
On iPhones it has a splash screen for compression and then a separate spinner in the chat for the upload progress for videos. That second upload spinner doesn’t occur for immediately forwarded videos.
That is interesting and different from what I have experienced on Android. I've no explanation for that.
It's all encrypted. Encryption in group chats is a bit weaker to provide better usability (there was an article recently, I believe), but the messages themselves are encrypted. And since we're talking here about usage of data for ad tracking, any encryption is likely good enough.
They can of course still track metadata, e.g. tailor your ads based on the interests of your contacts. But if you only use WhatsApp (ad free), they won't make any money with it.
> if you don't use other FB products they can't use your data for ads
They're still siphoning their data. Just refuse WhatsApp access to your phone's address book, and see how it outright punishes you for doing that. Most of my WhatsApp contacts maintain their own names in their profiles, yet WhatsApp stubbornly shows their phone number as contact name. I usually identify contacts by looking at the profile picture (or by the conversation history if they changed that recently). Then again, I only have 3-4 contacts on WhatsApp. I would be less willing to put up with this if there were dozens of active contacts in there.
> that Facebook won't buy
Facebook will buy because it’s profitable to buy. Monopoly power is a known mode of market failure. (Note: you don’t need 100% of a market for the deleterious effects of monopoly power to manifest.)
The only solution is blocking future acquisitions and breaking up Facebook.
Facebook won't buy IRC or any distributed, open source, decentralized protocol. Chats, newsgroups, email... The net used to be a lot more open and decentralized than it is now; maybe it's time to go back a couple of decades?
Won't they do with decentralized chat what they did with XMPP: integrate support for a while to keep users from getting away, and then drop that support again when the time is right?
Like...Slack.
How about: https://telegram.org/
How about we stop suggesting a messaging app that styles itself as privacy-oriented and encrypted despite not being end to end encrypted by default, and using questionable encryption even for the optional E2E functionality that nobody uses?
E2E encryption enabled by default wasn't in the OP request. But instead of being snarky, how about you suggest a better alternative?
Signal.
I'm pretty sure what we all need is a better chat protocol, and multiple apps that implement it.
No telegram with several hundred million users?
> We should also be asking ourselves whether an organization that has the power to be so influential in the destinies of nations should be run by executives who see every issue as a PR issue to be dealt with, rather than ones that should teach them fundamental lessons.
That's an extremely insightful statement but unfortunately also describes the DNC and RNC, which are already in control of the US.
https://vimeo.com/263108265#t=3513s
Tangential sidenote on who is actually in control. This 5 minute snippet from the new documentary 'Do You Trust This Computer?' presents an interesting perspective on how easily FB can be used to influence entire nations. Poses an interesting question of who is driving our electorate.
Seems like if FB is acting in its own best interest to promote user engagement, then it'll be very hard for it to prevent bad actors from weaponizing those very algorithms to influence large swaths of people.
The DNC/RNC really only matter for Presidential politics. The State and county committees are what matter for lower elections, including House and Senate elections.
I only count two seats which are not either (D) or (R). I’d say the DNC and RNC still matter quite a bit for these elections.
https://www.dailykos.com/stories/2017/4/17/1653154/-DNC-DCCC...
The DCCC handles House elections and is nothing to do with the DNC. DSCC is the same for the Senate.
I would note that the DCCC and DSCC primarily support (or don't) Democratic candidates after the primary. Unlike the DNC, they do not usually play a role in candidate selection. They also don't really try to influence policy; they usually support candidates where they believe that their extra help will make a difference, and provide less support in extremely safe or totally dead races, and not based on how "good" of a Democrat the candidate is.
Yes, as citizens we should do as much as possible to diminish their power.
I've posted lots of comments about removing parties from the ballot process. Let them endorse a candidate, whatever, but private organizations shouldn't have any direct role in determining which names appear on ballots.
My personal take:
We the people should stand up and say, "I will never again vote for someone who doesn't run their campaign solely on public funding."
Who gets public funding? Anyone with X signatures, where X is relatively small.
Having our politicians spend almost all of their time fund-raising, and designing their campaigns and message around improving their fund-raising, is just destroying us.
This is NOT really that big of an issue. We make it this way because the USA does not have any serious regulations on data privacy. In most European countries, if someone offers up your info, such as your DOB, credit card balance, or property ownership, a prosecutor knocks on his door with handcuffs, even if only temporarily. And if you ran a major bulk operation, you go behind bars for many years.
No scandal in the USA will make any difference if we don't implement draconian laws. Look at what's going on around GDPR; it's not even here YET, meanwhile everyone knows about it and every email provider is shaking in their pants. Every company I do business with has already sent me some sort of acknowledgment or their own policy to accommodate GDPR, and these are not even European companies!
I am all for small government, but we clearly need a brand new department to regulate behemoths like YouTube (which has the power to squash your First Amendment rights) and Google/Facebook et al., which continuously make money off of their immoral behavior with your information.
I'm no fan of Facebook and deleted my account years ago, but the flood of barely embarrassing articles like this since the major Cambridge Analytica scandal feels more like an agenda or piling on than newsworthy reporting.
There are many reasons to reduce or eliminate your time on FB, but this isn't one of them.
That’s because you misunderstand the intent of this pile-on. The intent is to ensure full compliance in 2020 elections.
Using bad arguments is rarely an effective way to get what you want.
In business. In politics it works.
> The intent is to ensure full compliance in 2020 elections.
I don't see how these actions help Facebook be compliant in any way.
I think he means "toeing the (establishment/party) line", not "complying with regulations".
That’s exactly right.
The behaviour described in this article implies a culture willing to look one way for users and another way for themselves. That’s pertinent less to users and more to lawyers and lawmakers.
I dislike FB as much as the next dev here, but surely it's a widely accepted (and beneficial) practice that employees have special privileges in the apps they build. Imagine using this argument to say that Facebook couldn't shut down a page/group/account.
Yes, it is technically the same abilities being used. The reaction comes because of how they're used: to indiscriminately delete old messages that may potentially be harmful to themselves while not offering the same options to its users.
On the contrary, everyone's willingness to accept this kind of behavior as normal indicates how far the Overton window has moved when it comes to the trustworthiness of our digital services. We need a collective effort to shift things back where they belong.
I disagree, privacy-conscious people have been warning that Facebook has been doing these things since forever, so people (especially here on HN) can't claim that they didn't know these things were going on. This is just the same old story: people being upset about the consequences of what they were warned about many many times.
I, too, wish for a better quality of collective decision-making on the part of humanity, but this seems to be how "we" do these things. "What, me worry?" is probably the central recurring theme of history, if you're inclined to look at things a certain way.
We also used to accept that negative reinforcement was part of this learning process, but now it's called victim blaming.
> privacy-conscious people have been warning
Saying. And what was said was obviously not heard. Tangibility helps any pitch. We shouldn’t fault people for agreeing later versus sooner; at the end of the day, agreement trumps the alternative.
I feel like you're both right though. We can't claim we didn't know, but the fact that we accept this en masse is concerning.
Either Overton window does not mean what I think it means, or you misunderstood me. Sure, privacy-conscious people have been warning others about a whole bunch of things, but the general discourse has shifted, so those people were frequently treated almost like conspiracy theorists.
Besides, I'm not only talking about privacy, I'm talking about general loss of trustworthiness that comes from moving away from the "confederate Internet" towards the "feudal Internet". Back in the POP3 days, if the message got delivered to your mail client, the only way it got deleted would be because of something that you did (even if it's "fail to have backups when your machine caught fire"). Nowadays you have people acting like Facebook deleting a message from your "inbox" (not an e-mail inbox, but same terminology is being used) is not a big deal.
The rationale is: "Well of course they can do that, it's their server and their service." And that's precisely my point.
Hardly, if we cannot retract our messages from Facebook, neither should they!
I disagree; this shows a strong pattern of "do as I say, not as I do" in terms of privacy. And this is a prime example.
> Zuckerberg’s profile doesn’t show a button to add him as a friend on desktop, and the button is grayed out and disabled on mobile.
Of course he has special rules for himself. Fun fact: you cannot block Mark Zuckerberg on Facebook. It just won't let you.
Everyone can control who can send them friend requests.
Most users can only set it as restrictive as "friends of friends", but if you have 10k+ followers or are verified then you can restrict it further.
Which frankly makes a lot of sense!
Building products is all about finding the right balances... an engineer's mind often wants to expose all possible options to users, just because it can.
But I have never seen that work with users at scale... you need to curate, and only give people the controls they absolutely need.
> Fun fact: you cannot block Mark Zuckerberg on Facebook. It just won't let you.
This was fixed months ago (and was a technical problem that affected anyone blocked by many thousands of people, not just him). It wasn't intended.
>> It wasn't intended.
We have been hearing the "it's a bug" a lot lately from Facebook.
Agreed. I think they've pretty much used up their plausible deniability quota for the year, or decade...
Facebook, however, has a massive codebase. Their app is easily one of the biggest Android apps, in fact it's so huge they repeatedly (?) had to hack the Android runtime environment to allow it to run.
Their backend and frontend code is likely also a huge monster. In addition they're doing ML/AI stuff which is even harder to debug.
You’re implying their ML/AI interfered with Mark’s dislike button? Bit of a stretch.
> You’re implying their ML/AI interfered with Mark’s dislike button?
I don't imply it, but given how many spam filters these days are realized as more-or-less-black boxes, it is a plausible theory.
In addition, at least for Twitter and FB, there has long been suspicion that reports/blocks have faster consequences when many people report/block, as automated filters take over before a human can look at it. I can imagine a filter that gets triggered when a user receives N reports/blocks in M minutes, with e.g. N=100 and M=30, to automatically flag users as potential spammers, unsolicited nude senders, or whatever. This hit Zuckerberg's account, the spam filter blocked it... and then someone was tasked with preventing this, by disabling the block function based on certain criteria.
Also, the bug seems to have affected only users blocked by thousands of people. Not exactly your usual testcase, and I don't know if I'd have thought of this scenario when writing the test specifications.
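To make the hypothesis above concrete, the N-reports-in-M-minutes trigger can be sketched as a simple sliding-window counter. Everything here (class name, thresholds, the whole design) is an illustrative assumption about how such a filter might work, not anything known about Facebook's actual systems:

```python
from collections import defaultdict, deque
import time

class ReportRateFlagger:
    """Hypothetical sketch: flag an account once it receives
    n_reports within a sliding window of window_minutes."""

    def __init__(self, n_reports=100, window_minutes=30):
        self.n_reports = n_reports
        self.window = window_minutes * 60          # window length in seconds
        self.reports = defaultdict(deque)          # user_id -> report timestamps

    def record_report(self, user_id, now=None):
        """Record one report; return True if the user should be auto-flagged."""
        now = time.time() if now is None else now
        q = self.reports[user_id]
        q.append(now)
        # Expire reports that have fallen outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) >= self.n_reports
```

With a rule like this, an account blocked by thousands of users in a burst would trip the threshold long before any human review, which is consistent with the "it only affected heavily blocked accounts" explanation.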
You are assuming that there are test specifications; in many organizations, thorough testing before shipping is a thing of the past. A/B testing and testing in production are the new norm, unfortunately. That's probably part of the problem. Since the users are the product being tested, it even makes sense.
Ha! That is actually the kind of technical quirk I can believe. Maybe because I so rarely use the block feature myself that I assume others must also use it very rarely, and that few users are ever blocked (since the previous paradigm was opt-in following).
Back when I started using social media, the founder of the site wanted to be everyone’s friend.
There was a special South Park episode for that, Zuckerberg shouting "you can't block me" to everyone.
I recently saw the name @Mat available on instagram, I registered it. Twice.
Both times the name was taken from me by a Facebook employee and eventually given to this Mat Henley guy from Uber.
Screenshots of Facebook staff forcefully resetting the account names:
https://i.succ.in/WuwuV1IH.png https://i.succ.in/WuMMtKTr.png (The revert button does nothing, just offers to change your password)
It seems like FB employees do not need a particularly good reason to go play around with admin tools editing user accounts. If they can edit user accounts this lightly, what does it take for them to look through your private posts or messages?
This needs to be made more of an issue.
Your screenshots show no evidence of "facebook staff" doing anything. Looks to me like an automated notification triggered by a username change on your account. I don't use Instagram so maybe I'm missing something, but you should have done as it said and changed your password after clicking "revert this change".
>Your screenshots show no evidence of "facebook staff" doing anything.
Are you suggesting that Mat Henley from Uber hacked me (or hired someone to hack me?), or that I myself gave up the name and am telling lies now? I think both of those sound tremendously unlikely.
I am 100% sure that neither of these accounts were compromised.
>but you should have done as it said and changed your password after clicking "revert this change".
I did, that's how I found out that the button does not actually revert the change, it just takes you to a prompt which makes you reset your password.
I'm not suggesting anything. All I'm going off is the screenshots you provided which you described as "Screenshots of Facebook staff forcefully resetting the account names".
I don't know what you're suggesting either... that Henley picked up phone to Instagram and said "I want @Mat, give it to me"?? Maybe ask Henley directly, send him a message and ask for your name back. If I were you I'd be pissed off and wouldn't let it slide.
I got Mat Henley's cell number from a friend and contacted him via Signal very shortly after this happened. A day later I got a reply from him: “knock it off, Adrian”
I don’t know any Adrians, he didn’t reply after that.
Especially when we know Mat Henley is a former Facebook employee.
Hey, I'm a tech reporter and I'm interested in looking into this further. Can you email me, or perhaps provide contact info? My email is smann@inc.com
I've sent you an email, could go to spam, not sure.
Thanks, I'll check!
But _________mat____ is so much catchier.
How recently?
Around 20:00 (PDT) on Wednesday, April 04 2018.
Then it looks like it was a mistake in the name registration process, because @Mat is listed in Google's cache going back to mid last year. You shouldn't have been able to register that name.
Whoa!!! This sounds nefarious and also highly problematic. If employees can arbitrarily change names, how can anyone trust that they wouldn’t be able to change entire conversations, posts, media, etc., and shame someone (at best) or implicate someone as involved in a heinous crime or plotting one? It’s so bizarre if these are possible and have no oversight, policies and punishments.
Facebook exists for one reason only, Mark's desire to be rich. Everything else is completely opportunistic. If you see it through that lens, everything they've ever done makes complete sense.
If that were the case, he'd find a replacement CEO, sell his shares, and be done with it. He's already rich enough not to have to lift another finger for the next 10 lifetimes.
I went from a salary of a few thousand per month to my first sale for $1MM in 2004. I thought I was rich. Then I started meeting people much richer. My last exit came with a $17MM check. Guess what: I still feel poor.
The point being, rich people don't look down; they look up at people who are richER or truly wealthy (few people in America are, such as Buffett, Gates, etc.). I assure you Zuck doesn't sell his stock to be rich and "done with it"; he's looking at Bezos's fortune, and that's where he's aiming. Greed never stops, and there is always something/someone bigger to look at.
Fun fact: I know a buddy in Bezos's close circle, and he told me Bezos is not looking up at anyone but keeps growing like crazy NOT TO lose his position. So there you have it: always a reason to get up just another morning and make just another buck.
Not everyone that has money thinks this way. It's a special kind of freedom to be content with more than you need and not have wants.
> Bezos is not looking up at anyone but keeps growing like crazy NOT TO lose his position
And what are the consequences if he does lose his position? He still has a literally incredible amount of power. Is the drive to be at the top of the heap a necessary trait of amassing wealth?
The way you write about competition amongst the rich makes it look like a pissing contest in a community pool.
I don't understand your question. What are the consequences of losing a game of chess? Or of anything else, for that matter...
> The way you write about competition amongst the rich makes it look like a pissing contest in a community pool.
Bingo! It really is, and one day when you have a chance to spend some time with wealthy individuals, maybe at a party when they drink and open up, you will see for yourself how shallow and simple the game really is.
this can apply to literally any company
The problem with any regulation curtailing Facebook is it's practically guaranteed to hurt Internet freedoms. A lot.
Thank you! I feel like I'm watching this trainwreck in slow motion by myself, as we used to inch, but now barrel, towards regulating the internet into oblivion. This is ironic, of course, because the internet is meant to be decentralized and deregulated, and the very people pushing for regulation are the same ones who oppose legislation in other areas (e.g. encryption). It's a very short hop from wanting Facebook to comply with some law to compelling Facebook to turn over user data to verify the law has not been broken.
Regulating what corporations can do with your data is a bit different than regulating internet.
How? By this measure GDPR should also hurt internet, no?
It is, but in most part it is hurting spammers and shady ESPs :)
Proved again and again: https://news.ycombinator.com/item?id=16725506
The basic endpoint of Facebook and Google whatever their stated objectives is to build detailed long term profiles of every individual, like the stasi or any totalitarian state.
This appetite for user data is so great that simply tracking location and content across devices is not enough. Here is Facebook trying to do deals with hospitals to access all their patient data [1].
It's easy to see where this goes and how encompassing it becomes. People in the ad ecosystem will hand wave, diminish and deny but that's just self interest. For wider society the effect is sinister.
[1] https://www.cnbc.com/2018/04/05/facebook-building-8-explored...
This is another example of Facebook valuing its own privacy while continuing to disregard the privacy needs of its users. I always gave them the benefit of the doubt, but it's getting ridiculous.
"Do as I say and not as I do"
-Facebook Privacy Policy, circa 2018
Exactly. There are rules for the rulers, and different rules for the serfs.
Watch "The Circle"!! It is the famous moment when Tom Hanks says "oh we are so ....ed". And Tom Hanks can deliver a line!
Secrets are Lies, but their secrets are better than your secrets. I am surprised that FB still hasn't lost 30% of their users by now.
After 20 minutes I couldn't stand it any longer. Bad movie.
This seems like FB expects to get hit with a discovery process and injunction against deleting anything. It's housecleaning before they're forced to open the curtains.
I now wonder whether it's worth it to join Facebook just to delete my personal data and just quit afterwards?
I wouldn't even be ashamed to admit this in future interviews.
The continuing saga of people's realizations that raccoons have been digging through the collated data of 2 billion "dumb fucks" as Zuckerberg would call them.
"Nothing to hide"
Apparently Sandberg went on a press tour giving five or so interviews, apologising according to script in each one.
But apparently she veered a bit off script in one of them, on the Today show.
She said "our service depends on your data."
Apparently she said something like "If they want to opt out of sharing all their data, they will have to pay for it."
Source:
https://www.cbsnews.com/news/sheryl-sandberg-sticks-to-scrip...
Wondering what form of payment Sandberg wants. It's not like they give us the option to pay in money.
The folks in Myanmar apparently have taken issue with how Zuckerberg referred to FB's "tools" and "systems" for detection of hate speech in his Vox interview. They sent him a letter:
https://www.nytimes.com/2018/04/05/technology/zuckerberg-fac...
Blaming Facebook for not stopping chain letters spread on messenger is disingenuous, especially in the context of the privacy debate.
If you believe people have the right to private communication, there's nothing a technology company can do to fix social problems like this, you need social measures, international pressure and so on.
Send your old comms with Zuckerberg to the FTC for safekeeping.
The three sources asked to remain anonymous for fear of "angering Zuckerberg".
But they still wanted to share their story with TC.
Mike Myers (Dr Evil) thinks Zuckerberg is more hated than Donald Trump.
https://www.smh.com.au/entertainment/tv-and-radio/mike-myers...
Thing is a nothingburger. Do people really believe messages can’t be deleted no matter what?
It’s difficult to retract a message SMTP’d to my server and then POP3’d down to my desktop computer.
Sometimes the old protocols are the best
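The point generalizes: in a POP3 session (RFC 1939), a rough sketch of which follows, once the server has answered RETR, the message bytes are on the client's disk, and nothing in the protocol lets the server reach back and recall that copy (the octet count and mailbox contents here are made up for illustration):

```
C: USER alice
S: +OK
C: PASS hunter2
S: +OK maildrop locked and ready
C: RETR 1
S: +OK 1422 octets
S: (message headers and body follow)
S: .
C: QUIT
S: +OK bye
```

Contrast that with a hosted inbox, where the provider's own "delete" operation applies to your copy of the conversation just as easily as to its own.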
Delete buttons cut both ways. You can yank content but so can others. Of course you don't have assurance it's been truly deleted and someone could save a copy. Even before I quit it was hard to tell if the submitter wanted it gone or if it was a technical glitch, censorship, or displaced by ads.
If you want to be cynical it's kind of like the memory hole in 1984. More charitably, it's not that different from broken links (except that broken links leave a trace).
"the cloud is just someone else's computer"
The article specifies that messages older than 2014 were deleted. I read once (can't find the resource now) that Microsoft deletes all chat and email messages older than a certain date.
I think the reason is legal. If ever subpoenaed, providing too much chat info could be potentially damaging. I feel like the insinuation is that something specifically nefarious is going on. They can delete private messages in their own chat program for their executives as much as they want to. I also don't understand why this is considered bad behavior or why people somehow think something nefarious had to occur for them to want to do so. It may just be standard practice at many giant corporations.
1) Users are explicitly not allowed to do this. You can delete things in your inbox, you can’t delete things in other people’s inboxes.
2) It conveniently starts happening right when FB finds itself in a shitshow, and with a CEO who famously said, “They trust me — dumb fucks”. There is no evidence that this is anything except something very, very new.
3) Document retention policies only work for your company. Not someone else’s. I have no right to go into your documents and shred things I no longer want you to have.
Those are not bad points, but is there any evidence that this happened recently? I didn’t find that part in the article.
I certainly took the article to be implying that this was new behavior, but it now seems to have been something they quietly started back in 2014 or so.
https://www.theverge.com/2018/4/6/17203114/facebook-mark-zuc...