Are Google and Facebook Evil?

Doug Black on September 11, 2018

Just a question I've been pondering lately. I'm working on a side hustle called Engauge Analytics, something like Google Analytics with a machine l...
 

They're as evil as food companies marketing processed crap that makes us unhealthy and fat. It's the natural result of a free market economy where individual responsibility reigns supreme. Each of us has the choice to not use any of these products, but the products themselves are intentionally designed to exploit our human weaknesses.

Google, Facebook, et al. are just being good capitalists and exploiting a market. In some cases, they're inventing markets out of whole cloth by creating desires where there might otherwise be none.

I generally think this is bad, but I wouldn't say it's evil. Also, the ball is totally in our court to become more aware of these practices and how they affect us. We can decide to purchase other products with our attention, and if we do so, the free market will naturally adjust itself to our new consumer preferences.

We can also decide not to invest in companies that use this sort of emotional manipulation and guide the ol' free market ship that way too.

My point being that none of this stuff exists in a vacuum. We're the ones who hand over our data and our attention. We can better educate ourselves and just stop doing that if we're smart and strong willed enough.

 

The counterargument is that these tools are necessary in this day and age to participate in public discourse. It's a market failure when people don't have a choice.

 

Are they though? I'm perfectly content discussing things in person with my friends. Let's not give these companies more credit than is due. They made a fun addictive toy and now we're all having a come to Jesus moment about the implications of said toy.

Headline from earlier this year: Federal judge rules Trump's Twitter account is a public forum.

The question has also been posed pretty frequently lately in relation to concerns that have arisen over partisan censorship.

Well sure, it's a public forum. That's fine. We've shown that something as weak as Twitter can be considered sufficient to be a public forum. But it doesn't show that Twitter or Facebook or any of these toys is necessary for public discourse.

We did just fine for thousands of years without them. If they go away, I doubt many will miss them.

"The president now uses Twitter... everybody uses Twitter... All 50 governors, all 100 senators, every member of the House has a Twitter account. So this has become a... crucially important channel of political communication." - Supreme Court Justice Elena Kagan

Granted, that was a case about whether North Carolina could bar sex offenders from social media, but the implications are pretty clear.

 

Evil may not be the correct term. I don't believe there's any malicious intent behind the data. Negligent? Likely. Should something be done to regulate the Big Data industry? Absolutely.

But at the same time, it would be naive to think we can have all these services (Facebook, Google, Twitter, etc.) for free. The tradeoff is our data being sold to people beyond our control.

 

Google probably earned the term "evil" by using it themselves so much in the early days. Not that they ever wanted the "don't be evil" thing to be a public slogan in the first place. I feel like that's why they get called evil more often these days.

 

Good call to both of you. I agree, Google used "evil" and now it's kinda stuck.

 

First and foremost, something should be done about the "regulating things" industry. It's the uncontested biggest evil on Earth.

 

Are Google, Facebook, and the like, which use our personal data for their own good, evil?

I find this question is really hard to answer.

On the surface it seems a very clear question. If you read it superficially, it boils down to something like:

»Are others evil if they use what is ours for their own good?«

Something that makes your head nod. Of course, if others take something of ours and make money from it, that is evil, because it is ours and we should get some form of compensation.

Or you could read it in another way:

»Is someone evil when he or she tells other people things I do not want shared?«

This question opens a dark abyss where even darker questions are hiding.

Speaking of "our data":

  • Who owns "data"?
    If you tell me that you are going to buy a parrot, do you own this information? Do you still own it after you have told me? Do I own it?

  • What is "evil"?
    And what if I tell somebody else that you are buying a parrot, and he offers you food for the parrot because he heard from me that you are a big fan of parrots? Is that evil? Did I "steal" the information? And what if I had told you upfront that I have a good friend I often talk to, and that I would of course tell him you are going to buy a parrot? Was that "evil"?

Google and Facebook are similar in that respect: they both try to sell ads in one way or another, which is, from my point of view, not "evil", at least not per se.

That doesn't mean I am a strong advocate for, or against, those business models and platforms.

Honestly, I currently do not know exactly what to make of it. I have to think more about it. The only thing I can say is that it is hard to judge because clear terms are currently missing. We humans are used to thinking in terms of physical things. Transposing concepts like "ownership" from the physical world into the virtual one is not as easy as it first seems.

Disclaimer: No parrots were harmed during writing this post.

 

What if instead of you telling me you're buying a parrot, I as the owner of the mall use the security cameras to track the fact that you went into the pet store and came out of there with a parrot?

I also have facial recognition software running so I can identify you.

Is it OK if I sell this information to a third party for ad purposes? Maybe someone pays top dollar to direct market bird seed to you.

Shady? Not shady? Do you feel comfortable or uncomfortable in this scenario?

 

Yes. Indeed a good question.

Is there a difference between "telling" people the fact that I bought a parrot and showing them the surveillance videos? Or, as a middle ground, telling other people that you have first-class information, so that people know you are able to collect and classify it, and it would be best to pay you for showing the right ads to the right kind of people; because who knows better than you do?

Who owns the information in the surveillance video? Is it yours as the mall owner? Obviously it is your camera, but it is not obvious who owns "the fact". Is it mine, because as the agent I produced it? Is the fact shared between you and me? Or do we each own "different" facts?

And, philosophically, even more interesting:
How much power over people do you gain through knowledge about those people?

Does your knowledge about my recent visit to the pet shop give you any kind of power over me?

Say you could trick me into buying some kind of seed because you took advantage of the surveillance. Does the trick still work if I know how it works?

I think there are more questions at the moment than proper answers.

I am no longer continuing the parrot analogy because it is starting to fall apart, and I will continue using real world examples and information.

The short answer to your comment is this: No, there aren't more questions than proper answers.

If a user selects the "track everything about me" option when signing up/in to a service or device then all bets are off, obviously, and we aren't talking about those scenarios at all. We are talking about the scenarios where users aren't informed they are being tracked, aren't given the option to opt out, try to opt out but aren't able to, are tracked despite their best efforts not to be, or are misled about the amount of information tracked.

When a user decides to turn off location tracking, you shouldn't keep tracking their location. Providing a false opt-out mechanic is disingenuous at best or evil at worst.

If I sign in to a service I am well aware that my signing in and my login information is stored and tracked for whatever purpose, but I can be tracked simply by visiting the website (this is the analogy I made with the pet store and the mall surveillance camera). At no point in this scenario are you given information about being tracked or a choice to opt out.

There are absolutely ways for an adversary to use this information nefariously. At the end of the day it's not that big of a deal if ads are targeted to me based on my internet habits, but there are people living with death threats hanging over them for which tracking is literally a matter of life and death. The path from Google tracking my location to the wrong person being able to find out where I live is not a long and twisty path.

To add, the GDPR recently made clear that people can expect privacy on the Internet and to be fully informed about the information tracked and stored about them. And the UN Declaration of Human Rights says in Article 12 that:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

Proper answers exist unless you are a Big Data/Big Internet apologist.

The good thing about artificial examples is that they are sandboxed and easy to reason about. As I wrote in my first answer: real life is complicated and hard to reason about.

Mentioning the #GDPR does not make your "real world examples" any easier to reason about.

Proper answers exist unless you are a Big Data/Big Internet apologist.

What is to make of that statement? Does questioning put me automatically on one side or the other? Does thinking the arguments against big data are perhaps inconclusive make me in some way an "apologist"? I think not.

And slippery slopes like

The path from Google tracking my location to the wrong person being able to find out where I live is not a long and twisty path.

do not help.

In principle, your argument goes like this:
There is a law, which says x. Therefore x is right.

This might be the case. But it is not by necessity so.

When we are speaking of "tracking users", why should a user be asked to give consent? What exactly is the good that the law is protecting here?

If we speak of privacy: in what way is visiting a public website private? And why should it be seen as more private than visiting a physical shop instead of a webshop?

If you take the analogy of postal secrecy: the reasoning behind it is that confidential information is exchanged between a sender and a recipient. If you send me a secret, that is not a violation of postal law.

In what way is telling advertisers about your interactions on my site against your "privacy"? Why should that be treated as confidential?

 

Nobody is forced to hand them any data. If you do it anyway, it is not their fault.

 

Some objections to your statement:

  • It's difficult or impossible to know what data you're "handing" them.

  • Even if you try to stop handing them data, they still collect it. See: Google collecting location data.

  • They use fingerprinting techniques to track you even when you aren't logging in or submitting any data to them. Nowhere in that scenario are you handing them data; they are harvesting it.

But yes, on the face of it if someone clicks "I agree that you collect data" they're on their own. But reality is more complex than you let on.
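To make the fingerprinting point concrete: a tracker can derive a stable ID purely from attributes the browser sends automatically, with no login or form submission. This is a minimal sketch of the general idea, not any company's actual method; the attribute names and values are hypothetical examples.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash passively observable browser attributes into a stable ID.

    Every value here is something the browser exposes on its own
    (HTTP headers, JS environment); the visitor never "hands over" data.
    """
    # Sort keys so the same browser always yields the same hash.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical attributes a tracker might observe on a visit:
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/62.0",
    "screen": "1920x1080x24",
    "timezone": "UTC+2",
    "language": "en-US",
    "installed_fonts": "Arial,DejaVu Sans,Noto",
}
print(fingerprint(visitor))  # same browser configuration -> same ID each visit
```

Real fingerprinting combines far more signals (canvas rendering, audio stack, plugins), which is why opting out is so hard: the ID is rebuilt from the environment itself rather than stored in a deletable cookie.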

 

Wut?

I use LineageOS on my phone, duckduckgo search engine, maps.me geolocation and my own local mail service, etc.
For those who care about privacy and understand that G should GTFO, it is extremely easy.

 

My friends and family use GMail, meaning I can't communicate with them without sending Google all that data.

 

It's your decision - you could say you blocked Gmail. Admittedly, that wouldn't be nice.

Similar to messengers: I am not reachable over WhatsApp, although my family only has that...

I'm not going to shut out my grandparents because they use GMail, and I'm not going to try to convince them to switch off GMail after already being a jerk about not having a Facebook account.

Besides, GMail's limited algorithmic sorting is pretty innocuous; it's Search and Facebook, with their pervasive filter bubbling, that are actually the problem.

As for messengers... I've never had a problem convincing people to SMS me.

Yeah, it's tough. It's the same reason I still use Facebook: without it, I would be cut off from a large number of family members that only use FB and don't communicate by other means.

 

no - they are not evil
yes - they are dangerous
yes - there are some who use the platform with malevolent intent

as a programmer, Google's algorithm is useful - it knows I want a particular bias to my search - but I use that with informed intent

but a very good question

 

Yes. They support laws that help entrench their monopolies and infringe on individual rights, a free market, and free speech.

They definitely mean to be doing this, it's not an accident, it's not just business as usual.

 

Interesting, because this statement has certainly been on my mind recently, but I would not necessarily call them "evil". That's what they ultimately got themselves into. For a better perspective on this (on the Facebook case) I recommend the Medium article written by Nat Eliason.

 

Whether it is good or evil is decided by how it affects your life.

For example, Facebook can find users by phone number
(it actually shows up under "People you may know").

  • If it's a friend, that is a good thing, because I get a shortcut to add them as a friend
  • If it's a colleague, that may be a bad thing, because I don't want to see them in my private life

Sorry for my English

 

Privacy is dead, and it is not coming back.
The best we can do is ensure that information is available to all, not just accumulated in the dungeons of these companies.
And I mean truly available, not just "let the authenticated user download a zip of only their raw unprocessed inputs if they want"

 

I have two points to add:

  1. The government wants to handle the internet; the government wants to own the internet. So when you say that we need to regulate the Data Industry, this could be a trap (MAYBE), because the government could say: oh OK, let us handle it for you guys (and that would be a serious mess). So the problem is even bigger: how do we regulate the Big Data industry without giving the internet to the government?

  2. When we say that services like Google don't force anybody to hand them any data, it's the same as saying that cars don't force anybody to leave home if you have an accident.

Complex question, man.

 
 

Evil is commonly defined as morally or ethically bad. Google and FB do not hide the fact that they use your data for monetary gain; I think it's a well-known fact. They are, however, not always super transparent about how much data they are collecting on you. I believe that in using these services we pay with our data. Once traded, your data is no longer yours. It's theirs. Thus I don't think they are evil; I think the medium of payment has changed, and people in general were ill-equipped to understand the change. In short, we should better educate everyone on how these services make money, and let the market decide. If privacy becomes more valuable to people than convenience, it will appear as a successful market solution.

 

Depends on your own morality, of course, but let's say you believe coercion and fraud are immoral. I could see one making an argument that fraud is in play, because the app users don't understand how their data is used. But contract law doesn't provide good protection for uneducated parties.

I'd say yes. Definitely evil. But not causing enough harm to warrant an intervention by the justice system. At least European lawmakers have attempted to enable avenues for justice to take hold, but it's pretty ineffective.

I guess you could also argue an oligopoly situation (coercive in nature), which would introduce all sorts of regulation and government oversight, which rarely does any good. Not to fault oversight agencies, but they're tasked with the nearly-impossible.

 

We might call a human evil if they consistently made moral choices to do evil instead of good. I don't think it necessarily makes sense to label people in this way, but we'll go with that.

Corporations don't make moral choices, they make economic ones. They might emulate human morality by engaging the morality of their human employees, but this is mostly camouflage. At best, a corporation's culture might have a vestigial morality left over from when the corporation was a small business, but it will likely be turned into a marketing tool.

 

I'd say no. Everyone acts in their own self-interest, including these companies. Doing so is not necessarily evil until it is intentionally at the expense of others. But even without intent, it is very easy to infringe on someone else's rights negligently (not considering or taking appropriate precautions) in pursuit of self-interest. It happens all day, every day, and is pretty much the whole reason behind civil court (versus criminal court). The reason this issue is raised to the level of congressional hearings is due to the scale at which it has happened.

 

I feel like users should have a bit more control over their data. However, knowing I would do exactly the same if I was at their position I can't really mark them as evil. 🙄😶

 
 

I see a lot of sympathy for the devil in here, but this one is a no-brainer: they are absolutely evil. They prey on human weakness, and we can hardly defend ourselves. Tell me how I'm wrong.

 
 

Dude, thanks for the heads up on the form! I'll ping you when the beta is ready.

 