It is well understood that elections are increasingly influenced by information technology. Thanks to the Hillary Clinton email scandal, the fight between Apple and the FBI, and the DNC hacks, this election has seen terms like “server,” “encryption,” and “cyber attack” enter the debate more than in any previous campaign season. Hacks of the DNC and the Clinton campaign have shown that technology is no longer a forgotten policy issue, but an active influencer in American politics.
What is not as widely recognized is how dramatically the underlying technology enables bad actors to tamper with this election on an unprecedented scale: not in the successful hack attempts that litter the news, but right under our noses, in the everyday election discourse. Organized trolls have clearly taken to the web and social media to generate outsized Trump support, but I believe this kind of manipulation is happening at a much greater scale than people realize, through bots that account for a large share of the Internet's activity. Black-hat automation is skewing the national conversation.
After the first presidential debate, #TrumpWon was the top hashtag on Twitter. The result was quickly linked to gangs of pro-Trump trolls acting in an organized manner. Given the results of actual polls, few people believed the claim to be true or important, but it still shifted the conversation, and in true trollish fashion, it got people to legitimize the notion by raging against it. When the social web deems a topic important enough to trend, it diffuses into other media channels hastily.
Where attention and focus go, so does credibility. By arguing against the notion that "Trump won," analysts lend validity to the claim. Even if one wholeheartedly disagrees with that notion, the argument has now been framed as a dissenting viewpoint, which is exactly what its proponents want. Ignoring the notion would have made it all but disappear, but by allowing social media to deem the idea important, it becomes something that deserves to be talked about. Alongside the hashtag came an abundance of Internet polls following the debate that showed a clear Trump victory. Regardless of whether observers can distinguish these polls from legitimate ones, they serve the very effective purpose of garnering attention and distracting from actual arguments, evidence, and events.
Here is a tweet in which Donald Trump is purposefully deceptive, passing off the results of online polls as "final debate polls." This is totally unethical.
Such a great honor. Final debate polls are in - and the MOVEMENT wins! #AmericaFirst #MAGA #ImWithYou https://t.co/3KWOl2ibaW pic.twitter.com/sfCEE3I5pF
— Donald J. Trump (@realDonaldTrump) September 27, 2016
Trump sent similar tweets after the most recent debate.
Debate polls look great - thank you! #MAGA #AmericaFirst pic.twitter.com/4peQ3Sswdz
— Donald J. Trump (@realDonaldTrump) October 10, 2016
Despite winning the second debate in a landslide (every poll), it is hard to do well when Paul Ryan and others give zero support!
— Donald J. Trump (@realDonaldTrump) October 11, 2016
Social proof is a critical element in the human decision-making process. This notion has been tested and proven over and over again throughout the history of psychological experimentation. You have probably heard of the infamous Stanley Milgram experiments, wherein subjects allowed themselves to be coaxed into brutal acts at the instruction of a so-called authority figure. In addition to authoritative influence, psychological experiments bear out the notion that people strive to conform in any setting and put a lot of stock in what seems to be popular, even when it disagrees with their own logical conclusions. The Asch conformity experiments demonstrated this in a fascinating way: many participants were willing to agree with the majority opinion even when provided with clear evidence against it. Social validation is often more important than one's own judgment in determining which actions to undertake.
Internet discourse is important, even when it is sophomoric. Populations have long been at the mercy of propaganda that drives the dialogue. The distributed nature of the Internet means that propaganda now comes less from central governments and singular media sources, and more from semi-organized ideological hives on every side of every argument.
We can generally accept that Trump has, for a while, had a fanatical group of Internet supporters. But there is an expectation that all of this support reflects sweat-of-the-brow dedication from people who share Trump's ideology. Whether you agree with the ideology should not affect whether you see the scale of his support as an acceptable reality; that is the nature of democracy. But I believe the scale has been skewed in ways that are fundamentally disruptive to our ideas of democracy. Software distributed by relatively few people is behind a lot of this Internet activity. It is not a large, devoted group of Trump supporters declaring that #TrumpWon; it is a smaller, more fanatical group faking this activity. I am not in a privileged position to know exactly how much of the activity is humans clicking retweet buttons and how much is bots, but I am convinced that autonomous activity accounts for far more of the measurable interactions happening within this election cycle than humans do.
The existence of fake accounts for the purpose of generating that lucrative social proof is a well-known issue. It was shown beyond reasonable doubt that Mitt Romney's campaign was propped up by fake social media activity. In 2014, Instagram undertook efforts to purge its service of obviously fake accounts, and some celebrities were brutally affected. Justin Bieber lost 3,538,228 followers, Akon lost more than half of his followers, and some seemingly popular accounts were revealed to be almost entirely the product of fake followers. Instagram's own account lost 18,880,211 followers that day, which speaks to the perverse incentives at play for companies that benefit from perceived activity. Fiverr, a website dedicated to making it easy to buy services that cost about five dollars, has offerings where you can purchase tens of thousands of "real USA retweets" for five dollars. Can you picture how absurd the notion of purchasing 10,000 real retweets is? Even if a real human could make one retweet per second, it would take them almost three hours to perform this entirely implausible feat. I'd like to meet the person who has the stamina to make 10,000 retweets. Twitter itself at one point released a statement claiming that a whopping 8.5 percent of its user base consists of known bots, and its measuring tactics seemed conservative even to me. Twitter, a company that has had a hard time growing its active user base of late and may be sold to another tech giant any day, has a lot of reason to downplay the bot demographics for fear of its stock dropping even further than it has.
Malicious automation of the kind I described is as old as the web itself. The industry most dramatically affected by it is online advertising, where bots continue to make up a large share of traffic. Bob Hoffman, veteran ad executive, said to Bloomberg, “Nobody knows the exact number, but probably about 50 percent of what you're spending online is being stolen from you.” Like Instagram and the celebrities who benefited from perceived popularity boosts, the beneficiaries have incentives to deemphasize this behavior. Companies like Google, the world's leader in online advertising, do take measures to prevent fraud. They can never completely eliminate exploitation, but they can introduce barriers that make it more computationally expensive, and therefore not profitable. In politics, however, the typical economics of successful fraud are not in play in the same way. The success of fraudulent automation does not depend on the profitability of each unit of deceit. Success metrics are less clear, and the network of capable fraudsters sharing the same end goal is multiplied immensely.
I am not the first to discuss automation as it relates to this election, but most coverage has focused on unearthing obvious bot activity, and has not found critical amounts of it. I propose that bot activity in this election is better hidden than ever before due to increased technical sophistication, and that the studies cited are simply not capable of pinning down just how much of the activity is automated. The National Security Agency, and perhaps Google and Facebook, have the resources available to make a realistic guess at how deeply this election is being affected by automation.
I am not in political journalism; I am a software engineer with a lot of practical insight into this stuff. When I was younger, I took a few opportunities to influence online polls or vote-driven contests in ways that had real-world impacts. I do not wish to go into my own exploits, but I will assure you that they were ideologically tasteful and never for trolling or harassing purposes. They were impactful on a surprisingly large scale for the acts of a single teenager with no budget, though. I believe it is this same behavior, multiplied greatly by political incentive and advancements in enabling technology, that is responsible for so much of what we purport to be broader human activity. In my current role, as the CTO of a startup and the founder of a side business, The Practical Dev, I have taken a professional interest in some of these subjects. With my company Argo, I focus on the scalability of communication. We are interested in building technology that makes the relationship between automated communication and human-to-human communication more seamless and reliable, allowing fewer people to communicate with many for purposes such as message-driven service and commerce. With The Practical Dev, I have studied the minutiae of online communication in order to form a voice that reaches a diverse international audience of software developers with unifying content. This sort of one-to-many communication asymmetry fascinates me. All of that is to say that I believe my insight into the matter is grounded in relevant experience.
The discourse around this election is being affected by algorithms in ways that run far deeper than the kind that sort human chatter -- they are also producing the human chatter. With technologies like natural language processing and convolutional neural nets, what you might refer to as specialized artificial intelligence, bots capable of seamlessly interacting on the web in sophisticated ways are entirely plausible. Companies like Google, Microsoft, Amazon, and Apple are investing billions specifically in the technologies that make this possible. Other organizations, like OpenAI, are doing the same. And that is just in the United States, and just the efforts we are aware of. It stands to reason that our government is also undertaking these kinds of efforts, and organizations and governments in every country in the world are likely doing the same, to unknown degrees of advancement.
The advancement of these modern software ideas, with focused artificial intelligence capable of beating humans at complicated board games like Go, sets the stage for what is possible, but I believe it is light-years beyond what is needed to have meaningful effects on political dialogue. The technology needed to spread a message through the web requires nothing near this level of sophistication. Amplifying a message through Twitter does not exactly take a lot of conversational finesse. Human conversation is scattered and random enough that bots can seamlessly mimic this behavior and fade into the crowd. It is genuinely difficult, or impossible, to differentiate between a human who shouts random Trump support on the Internet and retweets all of his tweets and a bot that engages in the same behavior.
One interesting case that exemplifies the power a simple conversational bot can have is a troll-the-troll bot, developed to hold trollish conversations with trolls in order to waste their time and frustrate them. The bot, Arguetron, is triggered by the language of angry Twitter harassers and makes simple rebuttals such as "nope. wrong, wrong, wrong." It once held an argument that lasted almost ten consecutive hours. A sketch of the pattern follows.
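The pattern behind a bot like this is remarkably simple. Here is a minimal sketch of the idea; the trigger phrases, canned rebuttals, and the postReply helper are hypothetical stand-ins I made up for illustration, not Arguetron's actual implementation:

// A hedged sketch of a keyword-triggered reply bot, not Arguetron's real code.
const triggers = ["wrong", "fake", "rigged"]; // hypothetical bait phrases
const rebuttals = [
  "nope. wrong, wrong, wrong.",
  "that isn't true at all.",
  "you may want to double-check that."
];

function maybeReply(tweet) {
  const text = tweet.text.toLowerCase();
  // Reply only when the tweet contains one of the trigger phrases.
  if (triggers.some(t => text.includes(t))) {
    const reply = rebuttals[Math.floor(Math.random() * rebuttals.length)];
    postReply(tweet.id, reply); // hypothetical wrapper around a Twitter client
  }
}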
Scripting discourse
It does not take a complicated software architecture to spoof Internet activity. A simple script is all it takes to make an impact on an online community, in the form of a hashtag-laden tweet, a poll vote, or perhaps a Reddit comment or upvote. Consider the following script, which you could paste into your browser console to automatically navigate to a specific tweet, vote in a poll, and then like and retweet that poll. The script as written is not a working implementation -- the element IDs are made up, and the navigation would interrupt it -- but it is essentially all it takes to automate this behavior.
function voteInTwitterPoll() {
  // Navigate to the target tweet. In reality this reloads the page, which
  // would halt the script; a real implementation would run the clicks after
  // the new page loads (e.g. from a browser extension).
  window.location.href = "https://twitter.com/user/status/7830811217409";
  // The element IDs below are illustrative placeholders, not Twitter's markup.
  document.getElementById("poll_choice_1").click(); // choose a poll option
  document.getElementById("poll_submit").click();   // submit the vote
  document.getElementById("like_button").click();   // like the tweet
  document.getElementById("retweet_button").click();
  document.getElementById("retweet_confirmation_button").click();
}
This is a six-step process in which the browser is instructed to navigate to a particular tweet, vote in the poll, and then like and retweet it. Packaged as a browser extension, one could handily automate most of what a human can do in a browser. Any restrictions on this behavior could be worked around at a higher level with supporting software. If implemented carefully, the automation is all but impossible to detect.
The demonstrated script is the simplest version of this scenario. It is not particularly powerful in and of itself, but it is not far off. Imagine that instead of running the script once in your browser, you ran it over and over again all night long. You could easily vote in and retweet a poll 10,000 times. Since a single account can only take these actions once, you would need to add a step in which the script also creates a Twitter account. This would not be hard. Over and over again: create a Twitter account, navigate to the tweet, take these actions, repeat. But Twitter has anti-spam measures that defend against this sort of thing, so you add a bit of variety to your actions: perhaps a few random plausible tweets, a profile picture, and a few likes of other tweets before voting. This makes it hard for Twitter's measures to guard against your behavior, especially because Twitter is incentivized to remove as many barriers to a new user's first actions as possible. A sketch of such a loop appears below.
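To make that loop concrete, here is a hedged sketch of what the outer loop might look like. Every helper function in it (createAccount, postRandomTweet, likeRandomTweets, voteAndRetweet) is a made-up stand-in for the kind of browser automation shown earlier, not a real API:

// Hypothetical outer loop; all helpers are made-up stand-ins for browser
// automation like the script above.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function runCampaign(targetTweet, iterations) {
  for (let i = 0; i < iterations; i++) {
    const account = await createAccount();      // sign up a fresh account
    await postRandomTweet(account);             // look like a plausible user
    await likeRandomTweets(account, 3);         // organic-seeming activity
    await voteAndRetweet(account, targetTweet); // the action that matters
    // Randomized delay so the requests don't form an obvious pattern.
    await sleep(5000 + Math.random() * 60000);
  }
}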
Technology giants are, as you might expect, extremely sophisticated in their ability to detect automation, but it is a problem that runs deep because of user-experience expectations and conflicting business goals. Growth and profitability often butt heads with security and safety measures. The recent damning tale of how Yahoo neglected security while desperate to spur growth is a textbook example. It is far too easy to argue for the business's need to streamline processes instead of securing them. Forcing new users through a potentially frustrating process may be vetoed because they may never come back. In fact, Twitter takes some of the opposite measures, like making “time to first tweet” a key performance metric. Companies like Twitter and Reddit, whose cultural influence far outshines their monetization efforts, are particularly susceptible to making decisions that make it harder to fight malicious automation.
Twitter might still be able to weed out the spam by auditing the behavior it sees, discounting the obvious cases, and applying its own machine learning expertise to track patterns that indicate spam. It most certainly does this already, but it is a losing battle, because all a malicious actor needs to do is mimic the actions of a plausible user, which is not a high bar. This includes spoofing location information and creating accounts that engage in ongoing, varied activity: tweeting about once every couple of days, liking random tweets, making the occasional retweet and direct message. The activity humans engage in on Twitter and the rest of the web is not so sophisticated that it cannot be mimicked with relative ease. But this adds complexity: one could no longer plausibly send those 10,000 poll votes overnight as originally described, since a convincing account might make one effective vote every few days in order to throw off the trace. A sketch of such a long-lived persona follows.
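A long-lived "plausible user" can be little more than a loop of infrequent, randomized actions. Again, the helpers here (and sleep) are the same hypothetical stand-ins as in the previous sketch:

// Hypothetical long-lived persona: infrequent, randomized actions that
// resemble a casual user, with the payload action performed only rarely.
const HOURS = 60 * 60 * 1000;

async function runPersona(account, targetTweet) {
  while (true) {
    const roll = Math.random();
    if (roll < 0.5) {
      await likeRandomTweets(account, 1 + Math.floor(Math.random() * 3));
    } else if (roll < 0.8) {
      await postRandomTweet(account);             // plausible filler chatter
    } else {
      await voteAndRetweet(account, targetTweet); // the payload, done rarely
    }
    // Act roughly once every half day to two days.
    await sleep(12 * HOURS + Math.random() * 36 * HOURS);
  }
}

A single slow persona like this cannot swing anything on its own. This is where parallelism comes into play.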
Scaling up
The script can be run in parallel across different computing instances with relative ease. The script I demonstrated above ran directly in the browser on your computer. Actual execution of this tactic at scale would be done differently to ensure scalability, stealth, and reliability. It could be accomplished in parallel on one powerful machine, or distributed across several in coordination, either using hardware owned and operated by the user or via commercial cloud services such as Amazon Web Services, Google Cloud, or Microsoft Azure. Through cloud computing, these processes could be scaled massively at almost zero meaningful cost; whatever costs were incurred would be dwarfed by the social impact to be had. Anyone who can afford to buy groceries could afford to operate a powerful automated trolling engine in the cloud. Most services even offer a free tier that would provide more than enough compute capacity for the described activity, and because of the obvious necessity for privacy in their offerings, there would be no way for any of these services to legitimately recognize these uses; any attempt to police this behavior would be futile. A sketch of the fan-out follows.
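As a minimal sketch of the fan-out, here is how one might launch a swarm of bot processes from a single Node.js controller. The bot.js file is hypothetical, standing in for logic like the loops sketched above, and the same pattern extends to spinning up cloud instances instead of local processes:

// Hypothetical controller: fan the work out across many child processes.
// "bot.js" stands in for a file containing loops like the ones above.
const { fork } = require("child_process");

const WORKERS = 50;

for (let i = 0; i < WORKERS; i++) {
  const worker = fork("./bot.js", [`--worker-id=${i}`]);
  // Restart any worker that exits, so the swarm is self-repairing.
  worker.on("exit", () => fork("./bot.js", [`--worker-id=${i}`]));
}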
The scalability and reliability of the systems built to fake online behavior are ensured by knowledge shared among the people involved, whether they are acting in an organized fashion or simply in communication as like-minded individuals. Any defenses will be quickly worked around, and most attempts to take on these groups fail in the face of the self-repairing, distributed nature of these networks.
The fact that this can all be executed with a reasonably straightforward script is significant in terms of the coder's ability to stay ahead of the game. Any time an issue arises, a simple modification is made and on we plow. Any reasonably proficient software developer can encapsulate the complicated behavior and keep the core application logic nimble and ever-changeable. This is also what allows one to leverage sophisticated machine learning software to build sophistication into the application without having to understand the underlying technology. Everything needed is freely available in open source form, and integration is rarely more than a few copy-and-pastes away. The term "script kiddie" refers to unsophisticated juveniles leveraging the work of others without truly understanding how it works. The perpetrators of this sort of activity cover the whole spectrum, from script kiddie up to experienced software developer.
The technology companies I have mentioned are aware of this behavior. They are not inherently at fault, as this sort of behavior is inevitable as long as there is motivation for it, but they do have a certain incentive not to raise awareness of it, for fear of general backlash and the anti-technology sentiment that could result. There is so much potential benefit from this technology, and so much of the technical discourse is so brutally uninformed and sophomoric (Donald Trump's “the cyber” rant, for example), that there is an impetus not to call attention to the havoc network technology is wreaking on this political process. The unlikelihood of sober discussion means almost no discussion, and it is hard to say which outcome is worse. Rarely are concerns raised without fear mongering in tow. Both sides of the political spectrum employ fear mongering when it benefits them, and that is an unfortunate human side of this equation.
People have a hard time conceptualizing the scale of computer activity and the magnitude of the global network of connected machines. It dwarfs human-centric units of measurement many times over. Under conservative estimates, Facebook's data warehouse stores north of 300 petabytes. That is tens of millions of bytes of data for every single person on the planet (the arithmetic appears below), and the number is growing quickly. It takes a handful of bytes to store a character, and a handful of characters to store a string, such as your name. And the amount of data passed around in the world's computation far exceeds what is stored. The numbers at play in a social media interaction, an online poll, or even the election itself are so minuscule in comparison with the world's computational capacity that it is hard to conceive of a reality where the forces at play cannot manipulate human-scale discourse with astonishing ease. It only takes a few thousand automated retweets to elevate a subject from unimportant chatter to national phenomenon. Social proof is a demonstrably powerful thing, and it only needs to achieve people-scale grandeur. Compared to computer scale, the human population is a rounding error.
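The back-of-the-envelope arithmetic, using the 300-petabyte figure above and a world population of roughly 7.4 billion:

// 300 petabytes spread across the world's population (~7.4 billion in 2016).
const warehouseBytes = 300e15;
const worldPopulation = 7.4e9;
console.log(warehouseBytes / worldPopulation); // ~40,540,540 bytes, about 40 MB per person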
But who?
I hope you are fairly convinced of the plausibility that much of the activity across the web is automated in order to start conversations and affect the political process, but I have not yet discussed who is behind this behavior. I am not as sure of who is behind it as I am that it is going on, but I can speculate that many groups are doing this sort of thing, with varying motivations.
Automation is employed all over the web for all sorts of purposes, and because one human action can legitimately trigger lots of automated activity, it is hard to even distinguish human activity from bot activity. But the topic of my observation is this election and the manufactured support for Donald Trump. It is worth considering who could be behind deceptive tactics designed to raise the man's profile, perceived influence, and chances of becoming president of the United States. It really comes down to a combination of three factors: means, motivation, and potential consequences. This combination is hard to come by, which is why everyday activities on the web are less likely to fall prey to these sorts of schemes. As with other complicated unethical behavior, there are a lot of reasons the majority of people do not take it on. If they did, things would look a lot worse than they do.
Donald Trump and his staff
This is a group with obvious motivation to pull off these sorts of tactics. If they succeed, their man becomes president, and that is a good thing in their world. But this answer is highly unlikely to meet the other requirements. Trump has shown no indication that he has the capacity to organize these sophisticated technical dealings, or the aptitude to surround himself with those who do. So unless he is playing the longest con in the history of the world, I don't think Trump is one with “the cyber.” There are also far too many consequences if he were to get caught. So while Trump seems unaffected by the consequences of most of his unethical behavior, this kind does not seem to align with his interests.
The one thing that complicates the argument for the unlikelihood of this scenario is Trump's ego and temperament. He obsesses over popularity indicators such as TV ratings and has demonstrated an irrational penchant for risk. Only someone acting irrationally would put himself on the line this way given the consequences, but a lot of the man's other behavior can be deemed uniquely ego-driven and emotional in nature.
Nation-state sponsorship, such as Russia
We know a lot about Russia's involvement in pro-Trump trolling. Vladimir Putin has the means to deploy non-automated trolling at great scale, and the country is certainly engaging in computer-enabled attacks on the US in some capacity, so we can be fairly sure that Russia has some hand in automating this sort of behavior as well. But I think of nation-state involvement as being somewhat inhibited by the negative consequences of being implicated at scale. That is to say, Russia is clearly involved, but I think we put too much attention on them alone. Because their activity is centralized, distribution and diversity are advantages this group cannot benefit from. This sort of activity is best undertaken by less centralized groups that cannot be tracked or taken offline easily. Russia is more likely engaged in cooperative, focused efforts, as opposed to generalized discourse automation at scale.
Distributed terrorist groups such as ISIS
ISIS has a lot to gain from the goings-on in this election. An unstable group egged on with anti-Islam rhetoric is exactly what these organizations need to raise their profile and keep up the narrative of the Western enemy that is the United States. And unlike nations like Russia, they have little to lose in taking these actions. As a distributed group that essentially exists as a scattered assortment of nodes in a self-repairing network, they face few consequences for missteps and have a lot to gain.
I am skeptical of their means of pulling some of this off, though, for a few reasons. Because of the distributed nature of their organization, it is easy for them to communicate about matters of tacit knowledge and mutual understanding, grounded in bastardized interpretations of the religion and in the consequences of actions taken against their people. A coordinated attack on the narrative of American political culture, however, seems out of reach for these sorts of groups. I cannot conceive of this sort of organization pulling off the mass education needed for a clandestine infiltration of American discourse.
Domestic unorganized and semi-organized groups
I see this as the group most likely to be responsible for the bulk of the autonomous pro-Trump movement in this election. As outlined above, the technical capability to pull off these sorts of stunts has a fairly low bar. While the average American is not technically sophisticated enough to be the likely perpetrator of this kind of behavior, there are millions who are, and many more have the capacity to learn the tactics, even if they are just following dumbed-down directives.
There is a notable history of light-hearted poll manipulation in Western popular culture. During his run as the satirical host of The Colbert Report, Stephen Colbert made a habit of siccing his fanbase on online polls. You might have heard the story of the British vessel Boaty McBoatface, where an online poll to name the ship ended in hilarity. There was the time Shea Stadium was Rickrolled on account of a vote gone awry. There are dozens more internationally famous events like these, but the most notable one, for the purposes of our story, is the time “moot” was voted number one in the Time 100 online poll. Moot (typically written in all lowercase) is the screen name of 4chan creator Christopher Poole, and 4chan is responsible for most of these incidents on the web. By distributing software, likely a simple automation script, throughout the community, the site's users were able to manipulate the poll with such ease that, in addition to picking moot as the winner, the entire poll was manipulated to read out “Marblecake,” another 4chan inside reference with a few meanings.
4chan boasts millions of visitors every month, but that is not even strictly necessary given the scaling ability a single coder wields. One person, a script, and a few cloud computing credits could easily pull off any number of these poll manipulations given a few hours or days to commit to the task. But for reasons beyond the purely technical, it helps to have a community. The community acts as a place to share ideas, alert other developers to potential exploits, and generally encourage one another to devote their time to such tasks.
The story of 4chan and its user base is one of depth and complexity. Its influence on American culture by way of Internet culture is greatly underestimated. 4chan is the birthplace of meme culture, the Anonymous hacker group, and spinoffs like LulzSec. It is the principal congregation point of the “alt-right.” The relationship between the community and the movement is complicated, but they are most certainly intertwined.
It has been well established that 4chan's alt-right troll army has been deployed to wreak havoc on this election. The motivation behind this community's actions is obviously strong, but fairly complicated. While 4chan and related communities have always been an outlet for racism, the community long branded itself as doing it “for the lulz.” That is to say, the widespread use of its most inflammatory slurs was deemed to carry some amount of self-aware irony. This runs counter to the very serious implications of Trump's overtly xenophobic and barely more subtle racist message. 4chan's political bent once seemed quite a bit more liberal, but politics always took a backseat to general disruption of the establishment. It is this logic that allows Donald Trump and Bernie Sanders to both be popular there even though they occupy totally different ideological positions: they both represent a wrench in the engine that this group perceives to have disenfranchised them. But it is Trump who has presented, like never before, the opportunity for this community to elect a clown and feel powerful in doing so. This is the motivation: a distributed ego trip from a collective of outcasts describing it as for the “lulz.”
4chan's motivations alone are not enough; we also need to establish the means to organize and maintain a collective mission of some kind. Trump's entire candidacy has seemed like a “Twilight Zone” episode to most Americans. We have come to accept some of this reality, but it really makes little sense. His rise is such a radical departure from expectations that the whole thing seems like somebody's messed-up game. I think, in many ways, it is. 4chan and its general culture, which spreads to other parts of the Internet, notably Reddit and, more so now than ever, Twitter, is a non-organized hive, but a powerful one. Still, it is unlikely a group like this could get serious about these objectives without some amount of centralized power backing it. Enter Palmer Luckey.
Luckey was recently outed as a financial backer of parts of the “alt-right” shitposting movement. He is the founder of Oculus VR, which sold to Facebook for $2 billion and landed him a windfall of about $700 million. That can buy a lot of shitposting. Peter Thiel showed America what happens when you mix radical ideology with heaps of money. Palmer Luckey is the last piece of the puzzle for this domestic, technology-driven group. He provides the opportunity for members to genuinely devote their time to strategizing and executing on their power. A lot of man-hours go into any sophisticated technical undertaking, and Luckey's funding makes those hours much easier to come by.
Palmer Luckey, by casually funding actions that are perhaps not illegal or treasonous in and of themselves, and by not organizing the undertakings himself, largely insulates himself from the personal consequences of these actions.
With established means, motivation, and relief from consequences, it is this technically inclined alternative domestic group that is most likely to engage in this sort of discourse-tilting automation. And this is what they are doing.
A new world
The social issues that underpin any election are still at play, but they are manifesting in different and dramatic ways. With the unprecedented distribution of “power” by way of network technology, this election, and all future events of national influence, must be examined through the lens of the potential for mass manipulation of the kind at play here. The reason this election is being hit harder than other events is that, for most other events, the motivation is not quite as strong; as this technology advances, the bar of motivation needed will drop.
We are in a uniquely experimental phase in the history of these technologies. It is not enough for governments and organizations in the know to deal with this in secrecy. The Edward Snowden NSA leak and all the revelations that have followed, notably the recent Yahoo email-spying story, speak to that notion. We need transparency and open, centralized discussion about the technology that is shaping our lives. The concerns should be spoken to, but technology also needs to be presented in an optimistic light for this country's well-being, as sensationalization and fear mongering for political gain will only hurt us as we strive to stay ahead of international rivals.
Notably absent from this article is any discussion of how Hillary Clinton may be using and benefiting from this kind of behavior. I am supporting Clinton in this election, which biases me, but I truly think the story is on Trump's side in terms of the scale and magnitude of this behavior. Trump benefits uniquely from this radical, unfocused disruption. Technology has been a huge part of the Clinton campaign, for better or worse, but I don't think the motivations outlined here apply in the same way to the powerful Clinton campaign.
It appears the Trump campaign is fizzling out despite all of this, but the story is not about Trump, or Luckey, or 4chan, or Putin, or ISIS. The discussion is about the scale and power of this kind of automation and the impact it can have on a country's dialogue. These issues are not in the public consciousness in any grounded capacity. Without a collective understanding of the impact of bots, our model of the world is skewed away from reality.