Twelve ways to spot a bot

Bots can seriously distort online debates, especially when they operate in concert. They can be used to make a statement or a hashtag trend, as @DFRLab has shown before; they can be used to amplify a message or an article, or to attack one; and they can be used to harass other users.

At the same time, it is quite easy to identify bots and so-called botnets without access to specialized programs or commercial analysis tools. This article highlights the twelve methods we find most useful for uncovering fake accounts.

The basic principle

Put simply, a Twitter bot is an account controlled by a program, much like an aircraft flown on autopilot. And just as autopilot can be switched on and off, an account can behave like a bot at some times and like a human at others. The signs described below should therefore be read as indicators of bot-like behavior at a particular time, not as an absolute verdict on whether an account is a bot or is run by a human.

Not all bots have a malicious or political purpose. Automated accounts can, for example, publish poetry, photographs or news updates without any intent to influence anyone.

Our focus here is therefore on bots that pose as humans and amplify political messages.

In any case, it is important to note that no single factor is enough to establish bot-like behavior; it is the combination of factors that counts. In our experience, the most important factors for identifying bot accounts are the "three A's": activity, anonymity and amplification.

1. Activity

The most obvious indicator of an automated account is its activity. From the profile page, the number of posts and the number of days since the account was created, the average activity per day is easy to calculate. To find the exact date of account creation, simply hover the mouse over the "Joined ..." note on the Twitter profile.

The benchmark for suspicious activity varies. The Computational Propaganda project at the Oxford Internet Institute treats an average of more than 50 posts per day as suspicious; this is a widely recognized and applied benchmark, though it may be on the low side.

The @DFRLab regards 72 tweets a day as suspicious: that is one tweet every ten minutes for twelve hours straight. More than 144 tweets a day it regards as highly suspicious.

For example, the account @sunneversets100, an amplifier of pro-Kremlin messages, was created on November 14, 2016. On August 28, 2017, it was 288 days old. In that period it had posted 203,197 tweets. Again, hover the mouse over "Tweets" to see the exact number.

That works out to 705 tweets per day, or just under one tweet a minute for twelve hours a day, every day for nine months. That is not a human pattern of behavior.
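
The arithmetic is simple enough to script. Below is a minimal sketch of the activity check, using the @sunneversets100 figures quoted above and the benchmarks cited by the Oxford Internet Institute and the @DFRLab:

```python
# Activity check: average tweets per day against the benchmarks above.
total_tweets = 203_197   # tweets posted by @sunneversets100
age_days = 288           # account age on August 28, 2017, as given above

tweets_per_day = total_tweets / age_days  # ~705

# Benchmarks quoted in the article: >50/day (Oxford Internet Institute)
# is suspicious; the @DFRLab treats >72/day as suspicious and >144/day
# as highly suspicious.
for label, threshold in [("suspicious (OII)", 50),
                         ("suspicious (DFRLab)", 72),
                         ("highly suspicious (DFRLab)", 144)]:
    if tweets_per_day > threshold:
        print(f"{tweets_per_day:.0f} tweets/day exceeds {threshold}/day: {label}")
```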

2. Anonymity

The second indicator of our "three A's" is the degree of anonymity an account displays. The less personal information an account provides, the more likely it is to be a bot. For example, @sunneversets100 uses a picture of a Florentine cathedral as its profile picture, an incomplete population chart as its background image, and an anonymous handle alongside an equally anonymous display name. The only distinguishing detail is a link to a political action committee based in the United States; that is nowhere near enough to identify the person behind the account.

Another example is @BlackManTrump, another account with a very high level of activity: it posted 89,944 tweets between August 28, 2016 and December 19, 2016 (see archive). That averages out at 789 tweets per day.

This account discloses no personal information at all. The profile picture and background image are generic, the location "USA" is vague, and the bio is purely political. Nothing in these generic details points to the person behind the profile.

3. Amplification

The third key indicator is the amplification of messages. A core role of bots is to boost the reach of other users' posts, whether through retweets, likes or quotes. The timeline of a typical bot therefore consists largely of a parade of retweets and word-for-word quotes of news headlines, with original posts appearing only sporadically or not at all.

The most reliable way to detect this pattern is machine analysis of a large number of posts. With a trained eye, however, bots can also be identified manually, for example by clicking the "Tweets & replies" tab of an account and reading its last 200 posts. The number 200 is a guideline intended to give researchers a sample that is both meaningful and manageable; more trained eyes can of course process larger samples.
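
As a minimal sketch of that sample check, assuming the 200 posts have been exported in the Twitter API v1.1 format (an assumption of this example), where native retweets carry a retweeted_status field:

```python
def retweet_share(tweets):
    """Fraction of a tweet sample that consists of retweets.

    `tweets` is a list of dicts in Twitter API v1.1 format: native
    retweets carry a 'retweeted_status' key, while older manual-style
    retweets start their text with the telltale 'RT @' prefix.
    """
    def is_retweet(tweet):
        return ("retweeted_status" in tweet
                or tweet.get("text", "").startswith("RT @"))

    return sum(1 for t in tweets if is_retweet(t)) / len(tweets)
```

A share close to 1.0 across 200 posts is the "parade of retweets" described above.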

For example, 195 of the 200 tweets posted by @sunneversets100 up to August 28 were retweets, many of them originally tweeted by the Kremlin-funded outlets RT and Sputnik.

A further degree of sophistication is visible in the change in @BlackManTrump's posting pattern around November 14, 2016. Up to that date, all its retweets carried the prefix "RT @", long a telltale marker of bots; after November 14 that prefix no longer appeared.

Applying the three factors to @BlackManTrump and @sunneversets100, both of which clearly show bot-like behavior, these profiles can be classed as bots.

It is noteworthy that @BlackManTrump posted nothing from November 14 to December 13, 2016; when the profile resumed posting, it did so at a much lower frequency and with a higher share of apparently original tweets. It is therefore accurate to say that the account showed bot-like behavior until mid-November, but not that it is currently a bot.

Another type of amplification is to program a bot to share the latest headlines from selected sites directly and without comment. Direct retweets are of course a normal part of Twitter traffic (readers are welcome to share this article), and in themselves they are not conspicuous. Still, accounts that share nothing but uncommented posts over a long period are probably bots, such as this account targeting US President Donald Trump, which was identified as a bot in July:

4. Low activity / high engagement

The bots described above achieve their effect by massively amplifying the content of a single account. Another way to achieve the same effect is to create a large number of accounts that each retweet the same content once: a botnet.

Such botnets can be identified fairly quickly when they are used to amplify a single post from a profile that is otherwise rarely active.

For example, an account named @KirstenKellog_ (since suspended, but archived here) posted a tweet attacking the investigative journalism outlet ProPublica.

As the image above shows, this was a profile with very little activity. It had posted only twelve times in total, and eleven of those tweets had been deleted. It had 76 followers and followed no one.

Nevertheless, this tweet was retweeted and liked more than 23,000 times.

In a similar way, another, apparently Russian, profile carried out an almost identical attack, which gathered over 12,000 retweets and likes:

The follow-up attack, archived on August 25, 2017. By August 28 it had received more than 20,000 retweets and likes. This account, too, is barely active: so far it has posted six tweets, the first on August 25, and follows five profiles:

The profile of the second attacking account. It is not plausible that two accounts with so little activity could generate so many retweets, even allowing for the hashtags they used, #FakeNews and #HateGroup. The discrepancy between their activity and their impact suggests that the profiles amplifying the tweets of these two accounts belonged to a botnet.
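
That discrepancy between activity and impact can be expressed as a simple ratio. Here is a sketch using the figures from the two attacks above (any threshold you apply to the result is a judgment call, not a value from the article):

```python
def engagement_ratio(engagements_received, tweets_posted):
    """Engagements (retweets + likes) per tweet the account has posted.

    A barely active account whose lone tweet gathers tens of thousands
    of retweets points to botnet amplification.
    """
    return engagements_received / max(tweets_posted, 1)

# Figures from the examples above.
print(engagement_ratio(23_000, 12))  # @KirstenKellog_: ~1,900 per tweet
print(engagement_ratio(20_000, 6))   # the second attacker: ~3,300 per tweet
```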

5. Common content

The likelihood that accounts belong to a single network can be confirmed by looking at their tweets. If they all post the same content, or the same type of content, at the same time, it stands to reason that they were programmed to do so.

Within the suspected botnet that amplified @KirstenKellog_'s tweet, for example, many accounts posted identical messages, such as this one:

Identical retweets from "Gail Harrison", "voub19" and "Jabari Washington", who also amplified @KirstenKellog_.

Sometimes bots share whole series of tweets in the same order. The three accounts below are part of the same anti-Trump network identified in July:

Left to right: identical tweets in the same order, posted on July 26 by @CouldBeDelusion, @ProletStrivings and @FillingDCSwamp. Note also how each post simply headlines the article it shares.

On August 28, these three profiles again shared tweets in the same order; @ProletStrivings added one extra retweet:

Left to right: screenshots of the @CouldBeDelusion, @FillingDCSwamp and @ProletStrivings profiles, showing the same sequence of shared tweets. Note the text "Check out this link" in each first tweet, presumably the marker of another automatically shared post. Screenshots and archives taken on August 28.

Such identical series of tweets are classic signs of automation.
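
A simple way to test for this pattern is to compare the timelines of suspect accounts as ordered sequences. A sketch, assuming each timeline is available as a list of tweet texts, newest first:

```python
def share_a_sequence(timeline_a, timeline_b, min_run=3):
    """True if two timelines contain an identical run of at least
    `min_run` consecutive tweets in the same order."""
    a, b = list(timeline_a), list(timeline_b)
    for i in range(len(a) - min_run + 1):
        window = a[i:i + min_run]
        for j in range(len(b) - min_run + 1):
            if b[j:j + min_run] == window:
                return True
    return False
```

Running this pairwise across a group of suspect accounts surfaces exactly the kind of identical runs shown in the screenshots above.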

6. The secret society of silhouettes

The simplest bots are particularly easy to spot because their creators have not bothered to give them a profile photo. Such profiles used to be called "eggs", from the time when Twitter showed an egg as the default profile picture. Today the default picture is a human silhouette.

Some users keep the silhouette as a profile picture for entirely innocent reasons, so on its own this is not an indicator of a bot account. That changes, however, when a whole series of the accounts retweeting and liking a post look like this:

... or when a profile's followers give the impression of forming a chapter of the "secret society of silhouettes" ...

... then this is a strong sign of bot activity.

7. Stolen or shared profile photo

Other bot makers are more careful and mask their anonymity by using profile photos taken from other sources. A good way to check an account's profile picture for authenticity is a reverse image search. In Google Chrome, simply right-click on the image and select "Search Google for image".

Searching Google for the profile photo of "Shelly Wilson", a likely bot.

In other browsers, right-click on the image and select "Copy image address", paste the link into the Google search, press "Enter" and then click the "Images" tab.

In both cases, the search returns pages with matching images, which indicate whether the profile picture was likely stolen for a bot account:

In the case of "Shelly Wilson", a number of Twitter accounts share the same profile picture, which suggests that these accounts are bots:
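
Checking a batch of downloaded profile pictures for shared photos can also be automated. A sketch using the third-party Pillow and ImageHash packages (an assumption of this example, not tools named in the article); perceptual hashes match visually identical images even after resizing or recompression:

```python
# pip install Pillow ImageHash
from PIL import Image
import imagehash

def same_photo(path_a, path_b, max_distance=5):
    """Compare two downloaded profile pictures by perceptual hash.

    A small Hamming distance between the hashes means the pictures are
    visually the same even if they were resized or recompressed.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance
```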

8. Bot in the name?

Another clue to a probable bot is its handle (the username, which on Twitter begins with "@"). Many bots have handles that are mere alphanumeric strings generated by an algorithm, such as these:

Other bots have a human-looking display name that does not match the handle:


Left to right: "Sherilyn Matthews", "Abigayle Simmons" and "Dorothy Potter", whose handles are NicoleMcdonal, Monique Grieze and Marina.

There are also bots with a typically male display name but a profile picture showing a woman (a mismatch that seems to occur far more often with bots than the reverse, a feminine-sounding handle with a man in the profile picture) ...

Three more accounts from the same botnet: "Todd Leal", "James Reese" and "Tom Mondy", archived on August 24 and 28, 2017.

... or masculine-sounding handles, with female names and profile pictures of women ...

From left to right: "Irma Nicholson", "Poppy Townsend" and "Mary Shaw", whose handles point to the male names David Nguyen, Adrian Ramirez and Adam Garner.

... or something completely different.

Erik Young, a woman who loves Jesus, from the same botnet.

In each case, the mismatch indicates that the account is a fake, impersonating a person (often a young woman) to attract attention. Whether it is a bot, or some other kind of fake, depends on its behavior.
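
Both naming red flags, algorithmic-looking handles and a handle that does not match the display name, can be screened for with rough heuristics. A sketch (the patterns and thresholds are illustrative guesses, not rules from the article):

```python
import re

def naming_flags(handle, display_name):
    """Return the naming red flags raised by an account."""
    flags = []
    # Handles ending in long digit runs, or long letter-digit jumbles,
    # often come from an algorithm rather than a person.
    if re.search(r"\d{5,}$", handle) or re.fullmatch(r"[a-z0-9]{12,}", handle):
        flags.append("algorithmic-looking handle")
    # A display name sharing no words with the handle is another
    # warning sign ("Sherilyn Matthews" vs. "NicoleMcdonal").
    name_words = re.findall(r"[a-z]{3,}", display_name.lower())
    if name_words and not any(w in handle.lower() for w in name_words):
        flags.append("display name does not match handle")
    return flags

print(naming_flags("NicoleMcdonal", "Sherilyn Matthews"))
```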

9. The Twitter of Babel

Some bots are political and consistently push a single position. Others are commercial and appear to have been hired out to the highest bidder, regardless of what already fills the bot's timeline. Most such accounts are apolitical, but they can also be used to amplify political messages.

Such botnets are often marked by striking linguistic diversity. A look at the retweets of "Erik Young", the "woman who loves Jesus", for example, shows that this account shares content in Arabic, English, Spanish and French:

A similar look at the content of the anonymous, pictureless account @multimauistvols (display name "juli komm") shows tweets in English ...

… Spanish…

... Arabic ...

... Swahili (according to the translation service Google Translate) ...

... Indonesian ...

... Chinese ...

… Russian…

... and in Japanese.

In real life, anyone who speaks all of these languages probably has better things to do than promote YouTube videos.
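
Counting the languages on a timeline is easy to automate with a language detector. A sketch using the third-party langdetect package (one detector among many, assumed here for illustration):

```python
# pip install langdetect
from collections import Counter
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def language_mix(tweet_texts):
    """Count the languages detected across a list of tweet texts."""
    counts = Counter()
    for text in tweet_texts:
        try:
            counts[detect(text)] += 1
        except LangDetectException:
            pass  # too short or ambiguous to classify
    return counts

# A human timeline usually concentrates on one or two languages; eight
# or more, as with @multimauistvols, suggests a hired botnet.
```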

10. Commercial content

Indeed, advertising is a classic hallmark of botnets. As noted above, such botnets seem to exist primarily for commercial purposes and are used for political messaging only occasionally. When they are used that way, their earlier promotional output often exposes them as bots.

A good example is the curious botnet that retweeted a political message from @every1bets, an account normally used to advertise gambling.

The retweeting accounts had a wide variety of identities, as the following list shows:

What these accounts had in common was the high number of advertising tweets.

Profiles that show a high number of retweets of this kind, especially in several languages, are most likely part of a commercial botnet hired to promote and amplify its customers' messages.

11. Automation programs

Another clue to the likely automation of an account is its use of URL shorteners ("URL" stands for Uniform Resource Locator, a web address). These services are primarily used to track how often a link is clicked, but the frequency with which they appear can be an indicator of automation.

For example, the account "Angee Dixson", identified as a bot, used the face of German model Lorena Rae as its profile picture and shared a large number of right-wing populist messages. Every one of these tweets used links created with the URL shortener ift.tt:

ift.tt is the shortener of the automation service IFTTT (ifttt.com), which lets users automate their tweets according to a range of criteria, such as retweeting every tweet that contains a particular hashtag. A timeline full of ift.tt links is therefore likely to belong to a bot account.

The use of other URL shorteners can also indicate automation when they appear repeatedly on a timeline. The shortener ow.ly, for example, belongs to the social media management platform Hootsuite, and some bot accounts have posted long series of tweets created via ow.ly. Twitter's own TweetDeck lets users post through a variety of shorteners, including bit.ly and tinyurl.com.

Here, too, using such services is a normal part of online life, but accounts that use them excessively should be checked for possible bot status.
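
A sketch of that check, assuming the expanded URLs of each tweet are available as lists of strings:

```python
from collections import Counter
from urllib.parse import urlparse

# Shorteners discussed above; ift.tt and ow.ly are tied to the
# automation tools IFTTT and Hootsuite respectively.
SHORTENERS = {"ift.tt", "ow.ly", "bit.ly", "tinyurl.com"}

def shortener_share(tweets_urls):
    """Fraction of tweets linking via a known shortener, plus counts.

    `tweets_urls` is one list of expanded URLs per tweet.
    """
    hits = Counter()
    shortened = 0
    for urls in tweets_urls:
        used = {urlparse(u).netloc.lower() for u in urls} & SHORTENERS
        if used:
            shortened += 1
            hits.update(used)
    return shortened / max(len(tweets_urls), 1), hits
```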

12. Retweets and Likes

A final clue to the work of a botnet comes from comparing the retweets and likes of a single post. Some bots are programmed to both retweet and like the same tweet; in such cases the retweet and like counts are nearly identical, and the lists of accounts behind them match as well. Consider the following example:

The second account's attack on ProPublica, with retweets on the left and likes on the right.

In this example, the difference between the number of retweets and the number of likes is just eleven responses, a gap of less than 0.1 percent. The very same accounts retweeted and liked the tweet, in the same order and at the same time. With a sample of 13,000 profiles, that is very unlikely to be a coincidence; it points to a coordinated botnet programmed to like and retweet the same attack.
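
The parity test itself is one line of arithmetic. A sketch using the figures from the example above (the like count of 13,011 is inferred from the stated gap of eleven, an assumption for illustration):

```python
def engagement_parity(retweets, likes):
    """Relative gap between a tweet's retweet and like counts.

    Bots programmed to retweet *and* like the same tweet leave the two
    counts nearly identical; humans rarely act in such lockstep.
    """
    return abs(retweets - likes) / max(retweets, likes)

# Figures from the ProPublica example: a gap of eleven responses.
print(f"{engagement_parity(13_000, 13_011):.4%}")  # under 0.1 percent
```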

Summary

Bots are an integral part of life on Twitter. Many are perfectly legitimate; those that are not tend to share core characteristics.

The most common of these characteristics are activity, anonymity and amplification, the "three A's", but there are others. Stolen profile pictures, alphanumeric handles and mismatched names can expose a fake account, as can an excess of commercial tweets or a profusion of languages.

The most important factor in spotting bots, however, is attention. Users who can identify bots for themselves are less susceptible to attempts at manipulation, and they may be able to report botnets and help shut them down. Bots exist, after all, to influence people. The aim of this article is to help human users recognize the signs.


Ben Nimmo is a Senior Fellow for Information Defense at the Atlantic Council's Digital Forensic Research Lab (@DFRLab). His article was translated by Sarina Balkhausen, fellow at #Wahlcheck17, a pop-up newsroom covering the German federal election and an initiative of CORRECTIV, First Draft, Google News Lab and Facebook. Neither Ben nor Sarina are bots.

We publish this article with the kind permission of Medium.