Article

Quantifiable Interactivity of Malicious URLs and the Social Media Ecosystem

Chun-Ming Lai 1,*, Hung-Jr Shiu 1 and Jon Chapman 2
1
Department of Computer Science, Tunghai University, Xitun District, Taichung 407224, Taiwan
2
Department of Computer Science, University of California, Davis, One Shields Ave., Davis, CA 95616, USA
*
Author to whom correspondence should be addressed.
Electronics 2020, 9(12), 2020; https://doi.org/10.3390/electronics9122020
Submission received: 19 October 2020 / Revised: 11 November 2020 / Accepted: 23 November 2020 / Published: 30 November 2020
(This article belongs to the Special Issue New Challenges on Cyber Threat Intelligence)

Abstract

Online social network (OSN) users are increasingly interacting with each other via articles, comments, and responses. When access control mechanisms are weak or absent, OSNs are perceived by attackers as rich environments for influencing public opinion via fake news posts or influencing commercial transactions via practices such as phishing. This has led to a body of research looking at potential ways to predict OSN user behavior using social science concepts such as conformity and the bandwagon effect. In this paper, we address the question of how social recommendation systems affect the occurrence of malicious URLs on Facebook, based on the assumption that recommendation systems make no distinction between legitimate and harmful information when delivering it to users. Next, we use temporal features to build a prediction framework with >75% accuracy to predict increases in certain user group behaviors. Our effort involves the demarcation of URL classes, from malicious URLs viewed as causing significant damage to annoying spam messages and advertisements. We offer this analysis to better understand OSN users' reactions to various categories of malicious URLs in order to mitigate their effects.

1. Introduction

The attack vectors that users of Online Social Networks (OSNs) face have been evolving as various bad actors learn to manipulate this new aspect of the cyber landscape. One of these newly evolving attack vectors is the news creation and dissemination cycle. Traditionally, the structure for media dissemination was a top-down arrangement; news was typically published by well-trained reporters and edited by a skilled team. In this fashion, these professionals acted as gatekeepers of sorts, ensuring higher journalistic integrity and correctness of the news. In contrast, news is now being created by users for consumption and spread on OSNs. This opens the door for the news generation process, and the associated URLs, to be used as nascent attack vectors.
With the modern structuring of this information creation and consumption process, social recommendation systems play an increasingly critical role as they determine which users will see exactly what information based on the characteristics of the individual users as well as specific features from the articles. Unfortunately, inappropriate information will diffuse on user-generated content platforms much more readily than traditional media, attributable to two primary factors: (1) The connectivity of social media makes information diffusion deeper and wider. (2) Users may wittingly or unwittingly boost inappropriate dissemination cascades with their own comments on the articles.
The major research efforts in the area of OSN security are concerned with distinguishing malicious accounts from normal user accounts. However, attackers can exploit either newly created fake accounts or existing latent compromised accounts to evade state-of-the-art defense schemes, since most such schemes are based on verified attacker behavior and trained by machine learning algorithms on either lexical features or account characteristics. Relatively less work has been done to measure and consider the influence of the actual malicious content. Therefore, our motivation is based on two research questions: (RQ1) Do discussion threads containing clearly malicious content have a larger cascade size than discussion threads that do not? (RQ2) Within a discussion thread, is there a significant difference in influence between the comments preceding and following a malicious comment? Specifically, we ask whether audiences change their behavior when they see a malicious comment that is being promoted by a social recommendation system.
In this paper, we design two experiments to answer the above questions. To compare cascade size between target and non-target post threads, we run a bandwagon effect experiment. The findings indicate that both basically follow the same cascade model; however, their final cascade sizes are extremely different for at least two reasons: (1) users' reactions; and (2) social recommendation design. Afterwards, we turn our attention to the fine-grained influence of a user-generated comment. We define an Influence Ratio (IR) for every comment to evaluate its influence, based on the ratio of its upcoming activities to its preceding activities. Our framework achieves more than 75% accuracy on both critical and light damage URLs in predicting whether upcoming activity will increase or decrease.
Our results also indicate that the relative position of a comment, in terms of its chronological placement, plays a critical role in the influence that the comment wields. For example, Figure 1 (post_id = 10152998395961509) shows a labeled advertisement URL occurring at a very late stage (the 2802nd of 2844 total comments), while Figure 2 (post_id = 10153908916401509) shows a labeled pornography URL occurring near the middle of the comment chronology (the 470th comment of 942 total). We found that, with regard to position, critical and lesser threat level URLs appear at different times in the discussion: the light threat URLs tend to be posted later chronologically, while critical threat URLs tend to appear in the middle of a post's timeline. This phenomenon is borne out very obviously on the CNN public page; however, it is not as dramatic on FOX News. There are at least two reasons for the chronological disparity between the different threat level URLs: (1) Users tend to leave the discussion when they feel an obvious ad has been posted, as in Figure 1. (2) Compared to the lower level threats, attackers who spread critical malicious URLs act in a more strategic manner; they choose the most opportune timing to achieve the greatest amount of influence.
The rest of this paper is organized as follows. Section 2 illustrates how Facebook and social recommendation systems work. In Section 3, we define necessary terms and provide a detailed description for our dataset. The bandwagon effect cross-validation is described in Section 4. We define and predict the influence ratio in Section 5. Related work and our conclusion are given in Section 6 and Section 7, respectively.

2. Facebook and Social Recommendation System

In this section, we introduce one of the most popular online social media services, Facebook, from its humble beginnings as a sort of simple digitized social yearbook limited to only certain universities to a worldwide, incredibly complex, and multi-functional platform. We also describe the information consumption process between Facebook and OSN users.

2.1. Facebook Public Pages

Facebook was launched in 2004, initially providing a platform for students to search for people at the same school and look up friends of friends. Users updated personal information, likes, dislikes, and current activities. While doing this, they also kept track of what others were doing, and used Facebook to check the relationship status of anyone they might meet and be interested in romantically (https://www.eonline.com/news/736769/this-is-how-facebook-has-changed-over-the-past-12-years). As Facebook grew quickly, users were no longer satisfied with merely following the personal status of their close friends on the network; they also demonstrated an interest in public affairs and news. For this reason, public pages on Facebook were created and have become places where users receive news and information selected and promoted by the News Feed, a constantly updating list of stories in the middle of one's homepage, including: (1) friends' activities on Facebook; (2) articles from pages a user is interested in (Liked or Followed); (3) articles that a user's friends like or comment on, posted by people the user is not friends with; and (4) advertisements from sponsoring companies and organizations (https://www.facebook.com/help/327131014036297/). With these new media publication venues on Facebook, users interact with strangers on various public pages: discussing news published by commercial media companies and announcements by public figures, sharing movie reviews, gossiping about an actor, or criticizing the poor performance of a particular sports team. According to Hong et al. [1], there are more than 38,831,367 public pages covering multiple topics, including Brands and Products, Local Business and Places, and Companies and Organizations (https://www.facebook.com/pages/create).

2.2. Social Recommendation System

Most highly trafficked online social media sites contain some variation of a dynamic social recommendation system [2]. It is a continuous process cycle involving two entities, the social computing platform and its active users, and the four processes shown in Figure 3. Here, we explain the processes in more detail.
  • Deliver: Large-scale, user-generated data are disseminated on OSNs; however, only the information judged appropriate for a given audience is delivered to it.
  • Digest: When users see the news, they have the chance to join the discussion by actively typing their opinion, less actively clicking reactions, or passively doing nothing.
  • Derive and Evaluate: The recommendation system collects a large volume of user interaction data and modifies its algorithm to better attract user attention (mostly because of the attention economy [3]). The evaluation step gives Facebook a chance to adjust the social algorithms to deliver more appropriate information (Step 1) to users. The primary concern for Facebook is to maximize clicks on advertisements, which is primarily accomplished by maximizing the time users spend on Facebook.
Attackers logically attempt to maximize the influence they wield in every malicious campaign: having more people see their malicious content, click it, interact with it, or trust it. With the application of behavioral targeting, we believe bad actors spread URLs that are more relevant to their audiences, whose patterns can be collected through data mining or speculation. For example, the bad actors involved in spreading fake news tend to target both the timing of their postings and the locations where fake news is most effectively planted (e.g., politics-related Facebook pages or articles). Other bad actors run accounts hired by commercial enterprises; these have a more limited scope and primarily care about their business. The common thread is that they all make use of the social recommendation system. We have seen that social recommendation system design actually increases the damage of malicious URLs, since it offers attackers a way to spread harmful content in the right place at the right time. Vosoughi et al. indicated that false news is more novel than true news and that humans are more likely to share novel information online [4]. Therefore, the social recommendation system will boost the “rich get richer” effect.

3. Data Description and Labeling

In this section, we define the necessary terminology used in this paper. After providing a high-level overview of the discussion groups dataset, we show how we label filtered URLs into different categories.

3.1. Terminology

We use the following terminology to describe the concepts in our work more exactly:
  • Page: A public discussion group. In this study, we only consider two main media pages: CNN (page_id = 5550296508) and FOX News (page_id = 15704546335).
  • Original Post: An article on a Facebook discussion group.
  • Comments: Text written in response to an original post.
  • Reaction: An emoji response to a comment or original post, including “Like”, “Love”, “Haha”, “Wow”, “Sad”, and “Angry”.
  • Post thread: The original post and all corresponding user activities (Comments and corresponding Reactions), ordered by their timestamps.
  • Target post thread: Post threads which have at least one comment embedded with malicious URL(s).
  • Non-Target Posts: Post threads which have no embedded malicious URLs.
  • Time Series (TS): $TS_j$ indicates a time period $j$ following the created time of the original post, measured in minutes. $TS_{final}$ refers to the precise time 1 h after the original post was created (i.e., $final = 60$).
  • Number of comments ($N_{comment}(post, TS_i)$): The number of post comments collected at $TS_i$.
  • Accumulated number of participants ($AccN_{comment}(post, TS_i)$): The number of post comments between $TS_{i-1}$ and $TS_i$.
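To make these temporal definitions concrete, the following is a minimal sketch of how $N_{comment}$ and $AccN_{comment}$ could be computed from raw comment timestamps. The helper names and epoch-second convention are our own illustration; the 5 min step default matches the interval used in Section 4.

```python
from bisect import bisect_right

def n_comment(comment_times, post_time, ts_i):
    """N_comment(post, TS_i): comments posted within ts_i minutes of the post."""
    cutoff = post_time + ts_i * 60            # timestamps are epoch seconds
    return bisect_right(sorted(comment_times), cutoff)

def acc_n_comment(comment_times, post_time, ts_i, step=5):
    """AccN_comment(post, TS_i): comments arriving between TS_{i-1} and TS_i."""
    return (n_comment(comment_times, post_time, ts_i)
            - n_comment(comment_times, post_time, ts_i - step))

# A post created at t = 0 with comments at minutes 1, 3, 7, and 12:
times = [60, 180, 420, 720]
assert n_comment(times, 0, 5) == 2        # two comments in the first 5 min
assert acc_n_comment(times, 0, 10) == 1   # one comment between minute 5 and 10
```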

3.2. Crawled Dataset

Our data were collected with an open source social crawler called SINCERE (https://github.com/dslfaithdev/SocialCrawler) (Social Interactive Networking and Conversation Entropy Ranking Engine), which has been created and refined by our research group over several years. We employed it from 2014 to 2016 to collect post threads on both the CNN and FOX News public pages to observe the difference between left-wing and right-wing discussion. Detailed information was stored, including the timestamp of each comment, Facebook account identification numbers, and the raw text of comments and articles. In total, we have 48,087 posts, 88,834,886 comments, and 189,460,056 reactions across both pages. We describe the full dataset in Table 1.
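For illustration, here is a small sketch of how per-page statistics in the spirit of Table 1 could be derived from the crawled records. The column names and the URL regex are our own assumptions about the stored schema, not the actual SINCERE output format.

```python
import pandas as pd

# Hypothetical schema for crawled comments; the real SINCERE storage
# format is not specified in the paper.
comments = pd.DataFrame({
    "page": ["CNN", "CNN", "FOX News"],
    "post_id": [1, 1, 2],
    "message": ["great read", "see http://example.com", "agreed"],
})

# Per-page totals: comment counts and the share of comments embedding a URL.
has_url = comments["message"].str.contains(r"https?://", regex=True)
summary = comments.assign(with_url=has_url).groupby("page").agg(
    total_comments=("message", "size"),
    comments_with_urls=("with_url", "sum"),
)
print(summary)
```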

3.3. Labeling URLs

Typically, a URL contains three parts: (1) a protocol; (2) a host name; and (3) a file name. In this paper, we focus on URLs that use the HTTP and HTTPS protocols, and on the host name itself. We first use a well-known whitelist (‘facebook.com’, ‘youtube.com’, ‘twitter’, ‘on.fb.me’, ‘en.wikipedia’, ‘huffingtonpost.com’, ‘foxnews.com’, ‘cnn.com’, ‘google.com’, ‘bbc.co.uk’, ‘nytimes.com’, ‘washingtonpost.com’) as a first-step filter. There is no doubt that plenty of inappropriate content exists on whitelisted domains such as ‘facebook.com’ and ‘youtube.com’; however, the scope of this work is limited to standard blacklists adopted by third-party cyber-security engines. Accordingly, we then employ the daily-updated Shalla Blacklist service [5], a collection of URL lists grouped into several categories intended for use with URL filters, to label and trace the behavior of URL influence. Note that we do not assume all URLs flagged by Shalla are completely malicious. Among the 74 categories listed, we manually divided targeted URLs into two classes, Light and Critical, explained below (a labeling sketch in code follows the category list):
  • Light:
    Advertising: Includes sites offering banners and advertising companies.
    Shopping: Sites offering online shopping and price comparisons.
    Gamble: Poker, casino, bingo, and other chance games as well as betting sites.
    Porn: Sites with sexual content.
  • Critical:
    Download: This covers mostly file-sharing, p2p, torrent sites, and drive-by-downloads.
    Hacking: Sites with information and discussions about security weaknesses and how to exploit them.
    Spyware: Sites that try to actively install software (or lure the user to do so) in order to spy on user behavior. This also includes trojan and phishing sites.
    Aggressive: Sites of obvious aggressive content, including hate speech and racism.
    Drugs: Sites offering drugs or explaining how to make drugs (legal and illegal).
    Weapons: Sites offering weapons or accessories for weapons.
    Violence: Sites about killing or harming people or animals.
We classify all other URLs as Benign if they are not in the Whitelist, Light, or Critical classes. The detailed numbers for each category are listed in Table 2.
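As referenced above, the two-step filter can be sketched as follows: check the whitelist first, then map the Shalla category of the host onto the Light/Critical classes. The category keys and the shalla_lookup table are illustrative assumptions; Shalla distributes its lists as per-category domain files, and the paper does not specify the exact matching logic.

```python
from urllib.parse import urlparse

WHITELIST = {"facebook.com", "youtube.com", "twitter", "on.fb.me", "en.wikipedia",
             "huffingtonpost.com", "foxnews.com", "cnn.com", "google.com",
             "bbc.co.uk", "nytimes.com", "washingtonpost.com"}
LIGHT = {"adv", "shopping", "gamble", "porn"}          # Shalla category names
CRITICAL = {"downloads", "hacking", "spyware",         # vary; these keys are
            "aggressive", "drugs", "weapons", "violence"}  # illustrative only

def label_url(url, shalla_lookup):
    """Two-step labeling: whitelist first, then Shalla category -> class."""
    host = urlparse(url).hostname or ""
    if any(host == w or host.endswith("." + w) or w in host for w in WHITELIST):
        return "Whitelist"
    category = shalla_lookup.get(host)     # host -> Shalla category, if listed
    if category in LIGHT:
        return "Light"
    if category in CRITICAL:
        return "Critical"
    return "Benign"

print(label_url("https://news.cnn.com/story", {}))                        # Whitelist
print(label_url("http://casino.example", {"casino.example": "gamble"}))   # Light
```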

4. Post-Level Influence

Heterogeneous posts are updated and refreshed at tremendous speed and include videos, photos with attractive headlines, and assorted topics such as international affairs, elections or entertainment. We are interested in why malicious URLs often occur on only some post threads. We applied the model proposed by Wang et al. [6] to gain insight into how those malicious URLs may influence the growth of the conversation.

4.1. Bandwagon Effect and Attacker Cost

A phenomenon known as the bandwagon effect explains how individuals will agree with a larger group of people with whom they may not normally agree, doing so to feel part of a group: individuals are more likely to side with sub-groups that share similar thoughts and feel uncomfortable in the presence of minority groups holding different ideas [7]. Many voting behaviors are related to this effect; voters may not follow their own conscience in making a voting decision but may simply follow the majority opinion [8].
From the results obtained by Lai et al. [9], when considering posts targeted with a comment that includes a malicious URL, we see that the most commonly attacked articles tend to generate large amounts of discussion. Moreover, the targets may be those suggested to the attacker by the Facebook social recommendation system.
The following example indicates how a large number of majority opinions might be identified in a Facebook discussion. Assume three posts, $post_A$, $post_B$, and $post_C$, have been posted on a public page at around the same time; the original posters' identities are irrelevant. In addition, assume there exist three different users. $user_A$ visits the page and browses all three posts, has no signals from others for making a decision to engage, and subsequently decides to comment only on $post_B$ because it was subjectively the most interesting one to them. Five minutes later, $user_B$ visits the same page and sees that only $post_B$ has a comment. This user then checks $post_B$ first and decides to add their own reply, either in response to the original post or to $user_A$. Note that until now there are no comments on either $post_A$ or $post_C$. Some short time later, $user_C$ checks the page and finds that $post_B$ has more than 10 comments, while $post_A$ and $post_C$ still have none. They then decide to add a comment to $post_B$, since $post_B$ is the first post pushed to them by Facebook: at that moment, $post_B$ has a relatively larger share of public attention compared to $post_A$ or $post_C$. This is an example of the information cascade phenomenon first proposed by Bikhchandani, Hirshleifer, and Welch [10], and most social media recommendation systems intensify this phenomenon: information and user activities are automatically selected by an algorithm, although most users do not realize this when participating in OSN discussion groups.

4.2. Prediction Model and Evaluation Method

To differentiate the information cascade model between target and non-target post threads, we describe a system designed to use the time series and the number of current comments to predict how many new users are likely to participate in each respective thread. The Discussion Atmosphere Vector (DAV) [9] defined in our previous work uses the definition of the accumulated number of participants given in Section 3, with 5 min for the interval $i$ and 2 h for $t_{final}$.
$DAV(Post)_{t_n} = [AccN_{comment}(Post, t_1), AccN_{comment}(Post, t_2), \ldots, AccN_{comment}(Post, t_n)]$
In the bandwagon effect model proposed by Wang et al. [6], the numbers of comments in each time window after a post has been created can be used to build matrices for each public page $G$ to predict the final number of comments via machine learning and statistical methods. In other words, two post threads $post_A$ and $post_B$ are likely to have the same scale of cascades if, for each timestamp $i$:
$DAV(Post_A)_{t_i} \approx DAV(Post_B)_{t_i}$
We then define a distribution matrix $D$, with each element $D_{ij}$ representing the set of posts $post \in G$ whose aggregate number of comments at time $i$ is $j$ ($j = N_{comment}(post, TS_i)$), recording for each post the final number of comments we crawled, $N_{comment}(post, TS_{final})$.
$D_{ij}(G) = \{ N_{comment}(post, TS_{final}(post)) \mid j = N_{comment}(post, TS_i(post));\ post \in G \}$
Based on the distribution matrix $D$, we use a bootstrapping method [11] to construct the prediction matrix $M$.
$M_{ij} = Bootstrapping(D_{ij})$
The matrix $M$ is used to create a prediction function $F_{predict}$ that takes two inputs from any new post thread: the observed time series $TS_{ob}(post)$ and the corresponding feature $N_{comment}(post, TS_{ob})$. According to $M$, we obtain the result using the following equation:
$F_{predict}(TS_{ob}(post), N_{comment}(post, TS_{ob})) = M_{TS_{ob}(post),\, N_{comment}(post, TS_{ob})}$
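The pipeline above can be sketched in a few lines: build the distribution matrix $D$ from training posts, bootstrap each cell into an estimate in $M$, and look up new posts. The bootstrap details below (resample count, use of a low percentile to match the lower-bound reading in Section 4.3) are our assumptions; the paper only cites the general method [11].

```python
import random
from collections import defaultdict

def build_distribution(posts):
    """D_ij: final comment counts of posts that had j comments at minute i."""
    D = defaultdict(list)
    for timeline, final_count in posts:   # timeline: {minute i: N_comment at i}
        for i, j in timeline.items():
            D[(i, j)].append(final_count)
    return D

def bootstrap_lower_bound(samples, n_resamples=1000, alpha=0.05):
    """Conservative bootstrap estimate of the final cascade size."""
    means = [sum(random.choices(samples, k=len(samples))) / len(samples)
             for _ in range(n_resamples)]
    return sorted(means)[int(alpha * n_resamples)]

def f_predict(M, ts_ob, n_ob):
    """F_predict: look up the expected final size; None marks an unpredictable post."""
    return M.get((ts_ob, n_ob))

# Toy training data: ({minute: running comment count}, final comment count).
posts = [({5: 5, 10: 9}, 120), ({5: 5, 10: 8}, 90), ({5: 1, 10: 2}, 10)]
D = build_distribution(posts)
M = {cell: bootstrap_lower_bound(counts) for cell, counts in D.items()}
print(f_predict(M, 5, 5))   # lower-bound final size for 5 comments at minute 5
```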

4.3. Result and Discussion

Our bandwagon cross-validation results between target and non-target post threads are presented in Table 3 and Table 4. Note that the prediction function can sometimes fail for either of two reasons: insufficient features for a post to match $M$ in the testing data, or insufficient existing posts within the training data. In our experiment, we have enough training data, so the unpredictable posts (about 1% for both pages) are posts that do not have enough activity for $M$ to predict the final cascade size.
Basically, there are no obvious differences in the first 2 h of activity, with respect to the final number of comments, between target and non-target post threads. This suggests that malicious URLs did not affect the life cycle of post threads; people still engaged with the target post threads despite the threat of malicious URLs. On the other hand, our results also indicate that the Facebook social recommendation system continued to deliver post threads containing malicious URLs to audiences, just as it treats normal post threads.
Recall that the bootstrapping method in this experiment only provides a lower bound. For example, if $M_{5,5} = 100$, any post that reaches five comments in the first five minutes is predicted to have 100 comments or more. However, 200 and 2000 are both greater than 100, and they are not on the same scale. To compare the final comment cascades of targets and non-targets, we therefore also conduct two-sample Kolmogorov–Smirnov (KS) tests on the distributions of the final number of comments of the two sets. The results are shown in Table 5 and Table 6. In general, for both the FOX News and CNN pages, the final number of comments of target post threads is clearly greater than that of non-target ones. This can be attributed to attackers being led by the Facebook social recommendation system; in other words, their targets are not chosen by the attackers themselves but mostly by social algorithms. Moreover, we also noticed that FOX News attracts more people to join the discussion than CNN (about four times the mean number of comments per post thread).
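The KS comparison itself is a one-liner with SciPy. The lognormal samples below are synthetic stand-ins for the two sets of final comment counts, chosen only because cascade sizes are heavy-tailed; they are not the paper's data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical final comment counts for target and non-target threads.
target = rng.lognormal(mean=6.5, sigma=1.5, size=5053)
non_target = rng.lognormal(mean=5.5, sigma=1.5, size=15869)

stat, p_value = ks_2samp(target, non_target)
print(f"D = {stat:.3f}, p = {p_value:.3g}")
```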

5. Influence Ratio of a Comment

A post thread is the basic unit for considering user interaction. In the previous section, we showed that target and non-target post threads are very similar in their first 2 h of activity; however, the KS tests show that their final cascade sizes (numbers of comments) are on extremely different scales. In this section, we turn our attention to temporal neighbors: users who interact within the same time period on the same post thread, even though they are not mutual friends on Facebook.

5.1. Preceding and Upcoming Activities

Consider an original post released by a news media outlet on its public page. This post can be a video, a photo, or even just a short paragraph of text, and it can attract many user activities. Consider an original article ($post$) whose corresponding comments $C_1, C_2, \ldots, C_n$ are ordered by their creation timestamps ($Time(C_1), Time(C_2), \ldots, Time(C_n)$). To evaluate the influence of a comment $C_i$ created at $Time(C_i)$, given a time window $\Delta T$, we define the Influence Ratio (IR) as the log ratio between all activities occurring in the upcoming time window ending at $Time(C_i) + \Delta T$ and those in the preceding time window starting at $Time(C_i) - \Delta T$.
$InfluenceRatio(C_i, \Delta T) = \log\left(\frac{count(activities)(Time(C_i) + \Delta T)}{count(activities)(Time(C_i) - \Delta T)}\right)$
We count the comment itself in the preceding time period $(Time(C_i) - \Delta T)$ to avoid the denominator becoming zero. Activities include all comments, likes, and reactions. If IR is greater than 0, it means people will be more interested in this post and this comment in the next time slot $Time(C_i) + \Delta T$.
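A direct implementation of this definition, with the comment itself counted in the preceding window as described above, might look as follows; the helper name and minute-based timestamps are illustrative.

```python
import math

def influence_ratio(activity_times, t_comment, delta_t):
    """IR: log of upcoming activity count over preceding activity count.

    The comment itself falls in the preceding window, so the denominator
    is always at least 1, matching the convention stated in the text.
    Activities are comments, likes, and reactions, given as timestamps.
    """
    preceding = sum(1 for t in activity_times
                    if t_comment - delta_t <= t <= t_comment)
    upcoming = sum(1 for t in activity_times
                   if t_comment < t <= t_comment + delta_t)
    return math.log(upcoming / preceding) if upcoming else float("-inf")

# Activities at these minutes; the comment of interest lands at minute 10.
acts = [9, 9.5, 10, 10.2, 10.4, 10.9]
print(influence_ratio(acts, t_comment=10, delta_t=1))   # log(3/3) = 0.0
```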

5.2. Predict and Evaluate Influence Ratio

The time differences between two consecutive comments $C_{n-1}$ and $C_n$ vary greatly. Several studies have shown that post threads exhibit rich-get-richer [12] and bandwagon [13] effects, which indicates that nearby comments and reactions interact with and influence one another; everyone is a potential amplifier. Consider two users $User(C_i)$ and $User(C_j)$ who contribute $C_i$ and $C_j$. If $|i - j|$ is close to 1, they have a higher chance of interacting with each other since: (1) the social recommendation system delivered this post to both users because of their past activities and browsing footprints; and (2) they were online on social media at around the same time (this is not always true, since we also need to consider the time difference between $Time(C_i)$ and $Time(C_j)$). However, they may not be friends with each other but merely have overlapping active time on Facebook. To measure the volume of a specific time period, we define a function $CountActivity(post, [Time(Begin), Time(After)])$, which counts all activities (comments, likes, reactions, and replies) for $post$ between $Time(Begin)$ and $Time(After)$. To capture the influence and role of a comment in a post thread, given a time window $\delta T$ and a preceding audience number $N_{prev}$, we define the Preceding Influenced Vector (PIV) of a comment $C_k$ in a post thread $Post$ as follows:
$PIV_i(Post, C_k) = CountActivity(Post, [Time(C_k) - i\,\delta T,\ Time(C_k) + i\,\delta T])$
Our goal is to predict IR, i.e., the volume of the upcoming time windows. In other words, for any comment $C_k$ in an article $post$, the influence ratio problem predicts whether the upcoming audience will be greater than the preceding audience via a classifier ($c \rightarrow \{target, non\text{-}target\}$) based on one set of $PIV(Post, C_k)$. Hence, two arbitrary comments $C_m$ of $Post_m$ and $C_n$ of $Post_n$ are more likely to show the same trend in the upcoming number of activities if:
$PIV(Post_m, C_m) \approx PIV(Post_n, C_n)$
We use comments with benign URLs as training data, trying to predict the IR trends for both Light and Critical URLs. We set the time window $\delta T$ to 1 min and the number of PIV components to 60, which means that, for each comment, we assume the time period up to 1 h influences the IR. We then normalized the input PIV to prevent overfitting. As output, we labeled a positive IR value as increase and a negative IR value as decrease. We applied two popular machine learning classifiers, (1) AdaBoost and (2) Gaussian Naive Bayes, from scikit-learn [14]; a minimal training sketch follows the findings below. For AdaBoost, we set the number of estimators to 50 and the learning rate to 1. The detailed results are shown in Table 7 and Table 8. Overall, we achieve a greater than 75% F1-score in predicting the Influence Ratio for both Light and Critical URLs, and the results for the Light category are better than for the Critical category. We summarize our findings as follows:
  • CNN vs. FOX News: There is no obvious difference between CNN and FOX News with regard to predicting IR for the more critical threats versus the lower level malicious campaigns. We think the reason may be that both the CNN and FOX News feeds are controlled by the same social recommendation system; hence, user activities with respect to temporal features can be predicted with the same ease on either feed.
  • Increase vs. Decrease: In most cases, the F1-score for predicting increase is better than for decrease (CNN Benign to Critical, and FOX News in both cases). We think this is related to our previous experiment on the bandwagon effect. We also notice that, on the CNN page, the IR of Light malicious campaigns tends to decrease, which can reflect either audiences leaving the discussion because of the URL or attacker strategies too inefficient to generate popularity.
  • Classifiers: Better results were obtained by AdaBoost. For social media data, PIV components are not independent of one another, so Naive Bayes, which assumes independent features, does not work well. In other words, when considering group behaviors on OSNs, we believe that boosting-based classifiers such as AdaBoost are better suited than probabilistic classifiers such as Naive Bayes.
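As promised above, here is a minimal sketch of the training setup on synthetic data: 60-component activity vectors (our reading of the PIV as per-minute preceding activity counts), normalized inputs, an increase/decrease label from the sign of IR, and the two scikit-learn classifiers with the stated hyperparameters. The synthetic features and label rule are stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report
from sklearn.preprocessing import normalize

rng = np.random.default_rng(42)

# Synthetic stand-in for PIVs: 60 per-minute activity counts per comment.
X = rng.poisson(lam=3.0, size=(2000, 60)).astype(float)
# Synthetic increase/decrease label, loosely tied to recent activity momentum.
momentum = X[:, -10:].sum(axis=1) - X[:, :10].sum(axis=1)
y = (momentum + rng.normal(0.0, 5.0, size=2000) > 0).astype(int)   # 1 = increase

X = normalize(X)                              # normalized PIV, as in the paper
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

for clf in (AdaBoostClassifier(n_estimators=50, learning_rate=1.0), GaussianNB()):
    clf.fit(X_train, y_train)
    print(type(clf).__name__)
    print(classification_report(y_test, clf.predict(X_test),
                                target_names=["decrease", "increase"]))
```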

5.3. Life Cycle Stage

We noticed that the temporal ordering of activities on post threads is quite interesting. The audience generally grows rapidly to a peak, and then decays more slowly as time goes on. Figure 4 visualizes the relationship among position ratio, IR, and elapsed time since the last comment. We model the life cycle of post threads in three stages (a small sketch encoding these stages follows the list):
  • Rapid growth: On the Facebook page for FOX News, there is an obvious watershed at 50%: from the first comment to about the midway point of a post thread, many users join the discussion; in the next time window, each comment usually sees five times the activities of the previous window, and the time difference from the last comment is usually smaller than 1 min. On the CNN page, however, we observe that IR shows several huge bursts; the reason may be that people provide many reactions to some interesting comments.
  • Slow Decay: For both CNN and FOX News, we observed that, from 50% to about 85%, the volume of comments declines noticeably. At the same time, IR decays slightly and the elapsed time grows.
  • Dormancy: At this stage, the thread has basically passed its shelf life. The elapsed time goes up to more than 10 min while the IR has fallen to almost 0.
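The sketch referenced above encodes the three stages as a simple rule over a comment's position ratio in the thread; the 50% and 85% watersheds are taken directly from the observations in the list, and the helper name is our own.

```python
def life_cycle_stage(comment_index, total_comments):
    """Map a comment's position in the thread to a life cycle stage.

    The 50% and 85% watersheds come from the observations in Section 5.3.
    """
    ratio = comment_index / total_comments
    if ratio <= 0.50:
        return "Rapid growth"
    if ratio <= 0.85:
        return "Slow Decay"
    return "Dormancy"

# The advertisement URL from Figure 1 (comment 2802 of 2844) lands in Dormancy,
# while the critical URL from Figure 2 (comment 470 of 942) lands in Rapid growth.
print(life_cycle_stage(2802, 2844))   # Dormancy
print(life_cycle_stage(470, 942))     # Rapid growth
```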
We also noticed that attackers are more likely to spread malicious URLs in either the Slow Decay or Dormancy stage on CNN, while, on FOX News, the ratio appears to follow a uniform distribution, as shown in Figure 5.

5.4. Attackers’ Footprint

In addition to the post thread life cycle described above, we are also interested in users' other activities. Are they actively expressing their opinions, or just acting as one-time inappropriate information generators? We consider user activities on more than 40,000 public pages around the world from 2011 to 2016 on Facebook. Figure 6 shows that, whether measured by number of comments, number of likes, or number of reactions, users who spread Critical-type malicious URLs are more active than Benign- and Light-type users. Considering their purpose, we noticed that Light-type users tend to lure users to commercial websites, whereas Critical-type users' comments usually advocate relatively personal beliefs and values, which makes them heavier Facebook users who tend to influence others. On FOX News, however, only Light-type users show less activity than the other two groups, so there is no obvious difference between Critical-type and Benign users.

6. Related Work

Security issues surrounding OSN platforms have been growing in importance and profile due to the increasing number of users (and subsequent potential targets) of social media applications. Our related work mainly falls into two categories: (1) popularity on social media; and (2) cyber attack techniques on social media.

6.1. Information Diffusion and Influence

Castillo et al. showed that different categories of news (news vs. in-depth) have different life cycles in terms of social media reactions [15]. In fact, many works aim at observing and predicting the final cascade size of a given post or topic. According to Cheng et al., the factors that make a cascade size more predictable include content features, author features, resharer features, and temporal features [16]. As to the last factor, several papers exploit the reactions within a given fixed time frame to predict whether a post thread will become popular [17,18,19].
Cascades can also be interpreted from the audience's perspective, i.e., why people spend so much time on social media sharing their own opinions with the public. Marwick and Boyd proposed a many-to-many communication model through which individuals conceptualize an imagined audience evoked through their content [20]. Hall et al. [21] demonstrated the impact of the individual on an information cascade. In the interactive communication model [22], in order to participate in the so-called attention economy, people want to attract “eyeballs” in a media-saturated, information-rich world and to influence audiences to like their comments and photos [23,24,25]. Hence, users strategically formulate their profiles and participate in many discussion groups to increase attention.

6.2. Cyber Attack Analysis on Social Media

Although increasing importance has been attached to security issues on social media, most works focus on pursuing a perfect classifier to detect malicious accounts or users from commonly used profile characteristics such as age, number of followers, geo-location, and total number of activities [26,27]. With the development of new security threats on social media such as cyberbullying and fake news, recent research uses social science to understand collective human behavior. Cheng et al. studied the activity differences between organized groups and individuals [16]. Chatzakou et al. noted that people who spread hate are more engaged than typical users [28]. Vosoughi et al. found that false news is more novel than true news and spreads mainly because of humans, not robots [4].

7. Conclusions

In this paper, we describe our work on attacker intention and influence in large-scale malicious URL campaigns using the public Facebook Discussion Groups dataset. Specifically, we focus on examining the differing characteristics of CNN and FOX News discussion threads ranging from 2014 to 2016.
We describe how social recommendation systems work for both target and non-target threads. Moreover, we define an Influence Ratio (IR) for every visible comment on Facebook based on the ratio between the upcoming activities and the preceding activities. We also propose a context-free prediction system to predict whether the trend will increase or decrease, with an F1-score over 75%. From these results, we perform an in-depth analysis of different categories of malicious campaigns. Compared to comments embedded with more critical level threats such as malicious URLs, some lower level threats, such as advertising or commercial shopping URLs, appeared at the very end of the discussion thread. The IR for those commercial sites tends to decrease for at least two reasons: (1) people simply ignore them, since they already know such links only hinder readability; and (2) people no longer want to check those posts, yet the spamming bots do not adapt to the newly arriving information.
The initial results we obtained provide new insight into how malicious URLs influence both the post thread life cycle and audience activities under the Facebook social recommendation algorithm. Our current observations enable us to reconsider new response strategies for handling inappropriate information on social media.

Author Contributions

Conceptualization: C.-M.L. and H.-J.S.; Writing—original draft: C.-M.L. and J.C.; Writing—review and editing: C.-M.L. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hong, Y.; Lin, Y.C.; Lai, C.M.; Wu, S.F.; Barnett, G.A. Profiling Facebook Public Page Graph. In Proceedings of the 2018 International Conference on Computing, Networking and Communications (ICNC), Maui, HI, USA, 5–8 March 2018; pp. 161–165.
  2. Ricci, F.; Rokach, L.; Shapira, B. Introduction to recommender systems handbook. In Recommender Systems Handbook; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1–35.
  3. Davenport, T.H.; Beck, J.C. The Attention Economy: Understanding the New Currency of Business; Harvard Business Press: Boston, MA, USA, 2001.
  4. Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151.
  5. Shalla Secure Services. Shalla's Blacklists. Available online: http://www.shallalist.de (accessed on 27 August 2018).
  6. Wang, K.C.; Lai, C.M.; Wang, T.; Wu, S.F. Bandwagon effect in facebook discussion groups. In Proceedings of the ASE BigData & SocialInformatics 2015, Kaohsiung, Taiwan, 7–9 October 2015; p. 17.
  7. Allen, V.L. Situational factors in conformity. In Advances in Experimental Social Psychology; Elsevier: Amsterdam, The Netherlands, 1965; Volume 2, pp. 133–175.
  8. Van Ginneken, B.; Setio, A.A.; Jacobs, C.; Ciompi, F. Off-the-shelf convolutional neural network features for pulmonary nodule detection in computed tomography scans. In Proceedings of the Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium, New York, NY, USA, 16–19 April 2015; pp. 286–289.
  9. Lai, C.M.; Wang, X.; Hong, Y.; Lin, Y.C.; Wu, S.F.; McDaniel, P.; Cam, H. Attacking strategies and temporal analysis involving Facebook discussion groups. In Proceedings of the 2017 13th International Conference on Network and Service Management (CNSM), Tokyo, Japan, 26–30 November 2017; pp. 1–9.
  10. Bikhchandani, S.; Hirshleifer, D.; Welch, I. A theory of fads, fashion, custom, and cultural change as informational cascades. J. Political Econ. 1992, 100, 992–1026.
  11. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; CRC Press: Boca Raton, FL, USA, 1994.
  12. Helsper, E.J.; Van Deursen, A.J. Do the rich get digitally richer? Quantity and quality of support for digital engagement. Inf. Commun. Soc. 2017, 20, 700–714.
  13. Lee, J.; Hong, I.B. Predicting positive user responses to social media advertising: The roles of emotional appeal, informativeness, and creativity. Int. J. Inf. Manag. 2016, 36, 360–373.
  14. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  15. Castillo, C.; El-Haddad, M.; Pfeffer, J.; Stempeck, M. Characterizing the life cycle of online news stories using social media reactions. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, Baltimore, MD, USA, 15–19 February 2014; pp. 211–223.
  16. Cheng, J.; Adamic, L.; Dow, P.A.; Kleinberg, J.M.; Leskovec, J. Can cascades be predicted? In Proceedings of the 23rd International Conference on World Wide Web, Seoul, Korea, 7–11 April 2014; pp. 925–936.
  17. Kupavskii, A.; Ostroumova, L.; Umnov, A.; Usachev, S.; Serdyukov, P.; Gusev, G.; Kustarev, A. Prediction of retweet cascade size over time. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, Maui, HI, USA, 29 October–2 November 2012; pp. 2335–2338.
  18. Ma, Z.; Sun, A.; Cong, G. On predicting the popularity of newly emerging hashtags in twitter. J. Assoc. Inf. Sci. Technol. 2013, 64, 1399–1410.
  19. Tsur, O.; Rappoport, A. What's in a hashtag? Content based prediction of the spread of ideas in microblogging communities. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, Seattle, WA, USA, 8–12 February 2012; pp. 643–652.
  20. Marwick, A.E.; Boyd, D. I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media Soc. 2011, 13, 114–133.
  21. Hall, R.T.; White, J.S.; Fields, J. Social Relevance: Toward Understanding the Impact of the Individual in an Information Cascade; Cyber Sensing 2016; International Society for Optics and Photonics: Baltimore, MD, USA, 2016; p. 98260C.
  22. Chapanis, A. Interactive human communication. Sci. Am. 1975, 232, 36–46.
  23. Fairchild, C. Building the authentic celebrity: The "Idol" phenomenon in the attention economy. Pop. Music. Soc. 2007, 30, 355–375.
  24. Marwick, A.E. Instafame: Luxury selfies in the attention economy. Public Cult. 2015, 27, 137–160.
  25. Senft, T.M. Camgirls: Celebrity and Community in the Age of Social Networks; Peter Lang: Pieterlen, Switzerland, 2008.
  26. Alsaleh, M.; Alarifi, A.; Al-Salman, A.M.; Alfayez, M.; Almuhaysin, A. Tsd: Detecting sybil accounts in twitter. In Proceedings of the 2014 13th International Conference on Machine Learning and Applications (ICMLA), Detroit, MI, USA, 3–6 December 2014; pp. 463–469.
  27. Miller, Z.; Dickinson, B.; Deitrick, W.; Hu, W.; Wang, A.H. Twitter spammer detection using data stream clustering. Inf. Sci. 2014, 260, 64–73.
  28. Chatzakou, D.; Kourtellis, N.; Blackburn, J.; De Cristofaro, E.; Stringhini, G.; Vakali, A. Hate is not binary: Studying abusive behavior of gamergate on twitter. In Proceedings of the 28th ACM Conference on Hypertext and Social Media, Prague, Czech Republic, 4–7 July 2017; pp. 65–74.
Figure 1. Android apps download links posted to an article about the threat of Ebola.
Figure 2. Pornography URL posted to comment stream of article about an LGBT pride parade.
Figure 3. The framework of user activities on online social media.
Figure 4. Life cycle of two pages regarding IR and elapsed time.
Figure 5. CDFs of different categories' occurrences plotted against life stages of targeted threads.
Figure 6. Users' general activities on Facebook discussion groups.
Table 1. Data description.

Page Name | Total Posts | Total Comments | Comments with URLs | Total Reactions
CNN | 20,922 | 11,882,590 | 412,001 (3.47%) | 24,174,160
FOX News | 27,165 | 76,952,296 | 1,026,525 (1.33%) | 165,285,896
Table 2. URL data description.

Page Name | URL in WhiteList | URL in Light | URL in Critical | URL in Benign
CNN | 194,372 | 3762 | 636 | 213,231
FOX News | 503,480 | 8125 | 1571 | 513,349
Table 3. Bandwagon effect cross validation—target to non-target, observed time = 120 min.

Page Name | Precision/Predictable (%) | Predictable/All (%)
CNN | 15,633/15,866 (97%) | 15,866/15,869 (99%)
FOX News | 17,338/17,448 (99%) | 17,448/17,453 (99%)
Table 4. Bandwagon effect cross validation—non-target to target, observed time = 120 min.

Page Name | Precision/Predictable (%) | Predictable/All (%)
CNN | 5013/5014 (99%) | 5014/5053 (99%)
FOX News | 9706/9712 (99%) | 9712/9712 (100%)
Table 5. CNN statistics on the cascade size. KS-test for target and non-targets: D = 0.068, p ≈ 0.0.

 | N | Mean | SE | Min | Max
Target | 5053 | 1010 | 1558 | 1 | 439,929
Non-Target | 15,869 | 427 | 698 | 1 | 35,591
Table 6. FOX statistics on the cascade size. KS-test for target and non-targets: D = 0.066, p ≈ 0.0.

 | N | Mean | SE | Min | Max
Target | 9712 | 4712 | 11,740 | 24 | 412,621
Non-Target | 17,453 | 1786 | 3436 | 1 | 115,669
Table 7. Influence ratio prediction—Benign to Light, observed time = 60 min, δT = 1 min.

 | Precision | Recall | F1-Score | Number of Samples
CNN (Naive Bayes)
Decrease | 0.86 | 0.49 | 0.62 | 2267
Increase | 0.53 | 0.88 | 0.66 | 1495
avg/total | 0.73 | 0.64 | 0.64 | 3762
CNN (AdaBoost)
Decrease | 0.84 | 0.80 | 0.82 | 2267
Increase | 0.72 | 0.77 | 0.74 | 1495
avg/total | 0.79 | 0.79 | 0.79 | 3762
FOX News (Naive Bayes)
Decrease | 0.69 | 0.42 | 0.52 | 3638
Increase | 0.65 | 0.85 | 0.74 | 4487
avg/total | 0.67 | 0.66 | 0.64 | 8125
FOX News (AdaBoost)
Decrease | 0.81 | 0.69 | 0.74 | 3638
Increase | 0.78 | 0.87 | 0.82 | 4487
avg/total | 0.79 | 0.79 | 0.79 | 8125
Table 8. Influence ratio prediction—Benign to Critical, observed time = 60 min, δT = 1 min.

 | Precision | Recall | F1-Score | Number of Samples
CNN (Naive Bayes)
Decrease | 0.88 | 0.42 | 0.57 | 318
Increase | 0.62 | 0.94 | 0.75 | 318
avg/total | 0.75 | 0.68 | 0.66 | 636
CNN (AdaBoost)
Decrease | 0.83 | 0.71 | 0.77 | 318
Increase | 0.75 | 0.86 | 0.80 | 318
avg/total | 0.79 | 0.79 | 0.79 | 636
FOX News (Naive Bayes)
Decrease | 0.59 | 0.48 | 0.53 | 581
Increase | 0.73 | 0.81 | 0.77 | 990
avg/total | 0.68 | 0.69 | 0.68 | 1571
FOX News (AdaBoost)
Decrease | 0.75 | 0.56 | 0.64 | 581
Increase | 0.78 | 0.89 | 0.83 | 990
avg/total | 0.77 | 0.77 | 0.77 | 1571