Science News

Social media harms teens’ mental health, mounting evidence shows. What now?

Understanding what is going on in teens’ minds is necessary for targeted policy suggestions


Most teens use social media, often for hours on end. Some social scientists are confident that such use is harming their mental health. Now they want to pinpoint what explains the link.


By Sujata Gupta

February 20, 2024 at 7:30 am

In January, Mark Zuckerberg, CEO of Facebook’s parent company Meta, appeared at a congressional hearing to answer questions about how social media potentially harms children. Zuckerberg opened by saying: “The existing body of scientific work has not shown a causal link between using social media and young people having worse mental health.”

But many social scientists would disagree with that statement. In recent years, studies have started to show a causal link between teen social media use and reduced well-being or mood disorders, chiefly depression and anxiety.

Ironically, one of the most cited studies into this link focused on Facebook.

Researchers delved into whether the platform’s introduction across college campuses in the mid-2000s increased symptoms associated with depression and anxiety. The answer was a clear yes, says MIT economist Alexey Makarin, a coauthor of the study, which appeared in the November 2022 American Economic Review. “There is still a lot to be explored,” Makarin says, but “[to say] there is no causal evidence that social media causes mental health issues, to that I definitely object.”

The concern, and the studies, come from statistics showing that social media use in teens ages 13 to 17 is now almost ubiquitous. Two-thirds of teens report using TikTok, and some 60 percent of teens report using Instagram or Snapchat, a 2022 survey found. (Only 30 percent said they used Facebook.) Another survey showed that girls, on average, allot roughly 3.4 hours per day to TikTok, Instagram and Facebook, compared with roughly 2.1 hours among boys. At the same time, more teens are showing signs of depression than ever, especially girls (SN: 6/30/23).

As more studies show a strong link between these phenomena, some researchers are starting to shift their attention to possible mechanisms. Why does social media use seem to trigger mental health problems? Why are those effects unevenly distributed among different groups, such as girls or young adults? And can the positives of social media be teased out from the negatives to provide more targeted guidance to teens, their caregivers and policymakers?

“You can’t design good public policy if you don’t know why things are happening,” says Scott Cunningham, an economist at Baylor University in Waco, Texas.

Increasing rigor

Concerns over the effects of social media use in children have been circulating for years, resulting in a massive body of scientific literature. But those mostly correlational studies could not show if teen social media use was harming mental health or if teens with mental health problems were using more social media.

Moreover, the findings from such studies were often inconclusive, or the effects on mental health so small as to be inconsequential. In one study that received considerable media attention, psychologists Amy Orben and Andrew Przybylski combined data from three surveys to see if they could find a link between technology use, including social media, and reduced well-being. The duo gauged the well-being of over 355,000 teenagers by focusing on questions around depression, suicidal thinking and self-esteem.

Digital technology use was associated with a slight decrease in adolescent well-being, Orben, now of the University of Cambridge, and Przybylski, of the University of Oxford, reported in 2019 in Nature Human Behaviour. But the duo downplayed that finding, noting that researchers have observed similar drops in adolescent well-being associated with drinking milk, going to the movies or eating potatoes.

Holes have begun to appear in that narrative thanks to newer, more rigorous studies.

In one longitudinal study, researchers — including Orben and Przybylski — used survey data on social media use and well-being from over 17,400 teens and young adults to look at how individuals’ responses to a question gauging life satisfaction changed between 2011 and 2018. And they dug into how the responses varied by gender, age and time spent on social media.

Social media use was associated with a drop in well-being among teens during certain developmental periods, chiefly puberty and young adulthood, the team reported in 2022 in Nature Communications. That translated to lower well-being scores around ages 11 to 13 for girls and ages 14 to 15 for boys. Both groups also reported a drop in well-being around age 19. Moreover, among the older teens, the team found evidence for the Goldilocks Hypothesis: the idea that both too much and too little time spent on social media can harm mental health.

“There’s hardly any effect if you look over everybody. But if you look at specific age groups, at particularly what [Orben] calls ‘windows of sensitivity’ … you see these clear effects,” says L.J. Shrum, a consumer psychologist at HEC Paris who was not involved with this research. His review of studies related to teen social media use and mental health is forthcoming in the Journal of the Association for Consumer Research.

Cause and effect

That longitudinal study hints at causation, researchers say. But one of the clearest ways to pin down cause and effect is through natural or quasi-experiments. For these in-the-wild experiments, researchers must identify situations where the rollout of a societal “treatment” is staggered across space and time. They can then compare outcomes among members of the group who received the treatment to those still in the queue — the control group.
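The treated-versus-queue comparison described above can be sketched as a toy difference-in-differences calculation. Everything below — the campuses, semesters, and index values — is hypothetical illustration, not data from the actual study:

```python
import pandas as pd

# Hypothetical panel: one row per campus per semester, with an indicator
# for whether the "treatment" (e.g., a platform's rollout) has arrived yet,
# and each campus's mean score on a mental health index (higher = worse).
df = pd.DataFrame({
    "campus":   ["A", "A", "B", "B", "C", "C"],
    "semester": [1, 2, 1, 2, 1, 2],
    "treated":  [0, 1, 0, 1, 0, 0],   # campus C is still in the queue (control)
    "mh_index": [0.50, 0.62, 0.48, 0.59, 0.51, 0.52],
})

# Difference-in-differences: the change over time among treated campuses,
# minus the change among not-yet-treated (control) campuses.
change = df.pivot_table(index="campus", columns="semester", values="mh_index")
change["delta"] = change[2] - change[1]

treated_campuses = df.loc[df["treated"] == 1, "campus"].unique()
is_treated = change.index.isin(treated_campuses)

treated_delta = change.loc[is_treated, "delta"].mean()
control_delta = change.loc[~is_treated, "delta"].mean()
did_estimate = treated_delta - control_delta
print(round(did_estimate, 3))
```

The control group's change over time stands in for what would have happened to the treated group without the rollout; subtracting it nets out trends common to both groups.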

That was the approach Makarin and his team used in their study of Facebook. The researchers homed in on the staggered rollout of Facebook across 775 college campuses from 2004 to 2006. They combined that rollout data with student responses to the National College Health Assessment, a widely used survey of college students’ mental and physical health.

The team then sought to understand if those survey questions captured diagnosable mental health problems. Specifically, they had roughly 500 undergraduate students respond to questions both in the National College Health Assessment and in validated screening tools for depression and anxiety. They found that mental health scores on the assessment predicted scores on the screenings. That suggested that a drop in well-being on the college survey was a good proxy for a corresponding increase in diagnosable mental health disorders. 
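That validation step — checking that the general survey tracks the dedicated screening instruments — can be illustrated with a minimal correlation check. The scores below are invented; the actual study used regression on responses from roughly 500 undergraduates:

```python
import numpy as np

# Hypothetical validation sample: each student's well-being score from the
# college survey (the proxy; higher = better) and their score on a validated
# depression screening (higher = more symptoms).
survey_score    = np.array([3.1, 2.4, 4.0, 1.8, 2.9, 3.6, 2.2, 1.5])
screening_score = np.array([9.0, 12.0, 5.0, 16.0, 10.0, 7.0, 13.0, 18.0])

# If the survey is a good proxy, lower well-being should track higher
# screening scores: a strong negative correlation.
r = np.corrcoef(survey_score, screening_score)[0, 1]
print(f"correlation: {r:.2f}")
```

A strong association like this is what justifies treating a drop in the broad survey's well-being scores as a stand-in for a rise in diagnosable disorders.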

Compared with campuses that had not yet gained access to Facebook, college campuses with Facebook experienced a 2 percentage point increase in the number of students who met the diagnostic criteria for anxiety or depression, the team found.

When it comes to showing a causal link between social media use in teens and worse mental health, “that study really is the crown jewel right now,” says Cunningham, who was not involved in that research.

A need for nuance

The social media landscape today is vastly different from the landscape of 20 years ago. Facebook is now optimized for maximum addiction, Shrum says, and other newer platforms, such as Snapchat, Instagram and TikTok, have since copied and built on those features. Paired with the ubiquity of social media in general, the negative effects on mental health may well be larger now.

Moreover, social media research tends to focus on young adults — an easier cohort to study than minors. That needs to change, Cunningham says. “Most of us are worried about our high school kids and younger.” 

And so, researchers must pivot accordingly. Crucially, simple comparisons of social media users and nonusers no longer make sense. As Orben and Przybylski’s 2022 work suggested, a teen not on social media might well feel worse than one who briefly logs on. 

Researchers must also dig into why, and under what circumstances, social media use can harm mental health, Cunningham says. Explanations for this link abound. For instance, social media is thought to crowd out other activities or increase people’s likelihood of comparing themselves unfavorably with others. But big data studies, with their reliance on existing surveys and statistical analyses, cannot address those deeper questions. “These kinds of papers, there’s nothing you can really ask … to find these plausible mechanisms,” Cunningham says.

One ongoing effort to understand social media use from this more nuanced vantage point is the SMART Schools project out of the University of Birmingham in England. Pedagogical expert Victoria Goodyear and her team are comparing mental and physical health outcomes among children who attend schools that have restricted cell phone use to those attending schools without such a policy. The researchers described the protocol of that study of 30 schools and over 1,000 students in July in BMJ Open.

Goodyear and colleagues are also combining that natural experiment with qualitative research. They met with 36 five-person focus groups each consisting of all students, all parents or all educators at six of those schools. The team hopes to learn how students use their phones during the day, how usage practices make students feel, and what the various parties think of restrictions on cell phone use during the school day.

Talking to teens and those in their orbit is the best way to get at the mechanisms by which social media influences well-being — for better or worse, Goodyear says. Moving beyond big data to this more personal approach, however, takes considerable time and effort. “Social media has increased in pace and momentum very, very quickly,” she says. “And research takes a long time to catch up with that process.”

Until that catch-up occurs, though, researchers cannot dole out much advice. “What guidance could we provide to young people, parents and schools to help maintain the positives of social media use?” Goodyear asks. “There’s not concrete evidence yet.”


Supreme Court tackles social media and free speech


Nina Totenberg

In a major First Amendment case, the Supreme Court heard arguments on the federal government's ability to combat what it sees as false, misleading or dangerous information online.

ARI SHAPIRO, HOST:

At the Supreme Court today, a majority of the justices seemed highly skeptical of claims that federal officials may be broadly barred from contacts with social media platforms. At issue was a sweeping 5th Circuit Court of Appeals decision. That ruling blocked officials from the White House, the FBI, the CDC and other agencies from asking social media companies to remove certain content. NPR legal affairs correspondent Nina Totenberg reports.

NINA TOTENBERG, BYLINE: Five individuals and two Republican-dominated states claim that the government is violating the First Amendment by systematically pressuring social media companies to take down what the government sees as false and misleading information. The Biden administration counters that White House and agency officials are well within their rights to persuade social media companies about what they see as erroneous information about COVID-19 or foreign interference in an election or even election information about where to vote. Two justices who once worked in the White House - Brett Kavanaugh, a Trump appointee, and Elena Kagan, an Obama appointee - were the most outspoken about the long history of government contacts with media companies. Here's Kavanaugh.

(SOUNDBITE OF ARCHIVED RECORDING)

BRETT KAVANAUGH: I've experienced government press people throughout the federal government who regularly call up the media and berate them.

TOTENBERG: Justice Kagan echoed that sentiment.

ELENA KAGAN: Like Justice Kavanaugh, I've had some experience encouraging press...

KAGAN: ...To suppress their own speech. You just wrote a story that's filled with factual errors. Here are the 10 reasons why you shouldn't do that again. I mean, this happens literally thousands of times a day in the federal government.

TOTENBERG: She and Justice Barrett postulated that the FBI might contact social media companies to tell them that while they might not realize it, they've been posting information from a terrorist group aimed at secret recruitment. Louisiana's solicitor general, Benjamin Aguinaga, argued that when government officials contact social media companies, even encouragement amounts to unconstitutional pressure. That prompted this from Justice Barrett.

BENJAMIN AGUINAGA: I mean...

AMY CONEY BARRETT: Just plain, vanilla encouragement, or does it have to be some kind of, like, significant encouragement? - because encouragement would sweep in an awful lot.

TOTENBERG: Aguinaga, however, didn't have a clear line of differentiation, except to claim that pressuring print and other media outlets is different from pressuring social media platforms. What about publishing classified information, asked Justice Kavanaugh. Are you suggesting the government can't try to get that taken down? Or what about factual inaccuracies? Justice Jackson asked about matters of public safety. What if young people were being injured or killed, carrying out a new online fad that called for jumping out of windows? Couldn't the government legitimately ask platforms to take that down? When the Louisiana solicitor general fudged, Chief Justice Roberts followed up.

JOHN ROBERTS: Under my colleague's hypothetical, it was not necessarily to eliminate viewpoints. It was to eliminate some game that is seriously harming children around the country. And they say, we encourage you to stop that.

AGUINAGA: Your honor, I agree. As a policy matter, it might be great for the government to be able to do that. But the moment that the government identifies an entire category of content that it wishes to not be in the modern public sphere, that is a First Amendment problem.

TOTENBERG: Several justices questioned the record in the case. Justice Kagan said she did not see even one item that supported barring government contacts. Justice Sotomayor put it this way.

SONIA SOTOMAYOR: I have such a problem with your brief, Counselor. You omit information that changes the context of some of your claims. You attribute things to people who it didn't happen to. I'm not sure how we get to prove direct injury in any way.

TOTENBERG: Representing the Biden administration today, Deputy Solicitor General Brian Fletcher took incoming fire, mainly from Justices Alito and Thomas. But he stuck to his contention that when the government seeks to persuade a social media platform to take down a post, that is an attempt at persuasion not coercion. Unlike some of his conservative colleagues, Justice Alito was skeptical of all aspects of the government's argument.

SAMUEL ALITO: There is constant pestering of Facebook and some of the other platforms, and they want to have regular meetings. They suggest rules that should be applied. And I thought, wow, I cannot imagine federal officials taking that approach to the print media.

TOTENBERG: Nina Totenberg, NPR News, Washington.



The State of Online Harassment

Roughly four-in-ten Americans have experienced online harassment, with half of this group citing politics as the reason they think they were targeted. Growing shares face more severe online abuse such as sexual harassment or stalking

Table of contents

  • 1. Personal experiences with online harassment
  • 2. Characterizing people’s most recent online harassment experience
  • 3. Americans’ views on how online harassment should be addressed
  • Acknowledgments
  • Methodology

Pew Research Center has a history of studying online harassment. This report focuses on American adults’ experiences and attitudes related to online harassment. For this analysis, we surveyed 10,093 U.S. adults from Sept. 8 to 13, 2020. Everyone who took part is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology. Here are the questions used for this report, along with responses, and its methodology.

Stories about online harassment have captured headlines for years. Beyond the more severe cases of sustained, aggressive abuse that make the news, name-calling and belittling, derisive comments have come to characterize how many view discourse online – especially in the political realm.

Compared with 2017, a similar share of Americans have experienced some type of online harassment – but more severe encounters have become more common

A Pew Research Center survey of U.S. adults in September finds that 41% of Americans have personally experienced some form of online harassment in at least one of the six key ways that were measured. And while the overall prevalence of this type of abuse is the same as it was in 2017, there is evidence that online harassment has intensified since then.

To begin with, growing shares of Americans report experiencing more severe forms of harassment, which encompass physical threats, stalking, sexual harassment and sustained harassment. Some 15% experienced such problems in 2014 and a slightly larger share (18%) said the same in 2017. That group has risen to 25% today. Additionally, those who have been the target of online abuse are more likely today than in 2017 to report that their most recent experience involved more varied types and more severe forms of online abuse.

In a political environment where Americans are stressed and frustrated and antipathy has grown , online venues often serve as platforms for highly contentious or even extremely offensive political debate. And for those who have experienced online abuse, politics is cited as the top reason for why they think they were targeted.

Defining online harassment

This report measures online harassment using six distinct behaviors:

  • Offensive name-calling
  • Purposeful embarrassment
  • Stalking
  • Physical threats
  • Harassment over a sustained period of time
  • Sexual harassment

Respondents who indicate they have personally experienced any of these behaviors online are considered targets of online harassment in this report. Further, this report distinguishes between “more severe” and “less severe” forms of online harassment. Those who have only experienced name-calling or efforts to embarrass them are categorized in the “less severe” group, while those who have experienced any stalking, physical threats, sustained harassment or sexual harassment are categorized in the “more severe” group.
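The grouping rule above can be expressed as a small coding function. The behavior labels below are paraphrased from the report's list, and the function is just an illustration of the classification logic, not the Center's actual analysis code:

```python
# Behaviors the report treats as "more severe" versus "less severe."
MORE_SEVERE = {"stalking", "physical threats",
               "sustained harassment", "sexual harassment"}
LESS_SEVERE = {"offensive name-calling", "purposeful embarrassment"}

def classify(behaviors):
    """Return 'none', 'less severe', or 'more severe' for one respondent."""
    experienced = set(behaviors)
    if experienced & MORE_SEVERE:
        return "more severe"   # any severe behavior puts the respondent here
    if experienced & LESS_SEVERE:
        return "less severe"   # only name-calling and/or embarrassment
    return "none"              # no harassment experienced

print(classify(["offensive name-calling", "stalking"]))  # → more severe
```

Note that the severe category dominates: a respondent who experienced both name-calling and stalking is counted in the "more severe" group.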

Indeed, 20% of Americans overall – representing half of those who have been harassed online – say they have experienced online harassment because of their political views. This is a notable increase from three years ago, when 14% of all Americans said they had been targeted for this reason. Beyond politics, more also cite their gender or their racial and ethnic background as reasons why they believe they were harassed online.

While these kinds of negative encounters may occur anywhere online, social media is by far the most common venue cited for harassment – a pattern consistent across the Center’s work over the years on this topic. The latest survey finds that 75% of targets of online abuse – equaling 31% of Americans overall – say their most recent experience was on social media.

As online harassment permeates social media, the public is highly critical of the way these companies are tackling the issue. Fully 79% say social media companies are doing an only fair or poor job at addressing online harassment or bullying on their platforms.

But even as social media companies receive low ratings for handling abuse on their sites, a minority of Americans back the idea of holding these platforms legally responsible for harassment that happens on their sites. Just 33% of Americans say that people who have experienced harassment or bullying on social media sites should be able to sue the platforms on which it occurred.

These are some of the key findings from a nationally representative survey of 10,093 U.S. adults conducted online Sept. 8 to 13, 2020, using Pew Research Center’s American Trends Panel.

41% of U.S. adults have personally experienced online harassment, and 25% have experienced more severe harassment

Majority say online harassment is a major problem; 41% have personally experienced this, with more than half of this group experiencing more severe behaviors

On a broad level, Americans agree that online harassment is a problem plaguing digital spaces. Roughly nine-in-ten Americans say people being harassed or bullied online is a problem, including 55% who consider it a major problem.

Many Americans have also had their own experience with being targeted online. While about four-in-ten Americans (41%) have experienced some form of online harassment, growing shares have faced more severe and multiple forms of harassment. For example, in 2014, 15% of Americans said they had been subjected to more severe forms of online harassment. That share is now 25%. There has also been a double-digit increase in those experiencing multiple types of online abuse – rising from 16% to 28% since 2014. This number is also up since 2017, when 19% of Americans had experienced multiple forms of harassing behaviors online.

Many individual types of behaviors are on the rise as well. The shares of Americans who say they have been called an offensive name, purposefully embarrassed or physically threatened while online have all risen since 2014. However, the share who have experienced any of the less severe behaviors is largely on par with that of 2017 (37% in 2020 vs. 36% in 2017).

A majority of younger adults have encountered harassment online

Roughly two-thirds of adults under 30 have been harassed online

Online harassment is a particularly common feature of online life for younger adults, and they are especially prone to facing harassing behaviors that are more serious. Roughly two-thirds of adults under 30 (64%) have experienced any form of the online harassment activities measured in this survey – making this the only age group in which a majority have been subjected to these behaviors. Still, about half of 30- to 49-year-olds have been the target of online harassment, while smaller shares of those ages 50 and older (26%) have encountered at least one of these harassing activities.

A similar pattern is present when looking at those who have faced more severe forms of online abuse: 48% of 18- to 29-year-olds have been targeted online with more severe behaviors, compared with 32% of those ages 30 to 49 and just 12% of those 50 and older.

Gender also plays a role in the types of harassment people are likely to encounter online. Overall, men are somewhat more likely than women to say they have experienced any form of harassment online (43% vs. 38%), but similar shares of men and women have faced more severe forms of this kind of abuse. There are also gender differences in the individual types of negative incidents people have encountered online. Some 35% of men say they have been called an offensive name, versus 26% of women, and being physically threatened online is a more common occurrence for men than for women (16% vs. 11%).

Women, on the other hand, are more likely than men to report having been sexually harassed online (16% vs. 5%) or stalked (13% vs. 9%). Young women are particularly likely to have experienced sexual harassment online. Fully 33% of women under 35 say they have been sexually harassed online, while 11% of men under 35 say the same.

Lesbian, gay or bisexual adults are particularly likely to face harassment online. Roughly seven-in-ten have encountered any harassment online and fully 51% have been targeted for more severe forms of online abuse. By comparison, about four-in-ten straight adults have endured any form of harassment online, and only 23% have undergone any of the more severe behaviors.

Women targeted in online harassment are more than twice as likely as men to say most recent incident was very or extremely upsetting

While men are somewhat more likely than women to experience harassment online, women are more likely to be upset about it and think it is a major problem. Some 61% of women say online harassment is a major problem, while 48% of men agree. In addition, women who have been harassed online are more than twice as likely as men to say they were extremely or very upset by their most recent encounter (34% vs. 14%). Conversely, 61% of men who have been harassed online say they were not at all or a little upset by their most recent incident, while 36% of women said the same. Overall, 24% of those who have experienced online harassment say that their most recent incident was extremely (10%) or very (14%) upsetting.

One-in-five adults report being harassed online for their political views

Growing share of Americans who’ve been harassed online cite their political views as a reason why they think they were targeted

Those who have been harassed were then asked whether they believed certain personal characteristics – political views, gender, race or ethnicity, religion or sexual orientation – played a role in the attacks. Fully 20% of all adults – or 50% of online harassment targets – say they have been harassed online because of their political views. At the same time, 14% of U.S. adults (33% of people who have been harassed online) say they have been harassed based on their gender, while 12% say this occurred because of their race or ethnicity (29% of online harassment targets). Smaller shares point to their religion or their sexual orientation as a reason for their harassment.

Each of these reasons has risen since the Center last asked these questions in 2017. The shares of Americans attributing their harassment to their political views or to their gender each rose by 6 percentage points. Race or ethnicity, sexual orientation and religion each saw a modest rise since 2017.

There are several demographic differences regarding who has been harassed online for their gender or their race or ethnicity. Among adults who have been harassed online, roughly half of women (47%) say they think they have encountered harassment online because of their gender, whereas 18% of men who have been harassed online say the same. Similarly, about half or more Black (54%) or Hispanic online harassment targets (47%) say they were harassed due to their race or ethnicity, compared with 17% of White targets.

Black, Hispanic targets of online harassment more likely than their White counterparts to say they’ve been harassed online because of their race, ethnicity

While small shares overall say their harassment was due to their sexual orientation, 50% of lesbian, gay or bisexual adults who have been harassed online say they think it occurred because of their sexual orientation. By comparison, only 12% of straight online harassment targets say the same. Lesbian, gay or bisexual online harassment targets are also more likely to report having encountered harassment online because of their gender (54%) compared with their straight counterparts (31%).

Men and White adults who have been harassed online are particularly likely to say this harassment was a result of their political views. Harassed men are a full 15 percentage points more likely than their female counterparts to cite political views as the reason they were harassed online (57% vs. 42%). Similarly, White online harassment targets are 18 points more likely than Black or Hispanic targets to point to their political views as the reason they were targeted for abuse online.

And while there are some partisan differences in citing political views as the perceived catalyst for facing harassment, these differences do not hold when accounting for race and ethnicity. For example, White Democrats and Republicans, including independents who lean toward each respective party, who have been harassed are about equally likely to say their political views were the reason they were harassed (55% vs. 57%).

Most online harassment targets say their most recent experience occurred on social media

Majority of people who’ve been harassed online say the most recent experience occurred on social media

As was true in previous Center surveys about online harassment, social media continue to be the most commonly cited online venues where harassment takes place. When asked where their most recent experience with online harassment occurred, 75% of targets of this type of abuse say it happened on social media.

By comparison, much smaller shares of this group mention online forums or discussion sites (25%) or texting or messaging apps (24%) as the location where their most recent experience occurred, while about one-in-ten or more cite online gaming, their personal email account or a dating site or app. In total, 41% of targets of online harassment say their most recent experience of harassment spanned more than one venue.

While social media are the most commonly cited online spaces for both men and women to say they have been harassed, women who have been harassed online are more likely than men to say their most recent experience was on social media (a 13 percentage point gap). On the other hand, men are more likely than women to report their most recent experience occurred while they were using an online forum or discussion site or while online gaming (both with a 13-point gap).

Most Americans are critical of how social media companies address online harassment; only a minority say users should be able to hold sites legally responsible

While most Americans feel that harassment and bullying are a problem online, how to address the issue remains up for debate. The policies used to combat harassment, and the transparency of reporting on how content is moderated, vary drastically across online platforms. Social media companies have been highly criticized for their current tactics in addressing harassment, with advocates saying these companies should be doing more.

A majority say social media companies are doing an only fair or poor job addressing online harassment

The public is similarly critical of social media companies. When asked to rate how well these companies are addressing online harassment or bullying on their platforms, just 18% say social media companies are doing an excellent or good job. Much larger shares – roughly eight-in-ten – say these companies are doing an only fair or poor job.

Despite most Americans being critical of the job social media companies are doing to address harassment, some are optimistic that a variety of possible solutions asked about in the survey could help combat online harassment.

About half of Americans say permanently suspending users if they bully or harass others (51%) or requiring users of these platforms to disclose their real identities (48%) would be very effective in helping to reduce harassment or bullying on social media.

Around four-in-ten say criminal charges for users who bully or harass (43%) or social media companies proactively deleting bullying or harassing posts (40%) would be very effective.

Of the solutions asked about, temporary bans are deemed the least effective. A third (32%) of Americans say users getting temporarily suspended if they bully or harass others would be a very effective measure against harassment. When it comes to holding social media companies accountable for the harassment on their platforms, few think personal lawsuits should be the solution. A third of adults say people who have been bullied or harassed by others on social media should be able to sue the platforms where the harassment occurred, whereas a much larger share – 63% – believe targets of online abuse should not be able to bring legal action against social media sites.

1. The 2014 data was reweighted to be comparable to the data collected in 2017. See the 2017 report's methodology for more information about how this was done.
2. Because of the relatively small sample size and a reduction in precision due to weighting, we are not able to analyze lesbian, gay or bisexual respondents by demographic categories such as gender, age or education.


Uses and Abuses of Social Media

Lisa Garbe (WZB - Berlin Social Science Center), Marc Owen Jones (Hamad bin Khalifa University), David Herbert (UiB) and Lovise Aalen (CMI)

Social media have been hailed as the ultimate democratic tool, enabling users to self-organise and build communities, sometimes even contributing to the fall of dictatorships, as during the Arab Spring. But can social media also reinforce existing power relationships? What happens when access to social media is shut down? When social media are manipulated, so that only one story is shared? 


Hate speech in social media: How platforms can do better

By Morgan Sherburne

With all of the resources, power and influence they possess, social media platforms could and should do more to detect hate speech, says a University of Michigan researcher.


In a report from the Anti-Defamation League , Libby Hemphill, an associate research professor at U-M’s Institute for Social Research and an ADL Belfer Fellow, explores social media platforms’ shortcomings when it comes to white supremacist speech and how it differs from general or nonextremist speech, and recommends ways to improve automated hate speech identification methods.

“We also sought to determine whether and how white supremacists adapt their speech to avoid detection,” said Hemphill, who is also a professor at U-M’s School of Information. “We found that platforms often miss discussions of conspiracy theories about white genocide and Jewish power and malicious grievances against Jews and people of color. Platforms also let decorous but defamatory speech persist.”

How platforms can do better

White supremacist speech is readily detectable, Hemphill says, detailing the ways it is distinguishable from commonplace speech in social media, including:

  • Frequently referencing racial and ethnic groups using plural noun forms (whites, etc.)
  • Appending “white” to otherwise unmarked terms (e.g., power)
  • Using less profanity than is common in social media to elude detection based on “offensive” language
  • Being congruent on both extremist and mainstream platforms
  • Keeping complaints and messaging consistent from year to year
  • Describing Jews in racial, rather than religious, terms
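Markers like these lend themselves to simple automated counting. The sketch below is purely illustrative and not the study's actual method; the word lists and phrases are invented for the example:

```python
import re

# Illustrative only: these word lists are invented for the example and are
# not the markers or lexicons used in the ADL/U-M report.
PLURAL_GROUP_NOUNS = {"whites", "jews", "europeans"}
WHITE_COMPOUNDS = {"white power", "white genocide", "white pride"}
PROFANITY = {"damn", "hell"}

def marker_features(text: str) -> dict:
    """Count simple lexical markers of the kind the report describes."""
    tokens = re.findall(r"[a-z']+", text.lower())
    joined = " ".join(tokens)
    return {
        "plural_group_nouns": sum(t in PLURAL_GROUP_NOUNS for t in tokens),
        "white_compounds": sum(p in joined for p in WHITE_COMPOUNDS),
        # Low profanity is itself a signal here, per the report's findings.
        "profanity_rate": sum(t in PROFANITY for t in tokens) / max(len(tokens), 1),
    }

print(marker_features("They talk about white power and whites constantly"))
```

A real detection model would feed features like these, alongside learned representations, into a classifier; the report's point is that such markers are consistent enough across platforms to be learnable.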

“Given the identifiable linguistic markers and consistency across platforms, social media companies should be able to recognize white supremacist speech and distinguish it from general, nontoxic speech,” Hemphill said.

The research team used commonly available computing resources, existing algorithms from machine learning and dynamic topic modeling to conduct the study.

“We needed data from both extremist and mainstream platforms,” said Hemphill, noting that mainstream user data comes from Reddit and extremist website user data comes from Stormfront.

What should happen next?

Even though the research team found that white supremacist speech is identifiable and consistent when analyzed with more sophisticated computing capabilities and additional data, social media platforms still miss a lot and struggle to distinguish nonprofane, hateful speech from profane, innocuous speech.

“Leveraging more specific training datasets and reducing the emphasis on profanity can improve platforms’ performance,” Hemphill said.

The report recommends that social media platforms: 1) enforce their own rules; 2) use data from extremist sites to create detection models; 3) look for specific linguistic markers; 4) deemphasize profanity in toxicity detection; and 5) train moderators and algorithms to recognize that white supremacists’ conversations are dangerous and hateful.

“Social media platforms can enable social support, political dialogue and productive collective action. But the companies behind them have civic responsibilities to combat abuse and prevent hateful users and groups from harming others,” Hemphill said. “We hope these findings and recommendations help platforms fulfill these responsibilities now and in the future.”

More information:

  • Report: Very Fine People: What Social Media Platforms Miss About White Supremacist Speech
  • Related: Video: ISR Insights Speaker Series: Detecting white supremacist speech on social media
  • Podcast: Data Brunch Live! Extremism in Social Media


Regulating free speech on social media is dangerous and futile

By Niam Yaraghi, nonresident senior fellow, Governance Studies, Center for Technology Innovation (@niamyaraghi)

September 21, 2018

Amid recent news about Google’s post-2016 election meeting, multiple congressional hearings, and attacks by President Trump, social media platforms and technology companies are facing unprecedented criticism from both parties. According to a Gallup survey, 79 percent of Americans believe that these companies should be regulated.

We know that an overwhelming majority of technology entrepreneurs subscribe to a liberal ideology. Despite the claims of companies such as Google, I believe that political biases affect how these companies operate. As my colleague Nicol Turner-Lee explains here, “while computer programmers may not create algorithms that start out being discriminatory, the collection and curation of social preferences eventually can become adaptive algorithms that embrace societal biases.” If we accept that the implicit bias of developers could unintentionally lead their algorithms to be discriminatory, then, by the same token, we should also expect the political biases of those programmers to lead to discriminatory algorithms that favor their ideology.

Empirical evidence supports this intuition. By analyzing a dataset of 10.1 million U.S. Facebook users, a 2014 study demonstrated that liberal users are less likely than their conservative counterparts to be exposed to news content that opposes their political views. Another analysis of Yahoo! search queries concluded that the more right-leaning a query is, the more negative the sentiment found in its search results.

The First Amendment restricts government censorship

The calls for regulating social media and technology companies are politically motivated. Conservatives who support these policies argue that their freedom of speech is being undermined by social media companies that censor their voices. Conservatives who celebrate constitutional originalism should remember that the First Amendment protects against censorship by the government. Social media companies are all private businesses with discretion over the content they wish to promote, and any effort by the government to influence what social media platforms promote risks violating the First Amendment.

Moreover, conservatives’ current position is in direct contrast to their stance on the Fairness Doctrine. As my colleague Tom Wheeler explains here, “when the Fairness Doctrine was repealed in the Reagan Administration, it was hailed by Republicans as a victory for free speech.” Republicans should apply the same standard to both traditional media and modern-day social media. If they believe requiring TV and radio channels to present a fair balance of both sides is a violation of free speech, how can they favor imposing the exact same requirement on social media platforms?

Furthermore, the government intervention they propose is potentially more damaging than the problem they want to solve. If conservatives believe that certain businesses have enough power and influence to infringe on their freedom of speech, how can they propose that government, a much more powerful and influential entity, enter this space? While President Trump’s administration and a Republican-controlled Congress may set policies that favor conservatives in the short term, they would also be setting a very dangerous precedent that would allow later governments to interfere with these companies and other news organizations in the future. If they believe that today’s Twitter has enough power and will to censor them, they should be terrified of allowing tomorrow’s government to do so.

Breaking up social media companies does not help consumers

The second argument that supporters of regulating social media companies make is that these companies have created monopolies, and therefore antitrust laws should be used to break them up and allow smaller competitors to emerge. While it is true that these companies have created very large monopolies, we should not neglect the unique nature of social media, in which users benefit the most only if they are members of a dominant platform. The value of a platform for its users grows with the number of other users. After all, what is the use of Facebook if your friends are not there?

If conservatives genuinely believe in the value of competition and free choice, and at the same time believe that a more conservative social media platform would be of value to consumers, they should start a new platform rather than demanding that existing private platforms become more inclusive of conservative ideas. Just as cable news channels are built to promote the ideologies of a particular political party, social media platforms could be built to promote conservative values.

Mandating ideological diversity is impossible

Others argue that social media and technology companies should become more ideologically diverse and inclusive by hiring more conservatives. I believe in the value of ideological and intellectual diversity. As an academic, I experience it on a daily basis through my interaction with students and colleagues from many different backgrounds. This helps me polish my ideas and create new and exciting ones. New ideas are more likely to emerge and flourish in an intellectually diverse environment.

However, measuring and mandating ideological diversity is impossible. Ideology is a spectrum, not a binary. Rarely does anyone agree with every position of a single party, even its own members. Even in an extremely polarized political environment, in which Americans increasingly favor the extreme ends of both parties’ ideologies, many Republicans do not agree with President Trump’s current immigration policies, just as many Democrats do not agree that ICE should be abolished. Unlike other forms of diversity that promote gender, racial, and sexual equality in the workforce, political ideology cannot be categorized into a limited number of groups. While we can look at the racial composition of a company’s employees and demand that it hire a representative sample of all races, it is not possible to demand a representative sample of political ideologies in the workforce.

Acting to increase ideological diversity would also be impractical. A candidate would hesitate to disclose party affiliation to an employer who might use it to make hiring decisions. What are the chances that a candidate conceals a conservative ideology during an interview for a six-figure-salary job at an overtly liberal Silicon Valley company? And if a company sought to become more diverse by hiring conservatives, would liberal candidates be inclined to present themselves as conservative?

The political bias of social media companies becomes more concerning as more Americans turn to these platforms for news, effectively turning them into news organizations. Despite these concerns, I believe that we should accept such bias as a fact and refrain from regulating social media platforms or mandating that they attain a politically diverse workforce.

Facebook and Google are donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and not influenced by any donation.


Why AI Struggles to Recognize Toxic Speech on Social Media


Automated speech police can score highly on technical tests but miss the mark with people, new research shows. 

Facebook says its artificial intelligence models identified and  pulled down 27 million pieces of hate speech in the final three months of 2020 . In 97 percent of the cases, the systems took action before humans had even flagged the posts.

That’s a huge advance, and all the other major social media platforms are using AI-powered systems in similar ways. Given that people post hundreds of millions of items every day, from comments and memes to articles, there’s no real alternative. No army of human moderators could keep up on its own.

But a team of human-computer interaction and AI researchers at Stanford sheds new light on why automated speech police can score highly on technical tests yet provoke a lot of dissatisfaction from humans with their decisions. The main problem: There is a huge difference between evaluating more traditional AI tasks, like recognizing spoken language, and the much messier task of identifying hate speech, harassment, or misinformation, especially in today’s polarized environment.

Read the study:  The Disagreement Deconvolution: Bringing Machine Learning Performance Metrics In Line With Reality

“It appears as if the models are getting almost perfect scores, so some people think they can use them as a sort of black box to test for toxicity,’’ says Mitchell Gordon, a PhD candidate in computer science who worked on the project. “But that’s not the case. They’re evaluating these models with approaches that work well when the answers are fairly clear, like recognizing whether ‘java’ means coffee or the computer language, but these are tasks where the answers are not clear.”

The team hopes their study will illuminate the gulf between what developers think they’re achieving and the reality — and perhaps help them develop systems that grapple more thoughtfully with the inherent disagreements around toxic speech.

Too Much Disagreement

There are no simple solutions, because there will never be unanimous agreement on highly contested issues. Making matters more complicated, people are often ambivalent and inconsistent about how they react to a particular piece of content.

In one study, for example,  human annotators rarely reached agreement  when they were asked to label tweets that contained words from a lexicon of hate speech. Only 5 percent of the tweets were acknowledged by a majority as hate speech, while only 1.3 percent received unanimous verdicts.  In a study  on recognizing misinformation, in which people were given statements about purportedly true events, only 70 percent agreed on whether most of the events had or had not occurred.
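Majority and unanimity rates like these are straightforward to compute from an annotation matrix. A minimal sketch, using an invented toy matrix rather than the study's data:

```python
# Toy annotation matrix (invented, not the study's data): each row is a tweet,
# each column one annotator's judgment (1 = labeled it hate speech).
annotations = [
    [1, 1, 1],  # unanimous
    [1, 1, 0],  # majority only
    [1, 0, 0],
    [0, 0, 0],
]

# Share of tweets a majority of annotators labeled hate speech, and the
# share on which the annotators were unanimous.
majority_rate = sum(sum(row) > len(row) / 2 for row in annotations) / len(annotations)
unanimous_rate = sum(all(row) for row in annotations) / len(annotations)
print(majority_rate, unanimous_rate)  # 0.5 0.25
```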

Despite this challenge for human moderators, conventional AI models achieve high scores on recognizing toxic speech —  .95 “ROCAUC” — a popular metric for evaluating AI models in which 0.5 means pure guessing and 1.0 means perfect performance. But the Stanford team found that the real score is much lower — at most .73 — if you factor in the disagreement among human annotators.
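ROCAUC can be read as the probability that a randomly chosen positive example is ranked above a randomly chosen negative one, which is why 0.5 corresponds to pure guessing. A minimal, self-contained sketch of the metric:

```python
def roc_auc(labels, scores):
    """Probability that a random positive outranks a random negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model that ranks every toxic example above every benign one scores 1.0;
# constant scores (pure guessing) give 0.5.
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # 1.0
print(roc_auc([1, 1, 0, 0], [0.5, 0.5, 0.5, 0.5]))  # 0.5
```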

Reassessing the Models

In a new study,  the Stanford team re-assesses the performance of today’s AI models by getting a more accurate measure of what people truly believe and how much they disagree among themselves.

The study was overseen by  Michael Bernstein  and  Tatsunori Hashimoto , associate and assistant professors of computer science and faculty members of the  Stanford Institute for Human-Centered Artificial Intelligence  (HAI). In addition to Gordon, Bernstein, and Hashimoto, the paper’s co-authors include Kaitlyn Zhou, a PhD candidate in computer science, and Kayur Patel, a researcher at Apple Inc.

To get a better measure of real-world views, the researchers developed an algorithm to filter out the “noise” — ambivalence, inconsistency, and misunderstanding — from how people label things like toxicity, leaving an estimate of the amount of true disagreement. They focused on whether each annotator repeatedly labeled the same kind of language in the same way. The most consistent or dominant responses became what the researchers call "primary labels," which they then used as a more precise dataset capturing more of the true range of opinions about potentially toxic content.

The team then used that approach to refine datasets that are widely used to train AI models in spotting toxicity, misinformation, and pornography. By applying existing AI metrics to these new “disagreement-adjusted” datasets, the researchers revealed dramatically less confidence about decisions in each category. Instead of getting nearly perfect scores on all fronts, the AI models achieved only .73 ROCAUC in classifying toxicity and 62 percent accuracy in labeling misinformation. Even for pornography — as in, “I know it when I see it” — the accuracy was only .79.
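One way to picture a "disagreement-adjusted" evaluation, which is a simplification of the paper's actual procedure, is to score a model against every individual annotator's label instead of a single gold label; irreducible human disagreement then caps the achievable score:

```python
from collections import Counter

# Invented toy data (names and numbers are illustrative, not the paper's).
# 1 = the annotator considered the post toxic.
annotations = {
    "post_a": [1, 1, 1],
    "post_b": [1, 0, 0],
    "post_c": [0, 0, 1],
}
predictions = {"post_a": 1, "post_b": 0, "post_c": 0}

def primary_label(labels):
    """Most common annotator response for an item."""
    return Counter(labels).most_common(1)[0][0]

def disagreement_adjusted_accuracy(annotations, predictions):
    """Average agreement between the model's prediction and each individual label."""
    total = sum(len(labels) for labels in annotations.values())
    agree = sum(predictions[item] == y
                for item, labels in annotations.items()
                for y in labels)
    return agree / total

# Even a model matching every primary label cannot exceed 7/9 here,
# because the annotators themselves disagree.
print(disagreement_adjusted_accuracy(annotations, predictions))
```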

Someone Will Always Be Unhappy. The Question Is Who?

Gordon says AI models, which must ultimately make a single decision, will never assess hate speech or cyberbullying to everybody’s satisfaction. There will always be vehement disagreement. Giving human annotators more precise definitions of hate speech may not solve the problem either, because people end up suppressing their real views in order to provide the “right” answer.

But if social media platforms have a more accurate picture of what people really believe, as well as which groups hold particular views, they can design systems that make more informed and intentional decisions.

In the end, Gordon suggests, annotators as well as social media executives will have to make value judgments with the knowledge that many decisions will always be controversial.

“Is this going to resolve disagreements in society? No,” says Gordon. “The question is what can you do to make people less unhappy. Given that you will have to make some people unhappy, is there a better way to think about whom you are making unhappy?”

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.  Learn more . 

Why AI Struggles To Recognize Toxic Speech on Social Media - by Edmund L. Andrews -  Human-Centered Artificial Intelligence  - July 13, 2021



What to Know About the Supreme Court Arguments on Social Media Laws

Both Florida and Texas passed laws regulating how social media companies moderate speech online. The laws, if upheld, could fundamentally alter how the platforms police their sites.


By David McCabe

McCabe reported from Washington.

Social media companies are bracing for Supreme Court arguments on Monday that could fundamentally alter the way they police their sites.

After Facebook, Twitter and YouTube barred President Donald J. Trump in the wake of the Jan. 6, 2021, riots at the Capitol, Florida made it illegal for technology companies to ban from their sites a candidate for office in the state. Texas later passed its own law prohibiting platforms from taking down political content.

Two tech industry groups, NetChoice and the Computer & Communications Industry Association, sued to block the laws from taking effect. They argued that the companies have the right to make decisions about their own platforms under the First Amendment, much as a newspaper gets to decide what runs in its pages.

So what’s at stake?

The Supreme Court’s decision in those cases — Moody v. NetChoice and NetChoice v. Paxton — is a big test of the power of social media companies, potentially reshaping millions of social media feeds by giving the government influence over how and what stays online.

“What’s at stake is whether they can be forced to carry content they don’t want to,” said Daphne Keller, a lecturer at Stanford Law School who filed a brief with the Supreme Court supporting the tech groups’ challenge to the Texas and Florida laws. “And, maybe more to the point, whether the government can force them to carry content they don’t want to.”

If the Supreme Court says the Texas and Florida laws are constitutional and they take effect, some legal experts speculate that the companies could create versions of their feeds specifically for those states. Still, such a ruling could usher in similar laws in other states, and it is technically complicated to accurately restrict access to a website based on location.

Critics of the laws say the feeds to the two states could include extremist content — from neo-Nazis, for example — that the platforms previously would have taken down for violating their standards. Or, the critics say, the platforms could ban discussion of anything remotely political by barring posts about many contentious issues.

What are the Florida and Texas social media laws?

The Texas law prohibits social media platforms from taking down content based on the “viewpoint” of the user or expressed in the post. The law gives individuals and the state’s attorney general the right to file lawsuits against the platforms for violations.

The Florida law fines platforms if they permanently ban from their sites a candidate for office in the state. It also forbids the platforms from taking down content from a “journalistic enterprise” and requires the companies to be upfront about their rules for moderating content.

Proponents of the Texas and Florida laws, which were passed in 2021, say that they will protect conservatives from the liberal bias that they say pervades the platforms, which are based in California.

“People the world over use Facebook, YouTube, and X (the social-media platform formerly known as Twitter) to communicate with friends, family, politicians, reporters, and the broader public,” Ken Paxton, the Texas attorney general, said in one legal brief. “And like the telegraph companies of yore, the social media giants of today use their control over the mechanics of this ‘modern public square’ to direct — and often stifle — public discourse.”

Chase Sizemore, a spokesman for the Florida attorney general, said the state looked “forward to defending our social media law that protects Floridians.” A spokeswoman for the Texas attorney general did not provide a comment.

What are the current rights of social media platforms?

They now decide what does and doesn’t stay online.

Companies including Meta’s Facebook and Instagram, TikTok, Snap, YouTube and X have long policed themselves, setting their own rules for what users are allowed to say while the government has taken a hands-off approach.

In 1997, the Supreme Court ruled that a law regulating indecent speech online was unconstitutional, differentiating the internet from mediums where the government regulates content. The government, for instance, enforces decency standards on broadcast television and radio.

For years, bad actors have flooded social media with misleading information , hate speech and harassment, prompting the companies to come up with new rules over the last decade that include forbidding false information about elections and the pandemic. Platforms have banned figures like the influencer Andrew Tate for violating their rules, including against hate speech.

But there has been a right-wing backlash to these measures, with some conservatives accusing the platforms of censoring their views — and even prompting Elon Musk to say he wanted to buy Twitter in 2022 to help ensure users’ freedom of speech.

What are the social media platforms arguing?

The tech groups say that the First Amendment gives the companies the right to take down content as they see fit, because it protects their ability to make editorial choices about the content of their products.

In their lawsuit against the Texas law, the groups said that just like a magazine’s publishing decision, “a platform’s decision about what content to host and what to exclude is intended to convey a message about the type of community that the platform hopes to foster.”

Still, some legal scholars are worried about the implications of allowing the social media companies unlimited power under the First Amendment, which is intended to protect the freedom of speech as well as the freedom of the press.

“I do worry about a world in which these companies invoke the First Amendment to protect what many of us believe are commercial activities and conduct that is not expressive,” said Olivier Sylvain, a professor at Fordham Law School who until recently was a senior adviser to the Federal Trade Commission chair, Lina Khan.

How does this affect Big Tech’s liability for content?

A federal law known as Section 230 of the Communications Decency Act shields the platforms from lawsuits over most user content. It also protects them from legal liability for how they choose to moderate that content.

That law has been criticized in recent years for making it impossible to hold the platforms accountable for real-world harm that flows from posts they carry, including online drug sales and terrorist videos.

The cases being argued on Monday do not challenge that law head-on. But the Section 230 protections could play a role in the broader arguments over whether the court should uphold the Texas and Florida laws. And the state laws would indeed create new legal liability for the platforms if they take down certain content or ban certain accounts.

Last year, the Supreme Court considered two cases, directed at Google’s YouTube and Twitter, that sought to limit the reach of the Section 230 protections. The justices declined to hold the tech platforms legally liable for the content in question.

What comes next?

The court will hear arguments from both sides on Monday. A decision is expected by June.

Legal experts say the court may rule that the laws are unconstitutional, but provide a road map on how to fix them. Or it may uphold the companies’ First Amendment rights completely.

Carl Szabo, the general counsel of NetChoice, which represents companies including Google and Meta and lobbies against tech regulations, said that if the group’s challenge to the laws fails, “Americans across the country would be required to see lawful but awful content” that could be construed as political and therefore covered by the laws.

“There’s a lot of stuff that gets couched as political content,” he said. “Terrorist recruitment is arguably political content.”

But if the Supreme Court rules that the laws violate the Constitution, it will entrench the status quo: Platforms, not anybody else, will determine what speech gets to stay online.

Adam Liptak contributed reporting.

David McCabe covers tech policy. He joined The Times from Axios in 2019.

Greater Good Science Center

How to use social media wisely and mindfully

It’s time to be clear about how social media affects our relationships and well-being—and what our intentions are each time we log on.

It was none other than Facebook’s former vice president for user growth, Chamath Palihapitiya, who advised people to take a “hard break” from social media. “We have created tools that are ripping apart the social fabric of how society works,” he said recently.

His comments echoed those of Facebook founding president Sean Parker. Social media provides a “social validation feedback loop (‘a little dopamine hit…because someone liked or commented on a photo or a post’),” he said. “That’s exactly the thing a hacker like myself would come up with because you’re exploiting a vulnerability in human psychology.”

Are their fears overblown? What is social media doing to us as individuals and as a society?


Since over 70 percent of American teens and adults are on Facebook and over 1.2 billion users visit the site daily—with the average person spending over 90 minutes a day on all social media platforms combined—it’s vital that we gain wisdom about the social media genie, because it’s not going back into the bottle. Our wish to connect with others and express ourselves may indeed come with unwanted side effects.

The problems with social media

Social media is, of course, far from being all bad. There are often tangible benefits that follow from social media use. Many of us log on to social media for a sense of belonging, self-expression, curiosity, or a desire to connect. Apps like Facebook and Twitter allow us to stay in touch with geographically dispersed family and friends, communicate with like-minded others around our interests, and join with an online community to advocate for causes dear to our hearts.

Honestly sharing about ourselves online can enhance our feelings of well-being and online social support, at least in the short term. Facebook communities can help break down the stigma and negative stereotypes of illness, while social media, in general, can “serve as a spring board” for the “more reclusive…into greater social integration,” one study suggested.

But Parker and Palihapitiya are on to something when they talk about the addictive and socially corrosive qualities of social media. Facebook “addiction” (yes, there’s a test for this) looks similar on an MRI scan in some ways to substance abuse and gambling addictions. Some users even go to extremes to chase the highs of likes and followers. Twenty-six-year-old Wu Yongning recently fell to his death in pursuit of selfies precariously taken atop skyscrapers.

Facebook can also exacerbate envy. Envy is nothing if not corrosive of the social fabric, turning friendship into rivalry, hostility, and grudges. Social media tugs at us to view each other’s “highlight reels,” and all too often, we feel ourselves lacking by comparison. This can fuel personal growth, if we can turn envy into admiration, inspiration, and self-compassion; but, instead, it often causes us to feel dissatisfied with ourselves and others.

For example, a 2013 study by Ethan Kross and colleagues showed quite definitively that the more time young adults spent on Facebook, the worse off they felt. Participants were texted five times daily for two weeks to answer questions about their well-being, direct social contact, and Facebook use. The people who spent more time on Facebook felt significantly worse later on, even after controlling for other factors such as depression and loneliness. 

Interestingly, those spending significant time on Facebook, but also engaging in moderate or high levels of direct social contact, still reported worsening well-being. The authors hypothesized that the comparisons and negative emotions triggered by Facebook were carried into real-world contact, perhaps damaging the healing power of in-person relationships.

More recently, Holly Shakya and Nicholas Christakis studied 5,208 adult Facebook users over two years, measuring life satisfaction and mental and physical health over time. All these outcomes were worse with greater Facebook use, and the way people used Facebook (e.g., passive or active use, liking, clicking, or posting) didn’t seem to matter.

“Exposure to the carefully curated images from others’ lives leads to negative self-comparison, and the sheer quantity of social media interaction may detract from more meaningful real-life experiences,” the researchers concluded.

How to rein in social media overuse

So, what can we do to manage the downsides of social media? One idea is to log out of Facebook completely and take that “hard break.” Researcher Morten Tromholt of Denmark found that after taking a one-week break from Facebook, people had higher life satisfaction and positive emotions compared to people who stayed connected. The effect was especially pronounced for “heavy Facebook users, passive Facebook users, and users who tend to envy others on Facebook.”

We can also become more mindful and curious about social media’s effects on our minds and hearts, weighing the good and bad. We should ask ourselves how social media makes us feel and behave, and decide whether we need to limit our exposure to social media altogether (by logging out or deactivating our accounts) or simply modify our social media environment. Some people I’ve spoken with find ways of cleaning up their newsfeeds—from hiding everyone but their closest friends to “liking” only reputable news, information, and entertainment sources.

Knowing how social media affects our relationships, we might limit social media interactions to those that support real-world relationships. Instead of lurking or passively scrolling through a never-ending bevy of posts, we can stop to ask ourselves important questions, like What are my intentions? and What is this online realm doing to me and my relationships?

We each have to come to our own individual decisions about social media use, based on our own personal experience. Grounding ourselves in the research helps us weigh the good and bad and make those decisions. Though the genie is out of the bottle, we may find, as Shakya and Christakis put it, that “online social interactions are no substitute for the real thing,” and that in-person, healthy relationships are vital to society and our own individual well-being. We would do well to remember that truth and not put all our eggs in the social media basket.

About the Author

Ravi Chandra

Ravi Chandra is a psychiatrist, writer, and compassion educator in San Francisco, and a distinguished fellow of the American Psychiatric Association.



Be flexible … the language we use online doesn’t have to reflect everyday speech.

How the internet changed the way we write – and what to do about it

The usual evolution of English has been accelerated online, leading to a less formal – but arguably more expressive – language than the one we use IRL. So use those emojis wisely …

English has always evolved – that’s what it means to be a living language – and now the internet plays a pivotal role in driving this evolution. It’s where we talk most freely and naturally, and where we generally pay little heed to whether or not our grammar is “correct”.

Should we be concerned that, as a consequence, English is deteriorating? Is it changing at such a fast pace that older generations can’t keep up? Not quite. At a talk in 2013, linguist David Crystal, author of Internet Linguistics, said: “The vast majority of English is exactly the same today as it was 20 years ago.” And his collected data indicated that even e-communication isn’t wildly different: “Ninety per cent or so of the language you use in a text is standard English, or at least your local dialect.”

It’s why we can still read an 18th-century transcript of a speech George Washington gave to his troops and understand it in its entirety, and why grandparents don’t need a translator when sending an email to their grandchildren.

However, the way we communicate – the punctuation (or lack thereof), the syntax, the abbreviations we use – is dependent on context and the medium with which we are communicating. We don’t need to reconcile the casual way we talk in a text or on social media with, say, the way we string together sentences in a piece of journalism, because they’re different animals.

On Twitter, emojis and new-fangled uses of punctuation, for instance, open doors to more nuanced casual expression. For example, the ~quirky tilde pair~ or full. stops. in. between. words. for. emphasis. While you are unlikely to find a breezy caption written in all lowercase and without punctuation in the New York Times, you may well find one in a humorous post published on BuzzFeed.

As the author of the BuzzFeed Style Guide, I crafted a set of guidelines that were flexible and applicable to hard news stories, to the more lighthearted posts our platform publishes, such as comical lists and takes on celebrity goings-on, and to our social media posts. For instance, I decided, along with my team of copy editors, to include a rule that we should put emojis outside end punctuation, not inside, because the consensus was that it simply looks cleaner to end a sentence as you normally would and then use an emoji. Our style guide also has comprehensive sections on how to write appropriately about serious topics, such as sexual assault and suicide.

Language shifts and proliferates due to chance and external factors, such as the influence the internet has on slang and commonplace abbreviations. (I believe that “due to” and “because of” can be used interchangeably, because it’s the way we use those phrases in speech; using one rather than the other has no impact on clarity.) So while some of Strunk and White’s famous grammar and usage rules – for example, avoiding the passive voice, never ending a sentence with a preposition – are no longer valuable, it doesn’t mean we’re putting clarity at stake. Sure, there’s no need to hyphenate a modifying phrase that includes an adverb – as in, for example, “a successfully executed plan” – because adverbs by definition modify the words they precede, but putting a hyphen after “successfully” would be no cause for alarm. It’s still a perfectly understandable expression.

Writers and editors, after consulting their house style guide, should rely on their own judgment when faced with a grammar conundrum. Prescriptivism has the potential to make a piece of writing seem dated or stodgy. That doesn’t mean we need to pepper our prose with emojis or every slang word of the moment. It means that by observing the way we’re using words and applying those observations methodically, we increase our chances of connecting with our readers – prepositions at the end of sentences and all. Descriptivism FTW!


Springer Nature - PMC COVID-19 Collection

Cybercrime Victimization and Problematic Social Media Use: Findings from a Nationally Representative Panel Study

Eetu Marttila

Economic Sociology, Department of Social Research, University of Turku, Assistentinkatu 7, 20014 Turku, Finland

Aki Koivula

Pekka Räsänen

Associated Data

The survey data used in this study will be made available via the Finnish Social Science Data Archive (FSD, http://www.fsd.uta.fi/en/) after manuscript acceptance. The data are also available from the authors on scholarly request.

Analyses were run with Stata 16.1. The code is also available from the authors on request for replication purposes.

Abstract

According to criminological research, online environments create new possibilities for criminal activity and deviant behavior. Problematic social media use (PSMU) is a habitual pattern of excessive use of social media platforms. Past research has suggested that PSMU predicts risky online behavior and negative life outcomes, but the relationship between PSMU and cybercrime victimization is not properly understood. In this study, we use the framework of routine activity theory (RAT) and lifestyle-exposure theory (LET) to examine the relationship between PSMU and cybercrime victimization. We analyze how PSMU is linked to cybercrime victimization experiences, and we explore how PSMU predicts victimization, especially under those risky circumstances that generally increase the probability of victimization. Our data come from nationally representative surveys, collected in Finland in 2017 and 2019. The results of the between-subjects tests show that PSMU correlates relatively strongly with cybercrime victimization. Within-subjects analysis shows that increased PSMU increases the risk of victimization. Overall, the findings indicate that, along with various confounding factors, PSMU has a notable cumulative effect on victimization. The article concludes with a short summary and discussion of possible avenues for future research on PSMU and cybercrime victimization.

Introduction

In criminology, digital environments are generally understood as social spaces which open new possibilities for criminal activity and crime victimization (Yar, 2005 ). Over the past decade, social media platforms have established themselves as the basic digital infrastructure that governs daily interactions. The rapid and vast adaptation of social media technologies has produced concern about the possible negative effects, but the association between social media use and decreased wellbeing measures appears to be rather weak (Appel et al., 2020 ; Kross et al., 2020 ). Accordingly, researchers have proposed that the outcomes of social media use depend on the way platforms are used, and that the negative outcomes are concentrated among those who experience excessive social media use (Kross et al., 2020 ; Wheatley & Buglass, 2019 ). Whereas an extensive body of research has focused either on cybercrime victimization or on problematic social media use, few studies have focused explicitly on the link between problematic use and victimization experiences (e.g., Craig et al., 2020 ; Longobardi et al., 2020 ).

As per earlier research, the notion of problematic use is linked to excessive and uncontrollable social media usage, which is characterized by compulsive and routinized thoughts and behavior (e.g., Kuss & Griffiths, 2017 ). The most frequently used social scientific and criminological accounts of risk factors of victimization are based on routine activity theory (RAT) (Cohen & Felson, 1979 ) and lifestyle-exposure theory (LET) (Hindelang et al., 1978 ). Although RAT and LET were originally developed to understand how routines and lifestyle patterns may lead to victimization in physical spaces, they have been applied in online environments (e.g., Milani et al., 2020 ; Räsänen et al., 2016 ).

As theoretical frameworks, RAT and LET presume that lifestyles and routine activities are embedded in social contexts, which makes it possible to understand behaviors and processes that lead to victimization. The excessive use of social media platforms increases the time spent in digital environments, which, according to lifestyle and routine activities theories, tends to increase the likelihood of ending up in dangerous situations. Therefore, we presume that problematic use is a particularly dangerous pattern of use, which may increase the risk of cybercrime victimization.

In this study, we employ the key elements of RAT and LET to focus on the relationship between problematic social media use and cybercrime victimization. Our data come from high quality, two-wave longitudinal population surveys, which were collected in Finland in 2017 and 2019. First, we examine the cross-sectional relationship between problematic use and victimization experiences at Wave 1, considering the indirect effect of confounding factors. Second, we test for longitudinal effects by investigating whether increased problematic use predicts an increase in victimization experiences at Wave 2.

Literature Review

Problematic Social Media Use

Over the last few years, the literature on the psychological, cultural, and social effects of social media has proliferated. Prior research on the topic presents a nuanced view of social media and its consequences (Kross et al., 2020). For instance, several studies have demonstrated that social media use may produce positive outcomes, such as increased life satisfaction, social trust, and political participation (Kim & Kim, 2017; Valenzuela et al., 2009). The positive effects are typically explained to follow from use that satisfies individuals’ socioemotional needs, such as sharing emotions and receiving social support on social media platforms (Pang, 2018; Verduyn et al., 2017).

However, another line of research associates social media use with several negative effects, including higher stress levels, increased anxiety and lower self-esteem (Kross et al., 2020 ). Negative outcomes, such as depression (Shensa et al., 2017 ), decreased subjective well-being (Wheatley & Buglass, 2019 ) and increased loneliness (Meshi et al., 2020 ), are also commonly described in the research literature. The most common mechanisms that are used to explain negative outcomes of social media use are social comparison and fear of missing out (Kross et al., 2020 ). In general, it appears that the type of use that does not facilitate interpersonal connection is more detrimental to users’ health and well-being (Clark et al., 2018 ).

Even though the earlier research on the subject has produced somewhat contradictory results, the researchers generally agree that certain groups of users are at more risk of experiencing negative outcomes of social media use. More specifically, the researchers have pointed out that there is a group of individuals who have difficulty controlling the quantity and intensity of their use of social media platforms (Kuss & Griffiths, 2017 ). Consequently, new concepts, such as problematic social media use (Bányai et al., 2017 ) and social networking addiction (Griffiths et al., 2014 ) have been developed to assess excessive use. In this research, we utilize the concept of problematic social media use (PSMU), which is applied broadly in the literature. In contrast to evidence of social media use in general, PSMU consistently predicts negative outcomes in several domains of life, including decreased subjective well-being (Kross et al., 2013 ; Wheatley & Buglass, 2019 ), depression (Hussain & Griffiths, 2018 ), and loneliness (Marttila et al., 2021 ).

To our knowledge, few studies have focused explicitly on the relationship between PSMU and cybercrime victimization. One cross-national study of young people found that PSMU is consistently and strongly associated with cyberbullying victimization across countries (Craig et al., 2020), and a study of Spanish adolescents returned similar results (Martínez-Ferrer et al., 2018). A study of Italian adolescents found that an individual’s number of followers on Instagram was positively associated with experiences of cybervictimization (Longobardi et al., 2020). A clear limitation of these earlier studies is that they focused on adolescents and often dealt with cyberbullying or harassment. Therefore, the results are not straightforwardly generalizable to adult populations or to other forms of cybercrime victimization. Despite this, there are certain basic assumptions about cybercrime victimization that must be considered.

Cybercrime Victimization, Routine Activity, and Lifestyle-Exposure Theories

In criminology, the notion of cybercrime is used to refer to a variety of illegal activities that are performed in online networks and platforms through computers and other devices (Yar & Steinmetz, 2019 ). As a concept, cybercrime is employed in different levels of analysis and used to describe a plethora of criminal phenomena, ranging from individual-level victimization to large-scale, society-wide operations (Donalds & Osei-Bryson, 2019 ). In this study, we define cybercrime as illegal activity and harm to others conducted online, and we focus on self-reported experiences of cybercrime victimization. Therefore, we do not address whether respondents reported an actual crime victimization to the authorities.

In Finland and other European countries, the most common types of cybercrime include slander, hacking, malware, online fraud, and cyberbullying (see Europol, 2019 ; Meško, 2018 ). Providing exact estimates of cybercrime victims has been a challenge for previous criminological research, but 1 to 15 percent of the European population is estimated to have experienced some sort of cybercrime victimization (Reep-van den Bergh & Junger, 2018 ). Similarly, it is difficult to give a precise estimate of the prevalence of social media-related criminal activity. However, as a growing proportion of digital interactions are mediated by social media platforms, we can expect that cybercrime victimization on social media is also increasing. According to previous research, identity theft (Reyns et al., 2011 ), cyberbullying (Lowry et al., 2016 ), hate speech (Räsänen et al., 2016 ), and stalking (Marcum et al., 2017 ) are all regularly implemented on social media. Most of the preceding studies have focused on cybervictimization of teenagers and young adults, which are considered the most vulnerable population segments (e.g., Hawdon et al., 2017 ; Keipi et al.,  2016 ).

One of the most frequently used conceptual frameworks to explain victimization is routine activity theory (RAT) (Cohen & Felson, 1979 ). RAT claims that the everyday routines of social actors place individuals at risk for victimization by exposing them to dangerous people, places, and situations. The theory posits that a crime is more likely to occur when a motivated offender, a suitable target, and a lack of capable guardians converge in space and time (Cohen & Felson, 1979 ). RAT is similar to lifestyle-exposure theory (LET), which aims to understand the ways in which lifestyle patterns in the social context allow different forms of victimization (Hindelang et al., 1978 ).

In this study, we build our approach on combining RAT and LET in order to examine risk-enhancing behaviors and characteristics fostered by online environment. Together, these theories take the existence of motivated offenders for granted and therefore do not attempt to explain their involvement in crime. Instead, we concentrate on how routine activities and lifestyle patterns, together with the absence of a capable guardian, affect the probability of victimization.

Numerous studies have investigated the applicability of LET and RAT for cybercrime victimization (e.g., Holt & Bossler, 2008, 2014; Leukfeldt & Yar, 2016; Näsi et al., 2017; Vakhitova et al., 2016, 2019; Yar, 2005). The results indicate that different theoretical concepts are operationalizable to online environments to varying degrees, and that some operationalizations are more helpful than others (Näsi et al., 2017). For example, the concept of risk exposure is considered to be compatible with online victimization, even though earlier studies have shown a high level of variation in how risk exposure is measured (Vakhitova et al., 2016). By contrast, target attractiveness and lack of guardianship are generally considered to be more difficult to operationalize in the context of technology-mediated victimization (Leukfeldt & Yar, 2016).

In the next section, we take a closer look at how the key theoretical concepts of LET and RAT have been operationalized in earlier studies on cybervictimization. Here, we focus solely on factors that we can address empirically with our data. Each of these has successfully been applied to online environments in prior studies (e.g., Hawdon et al., 2017; Keipi et al., 2016).

Confounding Elements of Lifestyle and Routine Activities Theories and Cybercrime Victimization

Exposure to Risk

The first contextual component of RAT/LET addresses the general likelihood of experiencing risk situations. Risk exposure has typically been measured by the amount of time spent online or the quantity of different online activities – the hours spent online, the number of online accounts, the use of social media services (Hawdon et al., 2017 ; Vakhitova et al., 2019 ). The studies that have tested the association have returned mixed results, and it seems that simply the time spent online does not predict increased victimization (e.g., Ngo & Paternoster, 2011 ; Reyns et al., 2011 ). On the other hand, the use of social media platforms (Bossler et al., 2012 ; Räsänen et al., 2016 ) and the number of accounts in social networks are associated with increased victimization (Reyns et al., 2011 ).

Regarding the association between the risk of exposure and victimization experiences, previous research has suggested that specific online activities may increase the likelihood of cybervictimization. For example, interaction with other users is associated with increased victimization experiences, whereas passive use may protect from cybervictimization (Holt & Bossler, 2008 ; Ngo & Paternoster, 2011 ; Vakhitova et al., 2019 ). In addition, we assume that especially active social media use, such as connecting with new people, is a risk factor and should be taken into account by measuring the proximity to offenders in social media.

Proximity to Offenders

The second contextual component of RAT/LET is closeness to the possible perpetrators. Previously, proximity to offenders was typically measured by the amount of self-disclosure in online environments, such as the number of followers on social media platforms (Vakhitova et al., 2019). Again, earlier studies have returned inconsistent results, and proximity to offenders has mixed effects on the risk of victimization. For example, the number of online friends does not predict increased risk of cybercrime victimization (Näsi et al., 2017; Räsänen et al., 2016; Reyns et al., 2011). By contrast, a high number of social media followers (Longobardi et al., 2020) and online self-disclosures are associated with higher risk of victimization (Vakhitova et al., 2019).

As in the case of risk exposure, different operationalizations of proximity to offenders may predict victimization more strongly than others. For instance, compared to interacting with friends and family, contacting strangers online may be much riskier (Vakhitova et al., 2016). Earlier studies support this notion, and allowing strangers to acquire sensitive information about oneself, as well as frequent contact with strangers on social media, predicts increased risk of cybervictimization (Craig et al., 2020; Reyns et al., 2011). Also, compulsive online behavior is associated with a higher probability of meeting strangers online (Gámez-Guadix et al., 2016), and we assume that PSMU may be associated with victimization indirectly through contacting strangers.

Target Attractiveness

The third contextual element of RAT/LET considers the fact that victimization is more likely among those who share certain individual and behavioral traits. Such traits can be seen to increase attractiveness to offenders and thereby increase the likelihood of experiencing risk situations. Earlier studies on cybercrime victimization have utilized a wide selection of measures to operationalize target attractiveness, including gender and ethnic background (Näsi et al., 2017 ), browsing risky content (Räsänen et al., 2016 ), financial status (Leukfeldt & Yar, 2016 ) or relationship status, and sexual orientation (Reyns et al., 2011 ).

In general, these operationalizations do not seem to predict victimization reliably or effectively. Despite this, we suggest that certain operationalizations of target attractiveness may be valuable. Past research on the different uses of social media has suggested that provocative language or expressions of ideological points of view can increase victimization. More specifically, political activity is a typical behavioral trait that tends to provoke reactions in online discussions (e.g., Lutz & Hoffmann, 2017). In studies of cybervictimization, online political activity is associated with increased victimization (Vakhitova et al., 2019). Recent studies have also emphasized how social media has surfaced and even increased political polarization (van Dijk & Hacker, 2018).

In Finland, the main division has been drawn between the supporters of the populist right-wing party, the Finns, and the supporters of the Green League and the Left Alliance (Koiranen et al., 2020 ). However, it is noteworthy that Finland has a multi-party system based on socioeconomic cleavages represented by traditional parties, such as the Social Democratic Party of Finland, the National Coalition Party, and the Center Party (Koivula et al., 2020 ). Indeed, previous research has shown that there is relatively little affective polarization in Finland (Wagner, 2021 ). Therefore, in the Finnish context it is unlikely that individuals would experience large-scale victimization based on their party preference.

Lack of Guardianship

The fourth element of RAT/LET assesses the role of social and physical guardianship against harmful activity. Lack of guardianship is assumed to increase victimization and, conversely, the presence of capable guardianship to decrease the likelihood of victimization (Yar, 2005). In studies of online activities and routines, different measures of guardianship have rarely acted as predictors of victimization experiences (Leukfeldt & Yar, 2016; Vakhitova et al., 2016).

Regarding social guardianship, measures such as respondents’ digital skills and online risk awareness have been used, but with non-significant results (Leukfeldt & Yar, 2016 ). On the other hand, past research has indicated that victims of cyber abuse in general are less social than non-victims, which indicates that social networks may protect users from abuse online (Vakhitova et al., 2019 ). Also, younger users, females, and users with low educational qualifications are assumed to have weaker social guardianship against victimization and therefore are in more vulnerable positions (e.g., Keipi et al., 2016 ; Pratt & Turanovic, 2016 ).

In terms of physical guardianship, several technical measures, such as the use of firewalls and virus scanners, have been utilized in past research (Leukfeldt & Yar, 2016). In a general sense, technical security tools function as external settings in online interactions, much as street lighting increases the identifiability of an aggressor in the dark. Preceding studies, however, have found no significant connection between technical guardianship and victimization (Vakhitova et al., 2016). Consequently, we decided not to address technical guardianship in this study.

Based on the research discussed above, we stated the following two hypotheses:

  • H1: Increased PSMU is associated with increased cybercrime victimization.
  • H2: The association between PSMU and cybercrime victimization is confounded by factors assessing exposure to risk, proximity to offenders, target attractiveness, and lack of guardianship.

Research Design

Our aim was to analyze how problematic use of social media is linked to cybercrime victimization experiences. According to RAT and LET, cybercrime victimization relates to how individuals’ lifestyles expose them to circumstances that increase the probability of victimization (Hindelang et al., 1978 ) and how individuals behave in different risky environments (Engström, 2020 ). Our main premise is that PSMU exposes users more frequently to environments that increase the likelihood of victimization experiences.

We structured our research in two stages on the basis of a two-wave panel setting. In the first stage, we approached the relationship between PSMU and cybercrime victimization cross-sectionally, using a large and representative sample of the Finnish population aged 18–74. We also analyzed the extent to which the relationship between PSMU and cybercrime victimization was related to the confounders. In the second stage, we turned to longitudinal effects and tested panel effects, examining changes in cybercrime victimization in relation to changes in PSMU.

Participants

We utilized two-wave panel data derived from the first and second rounds of the Digital Age in Finland survey. The cross-sectional study was based on the first round of the survey, organized in December 2017, with a total of 3,724 Finnish respondents. In this sample, two-thirds of the respondents were randomly sampled from the Finnish population register, and one-third were supplemented from a demographically balanced online respondent pool organized by Taloustutkimus Inc. We analyzed social media users (N = 2,991), who accounted for 77% of the original sample. The data over-represented older citizens, which is why post-stratification weights were applied to correspond with the official population distribution of Finns aged 18–74 (Sivonen et al., 2019).

To form a longitudinal setting, respondents were asked whether they were willing to participate in the survey a second time about a year after the first data collection. A total of 1,708 participants expressed willingness to participate in the follow-up survey that was conducted 15 months after the first round, in March 2019. A total of 1,134 people participated in the follow-up survey, comprising a response rate of 67% in the second round.

The questionnaire was essentially the same in both rounds of data collection.

The final two-wave data used in the second stage of analysis mirrored the population in terms of gender (males 50.8%) and age (M = 49.9, SD = 16.2). However, the data were unrepresentative in terms of education and employment status when compared to the Finnish population: 44.5% of participants had attained tertiary education, and only 50.5% of respondents were employed. The data report published online provides a more detailed description of the data collection and its representativeness (Sivonen et al., 2019).

Measures

Our dependent variable measured whether the participants had been a target of cybercrime. Cybercrime was measured with five dichotomous questions inquiring whether the respondent had personally: 1) been targeted by a threat or attack on social media, 2) been falsely accused online, 3) been targeted with hateful or degrading material on the Internet, 4) experienced sexual harassment on social media, and 5) had an online account stolen. 1 In the first round, 159 respondents (14.0%) reported that they had been a victim of cybercrime. In the second round, the number of victimization experiences increased by about 6 percentage points, as 71 respondents had experienced victimization during the observation period.
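The coding of the outcome can be illustrated with a minimal sketch. The dictionary keys below are hypothetical variable names paraphrasing the five survey items; they are not the authors' actual dataset columns:

```python
# Toy illustration of the outcome coding (1 = yes, 0 = no per item).
respondents = [
    {"threat": 0, "false_accusation": 0, "hate": 0, "harassment": 0, "account_theft": 0},
    {"threat": 1, "false_accusation": 0, "hate": 1, "harassment": 0, "account_theft": 0},
    {"threat": 0, "false_accusation": 0, "hate": 0, "harassment": 0, "account_theft": 1},
]

# A respondent counts as a cybercrime victim if any of the five items is endorsed.
victim = [int(any(r.values())) for r in respondents]
print(victim)  # → [0, 1, 1]
```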

Our main independent variable was problematic social media use (PSMU). Participants’ problematic and excessive social media use was measured with an adaptation of the Compulsive Internet Use Scale (CIUS), which consists of 14 items rated on a 5-point Likert scale (Meerkerk et al., 2009). Our measure included five items on a 4-point scale scored from 1 (never) to 4 (daily) based on how often respondents: 1) “Have difficulties with stopping social media use,” 2) “Have been told by others you should use social media less,” 3) “Have left important work, school or family related things undone due to social media use,” 4) “Use social media to alleviate feeling bad or stress,” and 5) “Plan social media use beforehand.”

For our analysis, all five items were used to create a new three-level variable assessing respondents’ PSMU at different intensity levels. If the respondent experienced at least one of the signs of problematic use daily or weekly, PSMU was coded as at least weekly. Second, if the respondent experienced at least one of the signs of problematic use less than weekly, PSMU was coded as occasionally. Finally, if the respondent experienced no signs of problematic use, PSMU was coded as none.
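The recode can be sketched as follows (an illustration, not the authors' Stata code; we assume that a score of 3 on the 4-point scale corresponds to "weekly"):

```python
# Three-level PSMU recode from five CIUS-based items scored 1 (never) to 4 (daily).
def psmu_level(items):
    if any(score >= 3 for score in items):   # at least one sign weekly or daily
        return "at least weekly"
    if any(score >= 2 for score in items):   # some sign, but less than weekly
        return "occasionally"
    return "none"                            # no signs of problematic use

print(psmu_level([1, 1, 1, 1, 1]))  # → none
print(psmu_level([1, 2, 1, 2, 1]))  # → occasionally
print(psmu_level([4, 1, 3, 1, 1]))  # → at least weekly
```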

To find reliable estimates for the effects of PSMU, we controlled for general social media use, including respondents’ activity on social networking sites and instant messenger applications. We combined two items to create a new four-level variable measuring respondents’ social media use (SMU). If a respondent reported using social media platforms (e.g., Facebook, Twitter), instant messengers (e.g., WhatsApp, Facebook Messenger), or both for many hours per day, we coded their activity as high. We coded activity as medium if respondents reported using social media daily. Third, we coded activity as low for respondents who reported using social media only on a weekly basis. Finally, we considered activity very low if respondents reported using platforms or instant messengers less than weekly.

Confounding variables were related to participants’ target attractiveness, proximity to offenders, and potential guardianship factors.

Target attractiveness was measured by online political activity. Following previous studies (Koiranen et al., 2020; Koivula et al., 2019), we formed the variable from four single items: following political discussions, participating in political discussions, sharing political content, and creating political content. Participants’ activity was initially determined on a 5-point scale (1 = Never, 2 = Sometimes, 3 = Weekly, 4 = Daily, and 5 = Many times per day). For analysis purposes, we first separated “politically inactive” users, who reported never using social media for political activities. Second, we coded as “followers” participants who only followed but never participated in political discussions on social media. Third, we classified as “occasional participants” those who at least sometimes participated in political activities on social media. Finally, participants who used social media at least weekly to participate in political activities were classified as “active participants.”
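The four-category classification can be sketched as below. This is a hedged reconstruction from the description above; the function name and the grouping of the three participation items (discussing, sharing, creating) are our assumptions:

```python
# Four-category online political activity measure.
# Items are on a 1–5 scale (1 = Never ... 5 = Many times per day);
# a score of 3 or more means "at least weekly".
def political_activity(follow, discuss, share, create):
    participation = [discuss, share, create]
    if follow == 1 and all(p == 1 for p in participation):
        return "inactive"      # never uses social media politically
    if all(p == 1 for p in participation):
        return "follower"      # follows but never participates
    if any(p >= 3 for p in participation):
        return "active"        # participates at least weekly
    return "occasional"        # participates sometimes

print(political_activity(1, 1, 1, 1))  # → inactive
print(political_activity(3, 1, 1, 1))  # → follower
print(political_activity(3, 2, 1, 1))  # → occasional
print(political_activity(5, 4, 2, 1))  # → active
```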

Proximity to offenders was considered by analyzing contact with strangers on social media. The question asked the extent to which respondents were in contact with strangers on social media, evaluated on a 5-point interval scale from 1 (Not at all) to 5 (Very much). For the analysis, we merged response options 1 and 2 to form value 1, and options 4 and 5 to form value 3. Consequently, we used a three-level variable measuring respondents’ tendency to contact strangers on social media, in which 1 = Low, 2 = Medium, and 3 = High intensity.
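The collapsing of the 5-point item into three levels amounts to a simple mapping (an illustrative sketch of the recode described above):

```python
# Collapse the 5-point "contact with strangers" item into three levels:
# options 1-2 → 1 (Low), 3 → 2 (Medium), 4-5 → 3 (High).
recode = {1: 1, 2: 1, 3: 2, 4: 3, 5: 3}

responses = [1, 2, 3, 4, 5]
print([recode[r] for r in responses])  # → [1, 1, 2, 3, 3]
```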

Lack of guardianship was measured by gender, age, education, and main activity. Respondents’ gender (1 = Male, 2 = Female), age (in years), level of education, and main activity were recorded. Although these variables could also be placed under target attractiveness, we treated them as guardianship factors because the background characteristics they measure are often invisible in online environments and exist only in terms of expressed behavior (e.g., Keipi et al., 2016). For statistical analysis, we classified education and main activity into binary variables. Education was measured with a binary variable indicating whether the respondent had attained at least tertiary education. This dichotomization can be justified by the relatively high educational levels in Finland, where tertiary education is often considered the cut-off point between educated and non-educated citizens (Leinsalu et al., 2020). Main activity was measured with a binary variable differentiating unemployed respondents from others (working, retirees, and full-time students). In terms of guardianship, unemployed people are less likely to belong to the informal peer networks that arise at workplaces or educational establishments, a pattern that also holds for many senior citizens’ activities. Descriptive statistics for all measurements are provided in Table 1.

Descriptive statistics for the applied variables

Analytic Techniques

The analyses were performed in two stages with Stata 16. In the cross-sectional approach, we analyzed the direct and indirect associations between PSMU and cybercrime victimization. We report average marginal effects and their standard errors with statistical significances (Table 2). The main effect of PSMU is illustrated in Fig. 1, plotted with the user-written coefplot package (Jann, 2014).

The likelihood of cybercrime victimization according to confounding and control variables. Average marginal effects (AME) with standard errors estimated from the logit models

Standard errors in parentheses

*** p  < 0.001, ** p  < 0.01, * p  < 0.05


Likelihood of cybercrime victimization according to the level of problematic social media use. Predicted probabilities with 95% confidence intervals

When establishing the indirect effects, we used the KHB method developed by Karlson et al. (2012) and employed the khb command in Stata (Kohler et al., 2011). The KHB method decomposes the total effect of an independent variable into direct and indirect components via confounding or mediating variables (Karlson et al., 2012). Based on the decomposition analysis, we report logit coefficients for the total, direct, and indirect effects with statistical significances and confounding percentages (Table 3).

The decomposition of effect of PSMU on online victimization with respect to confounding factors. The logit coefficients estimated using the KHB method

In the second stage, we analyzed the panel effects. We used hybrid mixed models to distinguish between-person and within-person effects of time-varying factors and predicted changes in cybercrime victimization with respect to changes in problematic social media use. We also tested how the relationship between cybercrime victimization and other time-varying variables changed over the observation period. The hybrid models were estimated with the xthybrid command (Schunck & Perales, 2017).
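The between/within decomposition behind hybrid models can be sketched in a few lines: each time-varying predictor is split into a person-specific mean (the between component) and the deviation from that mean (the within component), and both enter the model as separate regressors. The toy panel below is a simplified illustration, not the xthybrid implementation:

```python
from collections import defaultdict

# Toy two-wave panel: person id, wave, and a time-varying predictor (PSMU level).
panel = [
    {"id": 1, "wave": 1, "psmu": 0},
    {"id": 1, "wave": 2, "psmu": 2},
    {"id": 2, "wave": 1, "psmu": 1},
    {"id": 2, "wave": 2, "psmu": 1},
]

# Between-person component: each person's mean PSMU over the waves.
scores = defaultdict(list)
for row in panel:
    scores[row["id"]].append(row["psmu"])
means = {pid: sum(v) / len(v) for pid, v in scores.items()}

# Within-person component: deviation of each observation from the person mean.
for row in panel:
    row["psmu_between"] = means[row["id"]]
    row["psmu_within"] = row["psmu"] - row["psmu_between"]

print([row["psmu_within"] for row in panel])  # → [-1.0, 1.0, 0.0, 0.0]
```

Person 1 increases between waves, so only person 1 has nonzero within-person variation; person 2's stable score contributes only to the between component.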

Results

The results for our first hypothesis are presented in Fig. 1. The likelihood of becoming a victim of cybercrime increased significantly as PSMU increased. Respondents who reported problematic use on a daily basis experienced cybercrime with a probability of more than 40%. The probability of becoming a victim was also high, 30%, if problematic use occurred weekly.

The models predicting cybercrime victimization are shown in Table 2. In the first model (M1), PSMU significantly predicted the risk of victimization even if a participant reported only occasional problematic use (AME 0.06; p < 0.001). If the respondent reported problematic use weekly (AME 0.17; p < 0.001) or daily (AME 0.33; p < 0.001), his or her probability of becoming a victim was considerably higher.

The next three models (M2–M4) were constructed on the basis of variables measuring risk exposure, proximity to offenders, and target attractiveness. The second model (M2) indicates that highly intensive social media use (AME 0.19, p < 0.001) was related to cybercrime victimization. The third model (M3) shows that those who reported a low intensity of meeting strangers online had a lower probability of being victimized (AME -0.11, p < 0.001), while those who reported a high intensity had a higher probability (AME 0.12, p < 0.05). Finally, the fourth model (M4) suggests that political activity was related to victimization: those who reported participating occasionally (AME 0.07, p < 0.01) or actively (AME 0.14, p < 0.001) had a higher probability of being victimized.

Next, we evaluated how different guardianship factors were related to victimization. The fifth model (M5) identified age, gender, and economic activity as significant protective factors. According to the results, older (AME -0.01, p < 0.001) and male (AME -0.04, p < 0.001) participants were less likely to be targets of cybercrime. Interestingly, neither higher education nor unemployment was related to victimization. Finally, the fifth model also shows that the effect of PSMU remained significant even after controlling for the confounding and control variables.

We decomposed the fifth model to determine how the different confounding and control variables affected the relationship between PSMU and victimization. The results of the decomposition analysis are shown in Table 3. First, the confounding factors jointly accounted for a significant share of the association between PSMU and victimization (B = 0.38, p < 0.001), yielding a confounding percentage of 58.7%. However, the direct effect of PSMU remained significant (B = 0.27, p < 0.001). Age was the most significant factor in the association between PSMU and victimization (B = 0.14; p < 0.001), explaining 36% of the total confounding percentage. Political activity was also a major contributing factor (B = 0.12, p < 0.001), explaining 31.2% of the total confounding percentage. The analysis also revealed that meeting strangers online significantly confounded the relationship between PSMU and victimization (B = 0.7, p < 0.001).
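The arithmetic of the KHB decomposition can be checked approximately from the rounded coefficients reported above (the total effect is the sum of the direct and indirect parts on the logit scale; small discrepancies from the published percentages are due to rounding):

```python
# Back-of-the-envelope check of the KHB decomposition using the rounded
# coefficients reported in the text.
indirect = 0.38          # part of the effect attributable to confounders
direct = 0.27            # effect of PSMU net of confounders
total = direct + indirect

confounding_pct = 100 * indirect / total
print(round(confounding_pct, 1))        # → 58.5 (reported: 58.7%)

# Contributions of single confounders to the indirect part:
print(round(100 * 0.14 / indirect, 1))  # age: → 36.8 (reported: 36%)
print(round(100 * 0.12 / indirect, 1))  # political activity: → 31.6 (reported: 31.2%)
```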

In the second stage, we examined the longitudinal effects of PSMU on cybercrime victimization using panel data from Finnish social media users. Because we focused on factors that vary over the short term, we also analyzed the temporal effects of SMU, contacting strangers online, and online political activity on victimization. Demographic factors that did not change over time, or whose change did not vary across individuals (such as age), were not considered in the second stage.

Table 4 shows the hybrid models, estimated for each variable separately. The within-person effects revealed that an increase in PSMU increased an individual’s probability of being victimized during the observation period (B = 0.77, p = 0.02). Moreover, the between-person effect of PSMU was significant (B = 2.00, p < 0.001), indicating that higher overall PSMU was related to a higher propensity to be victimized over the observation period.

Unadjusted logit coefficients of cybercrime victimization according to PSMU and confounding variables from hybrid generalized mixed models

Each variable modelled separately

We found no significant within-person effects for the other factors. However, the between-person effects indicated that SMU (B = 2.00, p < 0.001), a low intensity of meeting strangers online (B = -3.27, p < 0.001), and online political participation (B = 2.08, p < 0.001) differentiated individuals’ likelihood of being victimized.

Discussion

Over the last decade, social media has revolutionized the way people communicate and share information. As the everyday lives of individuals are increasingly mediated by social media technologies, some users may experience problems with excessive use. In prior studies, problematic use has been associated with many negative life outcomes, ranging from psychological disorders to economic consequences.

The main objective of this study was to determine whether PSMU is also linked to increased cybercrime victimization. First, we examined how PSMU associates with cybercrime victimization and hypothesized that increased PSMU associates with increased cybercrime victimization (H1). Our findings from the cross-sectional study indicated that PSMU is a notable predictor of victimization. In fact, daily reported problematic use increased the likelihood of cybercrime victimization by more than 30 percentage points. More specifically, the analysis showed that more than 40% of users who reported experiencing problematic use daily reported being victims of cybercrime, while those who never experienced problematic use had a probability of victimization of slightly over 10%.

We also examined how PSMU captures other risk factors contributing to cybercrime victimization. Here, we hypothesized that the association between PSMU and cybercrime victimization is confounded by exposure to risk, proximity to offenders, target attractiveness, and lack of guardianship (H2). The decomposition analysis indicated that confounding factors explained over 50 percent of the total effect of PSMU. A more detailed analysis showed that the association between PSMU and cybercrime victimization was related to respondents’ young age, online political activity, propensity to meet strangers online, and intensity of general social media use. This means that PSMU and victimization are linked to similar routine-activity and lifestyle factors that increase target attractiveness and proximity to offenders and reduce guardianship. Notably, the effect of PSMU remained significant even after controlling for the confounding factors.

In the longitudinal analysis, we confirmed the first hypothesis and found that increased PSMU was associated with increased cybercrime victimization in both the within- and between-subject analyses. The result indicated a clear link between problematic use and cybercrime experiences during the observation period: as problematic use increases, so does the individual’s likelihood of becoming a victim of cybercrime. At the same time, the between-subject analysis indicates that cybercrime experiences are generally more likely to increase among those who experience more problematic use. Interestingly, we could not find within-subject effects for the other factors. This means, for example, that individuals’ increased encounters with strangers or increased online political activity were not directly reflected in the likelihood of becoming a victim during the observation period. The between-subject analyses, however, indicated that an individual’s increased propensity to be victimized is related to higher levels of social media activity, meeting strangers online, and online political activity over time.

Our findings are consistent with those of preceding research pointing to the fact that cybervictimization is indeed a notable threat, especially to those already in vulnerable circumstances (Keipi et al., 2016 ). The probabilities of cybercrime risk vary in online interactional spaces, depending on the absence and presence of certain key components suggested in our theoretical framework. Despite the seriousness of our findings, recent statistics indicate that cybercrime victimization is still relatively rare in Finland. In 2020, seven percent of Finnish Internet users had experienced online harassment, and 13 percent reported experiencing unwelcome advances during the previous three months (OSF, 2020 ). However, both forms of cybercrime victimization are clearly more prevalent among younger people and those who use social media frequently.

Cybercrime is becoming an increasingly critical threat as social media use continues to spread throughout segments of the population. Certain online activities and routinized behaviors can be considered to be particularly risky and to increase the probability of cybercrime victimization. In our study, we have identified problematic social media use as a specific behavioral pattern or lifestyle that predicts increased risk of becoming a victim of cybercrime.

Although the overall approach of our study was straightforward, the original theoretical concepts are ambiguously defined, and alternative meanings have been attached to them. Consequently, our empirical operationalization of the concepts is not fully in line with some studies examining the premises of the RAT/LET framework. Indeed, a variety of empirical measures have been employed to address the basic elements associated with victimization risk (e.g., Hawdon et al., 2017; Pratt & Turanovic, 2016). In our investigation, we focused on selected online activities and key socio-demographic background factors.

Similarly, we need to be cautious when discussing the implications of our findings. First, our study deals with one country alone, which means that the findings cannot be generalized beyond Finland or beyond the timeline 2017 to 2019. This means that our findings may not be applicable to the highly specific time of the COVID-19 pandemic when online activities have become more versatile than ever before. In addition, although our sample was originally drawn from the national census database, some response bias probably exists in the final samples. Future research should use longitudinal data that better represent, for example, different socio-economic groups. We also acknowledge that we did not control for the effect of offline social relations on the probability of cybercrime risk. Despite these limitations, we believe our study has significance for contemporary cybercrime research.

Our study shows that PSMU heightens the risk of cybercrime victimization. Future research should continue to identify the specific activities that comprise “dangerous” lifestyles online, which may vary from one population group to another. In online settings, a variety of situations and circumstances are applicable to different forms of cybercrime. For instance, a lack of basic cybersecurity skills may operate as a risk factor much like PSMU.

In general, our findings contribute to the assumption that online and offline victimization should not necessarily be considered distinct phenomena. Therefore, our theoretical framework, based on RAT and LET, seems highly justified. Our observations contribute to an increasing body of research that demonstrates how routine activities and lifestyle patterns of individuals can be applied to crimes committed in the physical world, as well as to crimes occurring in cyberspace.

Biographies

is a PhD student at the Unit of Economic Sociology, University of Turku, Finland. Marttila is interested in the use of digital technologies, risks, and well-being.

is a University Lecturer at the Unit of Economic Sociology, University of Turku, Finland. Koivula’s research deals with political preferences, consumer behavior and use of online platforms.

is Professor of Economic Sociology at University of Turku, Finland. His current research interests are in digital inequalities and online hate speech in platform economy.

Open Access funding provided by University of Turku (UTU) including Turku University Central Hospital. This study was funded by the Strategic Research Council of the Academy of Finland (decision number 314171).

Data Availability

Code Availability

Declarations

The authors declare no conflicts of interest.

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

2) Have you been falsely accused online?

3) Have you been targeted with hateful or degrading material on the Internet?

4) Have you experienced sexual harassment on social media?

5) Has your online account been stolen or a new account made with your name without your permission?

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Appel, M., Marker, C., & Gnambs, T. (2020). Are social media ruining our lives? A review of meta-analytic evidence. Review of General Psychology, 24(1), 60–74. https://doi.org/10.1177/1089268019880891
  • Bányai, F., Zsila, Á., Király, O., Maraz, A., Elekes, Z., Griffiths, M. D., et al. (2017). Problematic social media use: Results from a large-scale nationally representative adolescent sample. PLoS ONE, 12(1). https://doi.org/10.1371/journal.pone.0169839
  • Bossler, A. M., Holt, T. J., & May, D. C. (2012). Predicting online harassment victimization among a juvenile population. Youth & Society, 44(4), 500–523. https://doi.org/10.1177/0044118X11407525
  • Clark, J. L., Algoe, S. B., & Green, M. C. (2018). Social network sites and well-being: The role of social connection. Current Directions in Psychological Science, 9, 44–49. https://doi.org/10.1016/j.copsyc.2015.10.006
  • Cohen, L. E., & Felson, M. (1979). Social change and crime rate trends: A routine activity approach. American Sociological Review, 44(4), 588–608. https://doi.org/10.2307/2094589
  • Craig, W., Boniel-Nissim, M., King, N., Walsh, S. D., Boer, M., Donnelly, P. D., et al. (2020). Social media use and cyber-bullying: A cross-national analysis of young people in 42 countries. Journal of Adolescent Health, 66(6), S100–S108. https://doi.org/10.1016/j.jadohealth.2020.03.006
  • Donalds, C., & Osei-Bryson, K. M. (2019). Toward a cybercrime classification ontology: A knowledge-based approach. Computers in Human Behavior, 92, 403–418. https://doi.org/10.1016/j.chb.2018.11.039
  • Engström, A. (2020). Conceptualizing lifestyle and routine activities in the early 21st century: A systematic review of self-report measures in studies on direct-contact offenses in young populations. Crime & Delinquency, 67(5), 737–782. https://doi.org/10.1177/0011128720937640
  • Europol (2019). European Union serious and organised crime threat assessment. Online document, available at: https://ec.europa.eu/home-affairs/what-we-do/policies/cybercrime_en
  • Gámez-Guadix, M., Borrajo, E., & Almendros, C. (2016). Risky online behaviors among adolescents: Longitudinal relations among problematic Internet use, cyberbullying perpetration, and meeting strangers online. Journal of Behavioral Addictions, 5(1), 100–107. https://doi.org/10.1556/2006.5.2016.013
  • Griffiths, M. D., Kuss, D. J., & Demetrovics, Z. (2014). Social networking addiction: An overview of preliminary findings. In K. P. Rosenberg & L. C. Feder (Eds.), Behavioral addictions: Criteria, evidence, and treatment (pp. 119–141). Academic Press. https://doi.org/10.1016/B978-0-12-407724-9.00006-9
  • Hawdon, J., Oksanen, A., & Räsänen, P. (2017). Exposure to online hate in four nations: A cross-national consideration. Deviant Behavior, 38(3), 254–266. https://doi.org/10.1080/01639625.2016.1196985
  • Hindelang, M. J., Gottfredson, M. R., & Garofalo, J. (1978). Victims of personal crime: An empirical foundation for a theory of personal victimization. Ballinger Publishing Co.
  • Holt, T. J., & Bossler, A. M. (2008). Examining the applicability of lifestyle-routine activities theory for cybercrime victimization. Deviant Behavior, 30(1), 1–25. https://doi.org/10.1080/01639620701876577
  • Holt, T. J., & Bossler, A. M. (2014). An assessment of the current state of cybercrime scholarship. Deviant Behavior, 35(1), 20–40. https://doi.org/10.1080/01639625.2013.822209
  • Hussain, Z., & Griffiths, M. D. (2018). Problematic social networking site use and comorbid psychiatric disorders: A systematic review of recent large-scale studies. Frontiers in Psychiatry, 9(686). https://doi.org/10.3389/fpsyt.2018.00686
  • Jann, B. (2014). Plotting regression coefficients and other estimates. The Stata Journal, 14(4), 708–737. https://doi.org/10.1177/1536867X1401400402
  • Karlson, K. B., Holm, A., & Breen, R. (2012). Comparing regression coefficients between same-sample nested models using logit and probit: A new method. Sociological Methodology, 42(1), 286–313. https://doi.org/10.1177/0081175012444861
  • Keipi, T., Näsi, M., Oksanen, A., & Räsänen, P. (2016). Online hate and harmful content: Cross-national perspectives. Taylor & Francis. http://library.oapen.org/handle/20.500.12657/22350
  • Kim, B., & Kim, Y. (2017). College students’ social media use and communication network heterogeneity: Implications for social capital and subjective well-being. Computers in Human Behavior, 73, 620–628. https://doi.org/10.1016/j.chb.2017.03.033
  • Kohler, U., Karlson, K. B., & Holm, A. (2011). Comparing coefficients of nested nonlinear probability models. The Stata Journal, 11(3), 420–438. https://doi.org/10.1177/1536867X1101100306
  • Koivula, A., Kaakinen, M., Oksanen, A., & Räsänen, P. (2019). The role of political activity in the formation of online identity bubbles. Policy & Internet, 11(4), 396–417. https://doi.org/10.1002/poi3.211
  • Koivula, A., Koiranen, I., Saarinen, A., & Keipi, T. (2020). Social and ideological representativeness: A comparison of political party members and supporters in Finland after the realignment of major parties. Party Politics, 26(6), 807–821. https://doi.org/10.1177/1354068818819243
  • Koiranen, I., Koivula, A., Saarinen, A., & Keipi, T. (2020). Ideological motives, digital divides, and political polarization: How do political party preference and values correspond with the political use of social media? Telematics and Informatics, 46, 101322. https://doi.org/10.1016/j.tele.2019.101322
  • Kross, E., Verduyn, P., Demiralp, E., Park, J., Lee, D. S., Lin, N., et al. (2013). Facebook use predicts declines in subjective well-being in young adults. PLoS ONE, 8(8), e69841. https://doi.org/10.1371/journal.pone.0069841
  • Kross, E., Verduyn, P., Sheppes, G., Costello, C. K., Jonides, J., & Ybarra, O. (2020). Social media and well-being: Pitfalls, progress, and next steps. Trends in Cognitive Sciences, 25(1), 55–66. https://doi.org/10.1016/j.tics.2020.10.005
  • Kuss, D., & Griffiths, M. (2017). Social networking sites and addiction: Ten lessons learned. International Journal of Environmental Research and Public Health, 14(3), 311. https://doi.org/10.3390/ijerph14030311
  • Leinsalu, M., Baburin, A., Jasilionis, D., Krumins, J., Martikainen, P., & Stickley, A. (2020). Economic fluctuations and urban-rural differences in educational inequalities in mortality in the Baltic countries and Finland in 2000–2015: A register-based study. International Journal for Equity in Health, 19(1), 1–6. https://doi.org/10.1186/s12939-020-01347-5
  • Leukfeldt, E. R., & Yar, M. (2016). Applying routine activity theory to cybercrime: A theoretical and empirical analysis. Deviant Behavior, 37(3), 263–280. https://doi.org/10.1080/01639625.2015.1012409
  • Longobardi C, Settanni M, Fabris MA, Marengo D. Follow or be followed: Exploring the links between Instagram popularity, social media addiction, cyber victimization, and subjective happiness in Italian adolescents. Children and Youth Services Review. 2020; 113 :104955. doi: 10.1016/j.childyouth.2020.104955. [ CrossRef ] [ Google Scholar ]
  • Lowry PB, Zhang J, Wang C, Siponen M. Why do adults engage in cyberbullying on social media? An integration of online disinhibition and deindividuation effects with the social structure and social learning model. Information Systems Research. 2016; 27 (4):962–986. doi: 10.1287/isre.2016.0671. [ CrossRef ] [ Google Scholar ]
  • Lutz C, Hoffmann CP. The dark side of online participation: Exploring non-, passive and negative participation. Information, Communication & Society. 2017; 20 (6):876–897. doi: 10.1080/1369118X.2017.1293129. [ CrossRef ] [ Google Scholar ]
  • Marcum CD, Higgins GE, Nicholson J. I’m watching you: Cyberstalking behaviors of university students in romantic relationships. American Journal of Criminal Justice. 2017; 42 (2):373–388. doi: 10.1007/s12103-016-9358-2. [ CrossRef ] [ Google Scholar ]
  • Martínez-Ferrer B, Moreno D, Musitu G. Are adolescents engaged in the problematic use of social networking sites more involved in peer aggression and victimization? Frontiers in Psychology. 2018; 9 :801. doi: 10.3389/fpsyg.2018.00801. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Marttila E, Koivula A, Räsänen P. Does excessive social media use decrease subjective well-being? A longitudinal analysis of the relationship between problematic use, loneliness and life satisfaction. Telematics and Informatics. 2021; 59 :101556. doi: 10.1016/j.tele.2020.101556. [ CrossRef ] [ Google Scholar ]
  • Meerkerk GJ, Van Den Eijnden RJJM, Vermulst AA, Garretsen HFL. The Compulsive Internet Use Scale (CIUS): Some psychometric properties. Cyberpsychology and Behavior. 2009; 12 (1):1–6. doi: 10.1089/cpb.2008.0181. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Meshi D, Cotten SR, Bender AR. Problematic social media use and perceived social isolation in older adults: A cross-sectional study. Gerontology. 2020; 66 (2):160–168. doi: 10.1159/000502577. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Meško G. On some aspects of cybercrime and cybervictimization. European Journal of Crime, Criminal Law and Criminal Justice. 2018; 26 (3):189–199. doi: 10.1163/15718174-02603006. [ CrossRef ] [ Google Scholar ]
  • Milani R, Caneppele S, Burkhardt C. Exposure to cyber victimization: Results from a Swiss survey. Deviant Behavior. 2020 doi: 10.1080/01639625.2020.1806453. [ CrossRef ] [ Google Scholar ]
  • Näsi M, Räsänen P, Kaakinen M, Keipi T, Oksanen A. Do routine activities help predict young adults’ online harassment: A multi-nation study. Criminology and Criminal Justice. 2017; 17 (4):418–432. doi: 10.1177/1748895816679866. [ CrossRef ] [ Google Scholar ]
  • Ngo FT, Paternoster R. Cybercrime victimization: An examination of individual and situational level factors. International Journal of Cyber Criminology. 2011; 5 (1):773–793. [ Google Scholar ]
  • Official Statistics of Finland (OSF) (2020). Väestön tieto- ja viestintätekniikan käyttö [online document]. ISSN=2341–8699. 2020, Liitetaulukko 29. Vihamielisten viestien näkeminen, häirinnän kokeminen ja epäasiallisen lähestymisen kohteeksi joutuminen sosiaalisessa mediassa 2020, %-osuus väestöstä. Helsinki: Tilastokeskus. Available at: http://www.stat.fi/til/sutivi/2020/sutivi_2020_2020-11-10_tau_029_fi.html
  • Pang H. How does time spent on WeChat bolster subjective well-being through social integration and social capital? Telematics and Informatics. 2018; 35 (8):2147–2156. doi: 10.1016/j.tele.2018.07.015. [ CrossRef ] [ Google Scholar ]
  • Pratt TC, Turanovic JJ. Lifestyle and routine activity theories revisited: The importance of “risk” to the study of victimization. Victims & Offenders. 2016; 11 (3):335–354. doi: 10.1080/15564886.2015.1057351. [ CrossRef ] [ Google Scholar ]
  • Reep-van den Bergh CMM, Junger M. Victims of cybercrime in Europe: A review of victim surveys. Crime Science. 2018; 7 (1):1–15. doi: 10.1186/s40163-018-0079-3. [ CrossRef ] [ Google Scholar ]
  • Reyns BW, Henson B, Fisher BS. Being pursued online. Criminal Justice and Behavior. 2011; 38 (11):1149–1169. doi: 10.1177/0093854811421448. [ CrossRef ] [ Google Scholar ]
  • Räsänen P, Hawdon J, Holkeri E, Keipi T, Näsi M, Oksanen A. Targets of online hate: Examining determinants of victimization among young Finnish Facebook users. Violence and Victims. 2016; 31 (4):708–725. doi: 10.1891/0886-6708.vv-d-14-00079. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schunck, R., & Perales, F. (2017). Within- and between-cluster effects in generalized linear mixed models: A discussion of approaches and the xthybrid command. The Stata Journal , 17(1), 89–115. 10.1177%2F1536867X1701700106
  • Shensa A, Escobar-Viera CG, Sidani JE, Bowman ND, Marshal MP, Primack BA. Problematic social media use and depressive symptoms among U.S. young adults: A nationally-representative study. Social Science and Medicine. 2017; 182 :150–157. doi: 10.1016/j.socscimed.2017.03.061. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sivonen, J., Kuusela, A., Koivula, A., Saarinen, A., & Keipi, T. (2019). Working papers in economic sociology: Research Report on Finland in the Digital Age Round 2 Panel-survey . Turku.
  • Wagner M. Affective polarization in multiparty systems. Electoral Studies. 2021; 69 :102199. doi: 10.1016/j.electstud.2020.102199. [ CrossRef ] [ Google Scholar ]
  • Vakhitova ZI, Alston-Knox CL, Reynald DM, Townsley MK, Webster JL. Lifestyles and routine activities: Do they enable different types of cyber abuse? Computers in Human Behavior. 2019; 101 :225–237. doi: 10.1016/j.chb.2019.07.012. [ CrossRef ] [ Google Scholar ]
  • Vakhitova ZI, Reynald DM, Townsley M. Toward the adaptation of routine activity and lifestyle exposure theories to account for cyber abuse victimization. Journal of Contemporary Criminal Justice. 2016; 32 (2):169–188. doi: 10.1177/1043986215621379. [ CrossRef ] [ Google Scholar ]
  • Valenzuela S, Park N, Kee KF. Is there social capital in a social network site?: Facebook use and college student’s life satisfaction, trust, and participation. Journal of Computer-Mediated Communication. 2009; 14 (4):875–901. doi: 10.1111/j.1083-6101.2009.01474.x. [ CrossRef ] [ Google Scholar ]
  • Van Dijk JA, Hacker KL. Internet and democracy in the network society. Routledge. 2018 doi: 10.4324/9781351110716. [ CrossRef ] [ Google Scholar ]
  • Verduyn P, Ybarra O, Résibois M, Jonides J, Kross E. Do social network sites enhance or undermine subjective well-being? A critical review. Social Issues and Policy Review. 2017; 11 (1):274–302. doi: 10.1111/sipr.12033. [ CrossRef ] [ Google Scholar ]
  • Wheatley D, Buglass SL. Social network engagement and subjective well-being: A life-course perspective. The British Journal of Sociology. 2019; 70 (5):1971–1995. doi: 10.1111/1468-4446.12644. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Yar M. The novelty of ‘Cybercrime’ European Journal of Criminology. 2005; 2 (4):407–427. doi: 10.1177/147737080556056. [ CrossRef ] [ Google Scholar ]
  • Yar, M., & Steinmetz, K. F. (2019). Cybercrime and society . SAGE Publications Limited.