Social Media - The Good, the Bad, & the Ugly ...




Executive Order on Preventing Online Censorship

Issued on: May 28, 2020

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:

Section 1. Policy. Free speech is the bedrock of American democracy. Our Founding Fathers protected this sacred right with the First Amendment to the Constitution. The freedom to express and debate ideas is the foundation for all of our rights as a free people.

In a country that has long cherished the freedom of expression, we cannot allow a limited number of online platforms to hand pick the speech that Americans may access and convey on the internet. This practice is fundamentally un-American and anti-democratic. When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power. They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.

The growth of online platforms in recent years raises important questions about applying the ideals of the First Amendment to modern communications technology. Today, many Americans follow the news, stay in touch with friends and family, and share their views on current events through social media and other online platforms. As a result, these platforms function in many ways as a 21st century equivalent of the public square.

Twitter, Facebook, Instagram, and YouTube wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete, or disappear information; and to control what people see or do not see.

As President, I have made clear my commitment to free and open debate on the internet. Such debate is just as important online as it is in our universities, our town halls, and our homes. It is essential to sustaining our democracy.

Online platforms are engaging in selective censorship that is harming our national discourse. Tens of thousands of Americans have reported, among other troubling behaviors, online platforms “flagging” content as inappropriate, even though it does not violate any stated terms of service; making unannounced and unexplained changes to company policies that have the effect of disfavoring certain viewpoints; and deleting content and entire accounts with no warning, no rationale, and no recourse.

Twitter now selectively decides to place a warning label on certain tweets in a manner that clearly reflects political bias. As has been reported, Twitter seems never to have placed such a label on another politician’s tweet. As recently as last week, Representative Adam Schiff was continuing to mislead his followers by peddling the long-disproved Russian Collusion Hoax, and Twitter did not flag those tweets. Unsurprisingly, its officer in charge of so-called ‘Site Integrity’ has flaunted his political bias in his own tweets.

At the same time online platforms are invoking inconsistent, irrational, and groundless justifications to censor or otherwise restrict Americans’ speech here at home, several online platforms are profiting from and promoting the aggression and disinformation spread by foreign governments like China. One United States company, for example, created a search engine for the Chinese Communist Party that would have blacklisted searches for “human rights,” hid data unfavorable to the Chinese Communist Party, and tracked users determined appropriate for surveillance. It also established research partnerships in China that provide direct benefits to the Chinese military. Other companies have accepted advertisements paid for by the Chinese government that spread false information about China’s mass imprisonment of religious minorities, thereby enabling these abuses of human rights. They have also amplified China’s propaganda abroad, including by allowing Chinese government officials to use their platforms to spread misinformation regarding the origins of the COVID-19 pandemic, and to undermine pro-democracy protests in Hong Kong.

As a Nation, we must foster and protect diverse viewpoints in today’s digital communications environment where all Americans can and should have a voice. We must seek transparency and accountability from online platforms, and encourage standards and tools to protect and preserve the integrity and openness of American discourse and freedom of expression.

Sec. 2. Protections Against Online Censorship. (a) It is the policy of the United States to foster clear ground rules promoting free and open debate on the internet. Prominent among the ground rules governing that debate is the immunity from liability created by section 230(c) of the Communications Decency Act (section 230(c)). 47 U.S.C. 230(c). It is the policy of the United States that the scope of that immunity should be clarified: the immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.

Section 230(c) was designed to address early court decisions holding that, if an online platform restricted access to some content posted by others, it would thereby become a “publisher” of all the content posted on its site for purposes of torts such as defamation. As the title of section 230(c) makes clear, the provision provides limited liability “protection” to a provider of an interactive computer service (such as an online platform) that engages in “‘Good Samaritan’ blocking” of harmful content. In particular, the Congress sought to provide protections for online platforms that attempted to protect minors from harmful content and intended to ensure that such providers would not be discouraged from taking down harmful material. The provision was also intended to further the express vision of the Congress that the internet is a “forum for a true diversity of political discourse.” 47 U.S.C. 230(a)(3). The limited protections provided by the statute should be construed with these purposes in mind.

In particular, subparagraph (c)(2) expressly addresses protections from “civil liability” and specifies that an interactive computer service provider may not be made liable “on account of” its decision in “good faith” to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.” It is the policy of the United States to ensure that, to the maximum extent permissible under the law, this provision is not distorted to provide liability protection for online platforms that — far from acting in “good faith” to remove objectionable content — instead engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree. Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike. When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

(b) To advance the policy described in subsection (a) of this section, all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard. In addition, within 60 days of the date of this order, the Secretary of Commerce (Secretary), in consultation with the Attorney General, and acting through the National Telecommunications and Information Administration (NTIA), shall file a petition for rulemaking with the Federal Communications Commission (FCC) requesting that the FCC expeditiously propose regulations to clarify:

(i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher or speaker for making third-party content available and does not address the provider’s responsibility for its own editorial decisions;

(ii) the conditions under which an action restricting access to or availability of material is not “taken in good faith” within the meaning of subparagraph (c)(2)(A) of section 230, particularly whether actions can be “taken in good faith” if they are:

(A) deceptive, pretextual, or inconsistent with a provider’s terms of service; or

(B) taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard; and

(iii) any other proposed regulations that the NTIA concludes may be appropriate to advance the policy described in subsection (a) of this section.

Sec. 3. Protecting Federal Taxpayer Dollars from Financing Online Platforms That Restrict Free Speech. (a) The head of each executive department and agency (agency) shall review its agency’s Federal spending on advertising and marketing paid to online platforms. Such review shall include the amount of money spent, the online platforms that receive Federal dollars, and the statutory authorities available to restrict their receipt of advertising dollars.

(b) Within 30 days of the date of this order, the head of each agency shall report its findings to the Director of the Office of Management and Budget.

(c) The Department of Justice shall review the viewpoint-based speech restrictions imposed by each online platform identified in the report described in subsection (b) of this section and assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.

Sec. 4. Federal Review of Unfair or Deceptive Acts or Practices. (a) It is the policy of the United States that large online platforms, such as Twitter and Facebook, as the critical means of promoting the free flow of speech and ideas today, should not restrict protected speech. The Supreme Court has noted that social media sites, as the modern public square, “can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.” Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017). Communication through these channels has become important for meaningful participation in American democracy, including to petition elected leaders. These sites are providing an important forum to the public for others to engage in free expression and debate. Cf. PruneYard Shopping Center v. Robins, 447 U.S. 74, 85-89 (1980).

(b) In May of 2019, the White House launched a Tech Bias Reporting tool to allow Americans to report incidents of online censorship. In just weeks, the White House received over 16,000 complaints of online platforms censoring or otherwise taking action against users based on their political viewpoints. The White House will submit such complaints received to the Department of Justice and the Federal Trade Commission (FTC).

(c) The FTC shall consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices in or affecting commerce, pursuant to section 45 of title 15, United States Code. Such unfair or deceptive acts or practices may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.

(d) For large online platforms that are vast arenas for public debate, including the social media platform Twitter, the FTC shall also, consistent with its legal authority, consider whether complaints allege violations of law that implicate the policies set forth in section 4(a) of this order. The FTC shall consider developing a report describing such complaints and making the report publicly available, consistent with applicable law.

Sec. 5. State Review of Unfair or Deceptive Acts or Practices and Anti-Discrimination Laws. (a) The Attorney General shall establish a working group regarding the potential enforcement of State statutes that prohibit online platforms from engaging in unfair or deceptive acts or practices. The working group shall also develop model legislation for consideration by legislatures in States where existing statutes do not protect Americans from such unfair and deceptive acts and practices. The working group shall invite State Attorneys General for discussion and consultation, as appropriate and consistent with applicable law.

(b) Complaints described in section 4(b) of this order will be shared with the working group, consistent with applicable law. The working group shall also collect publicly available information regarding the following:

(i) increased scrutiny of users based on the other users they choose to follow, or their interactions with other users;

(ii) algorithms to suppress content or users based on indications of political alignment or viewpoint;

(iii) differential policies allowing for otherwise impermissible behavior, when committed by accounts associated with the Chinese Communist Party or other anti-democratic associations or governments;

(iv) reliance on third-party entities, including contractors, media organizations, and individuals, with indicia of bias to review content; and

(v) acts that limit the ability of users with particular viewpoints to earn money on the platform compared with other users similarly situated.

Sec. 6. Legislation. The Attorney General shall develop a proposal for Federal legislation that would be useful to promote the policy objectives of this order.

Sec. 7. Definition. For purposes of this order, the term “online platform” means any website or application that allows users to create and share content or engage in social networking, or any general search engine.

Sec. 8. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:

(i) the authority granted by law to an executive department or agency, or the head thereof; or

(ii) the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.

(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.

(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

Source: https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/
 
The MESO forum exists, in its current form, in large part thanks to Section 230 of the 1996 Communications Decency Act...

The Complex Debate Over Silicon Valley’s Embrace of Content Moderation
Many in tech cheered when Twitter added labels to President Trump’s tweets. But civil libertarians caution that social media companies are moving into uncharted waters.

In the midst of this notable shift, some civil libertarians are raising a concern in an already complicated debate: any move to moderate content more proactively could eventually be used against speech loved by the very people now calling for intervention.

[...]
Civil libertarians caution that adding warning labels or additional context to posts raises a range of issues — issues that tech companies until recently had wanted to avoid. New rules often backfire. Fact checks and context, no matter how sober or accurate they are, can be perceived as politically biased. More proactive moderation by the platforms could threaten their special protected legal status. And intervention goes against the apolitical self-image that some in the tech world have.

[...]

Section 230 of the federal Communications Decency Act, passed in 1996, shields tech platforms from being held liable for the third-party content that circulates on them. But taking a firmer hand to what appears on their platforms could endanger that protection, most of all for political reasons.

One of the few things that Democrats and Republicans in Washington agree on is that changes to Section 230 are on the table. Mr. Trump issued an executive order calling for changes to it after Twitter added labels to some of his tweets. Former Vice President Joseph R. Biden Jr., the presumptive Democratic presidential nominee, has also called for changes to Section 230.

“You repeal this and then we’re in a different world,” said Josh Blackman, a constitutional law professor at the South Texas College of Law Houston. “Once you repeal Section 230, you’re now left with 51 imperfect solutions.”

[Image: section-230-cda.jpg]

Source: The Complex Debate Over Silicon Valley’s Embrace of Content Moderation
 
Two Different Proposals to Amend Section 230 Share A Similar Goal: Damage Online Users’ Speech

Whether we know it or not, all Internet users rely on multiple online services to connect, engage, and express themselves online. That means we also rely on 47 U.S.C. § 230 (“Section 230”), which provides important legal protections when platforms offer their services to the public and when they moderate the content that relies on those services, from the proverbial cat video to an incendiary blog post.

Section 230 is an essential legal pillar for online speech. And when powerful people don’t like that speech, or the platforms that host it, the provision becomes a scapegoat for just about every tech-related problem. Over the past few years, those attacks have accelerated; on Wednesday, we saw two of the most dangerous proposals yet: one from the Department of Justice, and the other from Sen. Josh Hawley.

The proposals take different approaches, but they both seek to create new legal regimes that will allow public officials or private individuals to bury platforms in litigation simply because they do not like how those platforms offer their services. Basic activities like offering encryption, or editing, removing, or otherwise moderating users’ content could lead to years of legal costs and liability risk. That’s bad for platforms—and for the rest of us.

DOJ’s Proposal Attacks Encryption and Would Make Everyone’s Internet Experience Less Secure

The Department of Justice’s Section 230 proposal harms Internet users and gives the Attorney General more weapons to retaliate against online services he dislikes. It proposes four categories of reform to Section 230.

First, it claims that platforms need greater incentive to remove illegal user-generated content and proposes that Section 230 should not apply to what it calls “Bad Samaritans.” Platforms that knowingly host illegal material or content that a court has ruled is illegal would lose protections from civil liability, including for hosting material depicting terrorism or cyber-stalking. The proposal also mirrors the EARN IT Act by attacking encryption: it conditions 230 immunity on whether the service maintains “the ability to assist government authorities to obtain content (i.e., evidence) in a comprehensible, readable, and usable format pursuant to court authorization (or any other lawful basis).”

Second, it would allow the DOJ and other federal agencies to initiate civil enforcement actions against online services that they believe are hosting illegal content.

Third, the proposal seeks to “clarify that federal antitrust claims are not covered by Section 230 immunity.”

Finally, the proposal eliminates key language from Section 230 that gives online services the discretion to remove content they deem to be objectionable and defines the statute’s “good faith” standard to require platforms to explain all of their decisions to moderate users’ content.

The DOJ’s proposal would eviscerate Section 230’s protections and, much like the EARN IT Act introduced earlier this year, is a direct attack on encryption. Like EARN IT, the DOJ’s proposal does not use the word encryption anywhere. But in practice the proposal ensures that any platform providing secure end-to-end encryption would face a torrent of litigation—surely no accident given the Attorney General’s repeated efforts to outlaw encryption.

Other aspects of the DOJ’s “Bad Samaritan” proposals are problematic, too. Although the proposal claims that bad actors would be limited to platforms that knowingly host illegal material online, the proposal targets other content that may be offensive but is nonetheless protected by the Constitution.

Additionally, requiring platforms to take down content deemed illegal via a court order will result in a significant increase in frivolous litigation about content that people simply don’t like. Many individuals already seek to use default court judgments and other mechanisms as a means to remove things from the Internet. The DOJ proposal requires platforms to honor even the most trollish court-ordered takedown.

Oddly, the DOJ also proposes punishing platforms for removing content from their services that is not illegal. Under current law, Section 230 gives platforms the discretion to remove harmful material such as spam, malware, or other offensive content, even if it isn’t illegal. We have many concerns about those moderation decisions, but removing that discretion altogether could make everyone’s experiences online much worse and potentially less safe.

It’s also unconstitutional: Section 230 notwithstanding, the First Amendment gives platforms the discretion to decide for themselves the type of content they want to host and in what form.

The proposal would also empower federal agencies, including the DOJ, to bring civil enforcement actions against platforms. Like last month’s Executive Order targeting online services, this would give the government new powers to target platforms that government officials or the President do not like. It also ignores that the DOJ already has plenty of power. Because Section 230 exempts federal criminal law, it has never hindered the DOJ’s ability to criminally prosecute online services engaging in illegal activity.

The DOJ would also impose onerous obligations that would make it incredibly difficult for any new platform to compete with the handful of dominant platforms that exist today. For example, the proposal requires all services to provide “a reasonable explanation” to every single user whose content is edited, deleted, or otherwise moderated. Even if services could reasonably predict what qualifies as a “reasonable explanation,” many content moderation decisions are not controversial and do not require any explanation, such as when services filter spam.

Sen. Hawley’s Proposed Legislation Turns Section 230’s Legal Shield Into An Invitation to Litigate Every Platform’s Moderation Decisions

Sen. Hawley’s proposed legislation, for its part, takes aim at online speech by fundamentally reversing the role Section 230 plays in how online platforms operate. As written, Section 230 generally protects platforms from lawsuits based either on their users’ content or actions taken by the platforms to remove or edit users’ content.
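As a rough mental model of how those two shields fit together, here is a toy sketch in Python. It is a simplification for intuition only, built from the summaries quoted in this thread rather than the statute’s full text, and it is not a statement of the law.

```python
# Toy decision model of Section 230's two liability shields, as summarized
# above. (c)(1): a provider is not treated as the publisher or speaker of
# third-party content. (c)(2): a provider is protected for "good faith"
# removal of objectionable material. Real 230 analysis is far more nuanced.

def shielded(claim_targets_removal: bool,
             removal_in_good_faith: bool,
             content_is_third_party: bool) -> bool:
    if claim_targets_removal:
        # (c)(2) shield: good-faith restriction of objectionable content
        return removal_in_good_faith
    # (c)(1) shield: no publisher/speaker liability for others' content
    return content_is_third_party

# A suit over the platform's own speech (e.g., its own warning label)
# involves neither shield in this toy model:
print(shielded(claim_targets_removal=False,
               removal_in_good_faith=True,
               content_is_third_party=False))  # False
```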

Hawley’s bill eviscerates those legal protections for large online platforms (platforms that average more than 30 million monthly users or have more than $1.5 billion in global revenue annually) by replacing Section 230’s simple standard with a series of detailed requirements. Platforms that meet those thresholds would have to publish clear policies describing when and under what circumstances they moderate users’ content. They must then enforce those policies in good faith, which the bill defines as acting “with an honest belief and purpose,” observing “fair dealing standards,” and “acting without fraudulent intent.” A platform fails to meet the good faith requirement if it engages in “intentionally selective enforcement” of its policies or fails to honor public or private promises it makes to users.

Some of this sounds OK on paper—who doesn’t want platforms to be honest? In practice, however, it will be a legal minefield that will inevitably lead to overcensorship. The bill allows individual users to sue platforms they believe did not act in good faith and creates statutory damages of up to $5,000 for violations. It would also permit users’ attorneys to collect their fees and costs in bringing the lawsuits.
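To make the scale of that exposure concrete, here is a back-of-the-envelope sketch. The 30-million-user and $1.5 billion coverage thresholds and the $5,000-per-violation statutory damages come from the bill as described above; the example platform and complaint counts are purely hypothetical.

```python
# Rough sketch of the coverage test and worst-case statutory-damages
# exposure under Sen. Hawley's bill as summarized above. The thresholds
# and the $5,000 cap come from that summary; everything else is a
# hypothetical assumption.

STATUTORY_DAMAGES_CAP = 5_000  # dollars per violation, per the bill

def is_covered(avg_monthly_users: int, global_annual_revenue: float) -> bool:
    """Covered if the platform crosses either threshold."""
    return avg_monthly_users > 30_000_000 or global_annual_revenue > 1.5e9

def max_exposure(num_claims: int) -> int:
    """Worst-case statutory damages, ignoring attorneys' fees and costs."""
    return num_claims * STATUTORY_DAMAGES_CAP

# Hypothetical platform: 40M monthly users, $2B global revenue.
print(is_covered(40_000_000, 2.0e9))      # True
# If even 0.01% of those 40M users sued, that's 4,000 claims:
print(f"${max_exposure(4_000):,}")        # $20,000,000
```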

In other words, every user who believes a platform's actions were unfair, fraudulent, or otherwise not done in good faith would have a legal claim against a platform. And there would be years of litigation before courts would decide standards for what constitutes good faith under Hawley’s bill.

Given the harsh reality that it is impossible to moderate user-generated content at scale perfectly, or even well, this bill means full employment for lawyers, but little benefit to users. As we’ve said repeatedly, moderating content on a platform with a large volume of users inevitably results in inconsistencies and mistakes, and it disproportionately harms marginalized groups and voices. Further, efforts to automate content moderation create additional problems because machines are terrible at understanding the nuance and context of human speech.
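A rough arithmetic sketch shows why scale alone guarantees large error counts; the volume and accuracy figures below are hypothetical assumptions, not numbers from EFF.

```python
# Why "pretty accurate" moderation still means enormous absolute error
# counts at scale. Both figures below are hypothetical assumptions.

daily_decisions = 500_000_000   # assumed moderation decisions per day
accuracy = 0.999                # assumed 99.9% of decisions are correct

errors_per_day = daily_decisions * (1 - accuracy)
print(f"{errors_per_day:,.0f} wrong calls per day")  # 500,000
```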

This puts platforms in an impossible position: moderate as best you can, and get sued anyway—or drastically reduce the content you host in the hopes of avoiding litigation. Many platforms will choose the latter course, and avoid hosting any speech that might be controversial.

Like the DOJ’s proposal, the bill also violates the First Amendment. Here, it does so by making distinctions between particular speakers. That distinction would trigger strict scrutiny under the First Amendment, a legal test that requires the government to show that (1) the law furthers a compelling government interest and (2) the law is narrowly tailored to achieve that interest. Sen. Hawley’s bill fails both prongs: although there are legitimate concerns about the dominance of a handful of online platforms and their power to limit Internet users’ speech, there is no evidence that requiring private online platforms to practice good-faith content moderation represents a compelling government interest. Even assuming there is a compelling interest, the bill is not narrowly tailored. Instead, it broadly interferes with platforms’ editorial discretion by subjecting them to endless lawsuits from any individual claiming they were wronged, no matter how frivolous.

As EFF told Congress back in 2019, the creation of Section 230 has ushered in a new era of community and connection on the Internet. People can find friends old and new over the Internet, learn, share ideas, organize, and speak out. Those connections can happen organically, often with no involvement on the part of the platforms where they take place. Consider that some of the most vital modern activist movements—#MeToo, #WomensMarch, #BlackLivesMatter—are universally identified by hashtags. Forcing platforms to overcensor their users, or worse, giving the DOJ more avenues to target platforms it does not like, is never the right decision. We urge Congress to reject both proposals.

Source: Two Different Proposals to Amend Section 230 Share A Similar Goal: Damage Online Users’ Speech
 


Big Tech CEOs faced off with Congress
The leaders behind Amazon, Apple, Facebook and Google testified before Congress virtually

Democrats and Republicans in Congress came out swinging against Amazon, Apple, Facebook and Google on Wednesday, needling the tech giants’ top executives over their size, power and approach to a wide array of issues, including the content they allow online.

Rep. David N. Cicilline (D-R.I.), the chairman of the House’s top antitrust subcommittee, opened the wide-ranging hearing by stressing that lawmakers’ year-long investigation into the industry had informed his growing belief that the country’s largest technology companies have “wielded their power in disruptive, harmful ways,” risking not only competition but the future of democracy itself.

Republicans, meanwhile, have sought to shift the focus of the hearing to allegations that major Silicon Valley companies censor conservatives online, levying charges of political bias that many experts have said are unsubstantiated — and that social media companies denied.

[...]

At the conclusion of a more than five-hour grilling of some of the top technology titans, Cicilline said that all four of the companies that testified are monopolies.

“These companies as they exist today all have monopoly power,” he said during closing statements. “Some need to be broken up.”

He also suggested that all of the companies need to be regulated. He compared the four chief executives who testified to modern-day versions of Gilded Age tycoons.

“Their control of the marketplace allows them to do whatever it takes to crush independent businesses and expand their own power.”

While speaking to reporters after the hearing, he added that the companies are clearly violating antitrust laws. He said Wednesday’s hearing confirmed evidence that the committee collected in its year-long probe into the companies’ power.

“They’re engaged in behavior that’s anticompetitive which favors their own products and services, which monetizes and weaponizes data, which compromises the privacy of their users and which creates a competitive disadvantage for companies attempting to enter the marketplace,” he said.

Source: https://www.washingtonpost.com/technology/2020/07/29/apple-google-facebook-amazon-congress-hearing/
 
The House Antitrust Report on Big Tech
Oct. 6, 2020

In a report led by Democrats, lawmakers said Apple, Amazon, Google and Facebook needed to be checked and recommended reforming antitrust laws.

Source: The House Antitrust Report on Big Tech
 
Repealing Section 230 represents a dangerous threat to online free speech.

 


IMO Big Tech has already violated Section 230 by placing warning banners on posts they don't agree with. That makes them publishers and legally liable for the content of those banners, and probably makes them ineligible for the immunities Section 230 provides.
 
So why repeal 230?
 

https://twitter.com/just_security/status/1362754195670130692


The proposed law that the tech giant is fighting has problems, but Facebook’s removal of news is inexcusable.

This week, Facebook hit the kill switch on news for the 17 million Australian users of its platform. Its action was in response to a proposed law advancing through the Australian Parliament that aims to level the playing field for traditional media organizations in the online environment. But Facebook did not just block traditional media outlets. Thanks to the law’s overbroad definition of “news,” Facebook Pages ranging from hotlines for survivors of sexual assault and domestic violence, to Suicide Prevention Australia, to the national weather service went dark.

Source: Facebook’s Unconscionable Action in Australia – and What It Means for the Rest of the World
 
I hate social media, but you need it in this day and age. So here's a story from yesterday about the censorship of plain old freedom of speech:

Dave Palumbo, when he's not promoting his Fiberlyze product, talks about things that are good and sometimes bad.

He recently posted a video on his Instagram about some purebred animals that were recently stuck in FedEx shipping and ended up dying. My response was simply: "Why not just adopt at a local shelter instead? Because at the end of the day, breeders don't see these animals as anything more than what a shipping company sees them as: products."

All of a sudden these fucking beta males started storming my comment, saying I was trolling, that my brain had gone to my abs, and to stfu. They also called me a faggot and threw a bunch of other insults.

Not giving a fuck about the insults, I decided to join in on the shitposting: I told the guy who called me a faggot that he looks like shit and should stfu, and the next thing I know, he and all of his buddies reported me and my account got deactivated.

Idk why the fuck that happened, but there needs to be some sort of customer service handling things like this, even if it's just an email that takes months to get back to a nobody like me. All this automated crap is garbage. As soon as you insult someone back and they get triggered, they report you or get a bunch of people to report your account, and your account ends up deactivated.

Now I see why high-profile social media "influencers" either never reply to comments and just delete them, or just say basic things like "thank you."
 