
THE BETA BYTE

Updates on the Tech Landscape...One Byte at a Time


Walking a Tight Rope

  • Writer: Mary M Brinkopf
  • Nov 17, 2019
  • 4 min read


I was in high school when I first learned the internet could be cruel. Until then, cruelty had meant rumors started by classmates, but this extended beyond the walls of my high school.


It was fall when I heard about "the blog" a classmate had started. Initially, I dismissed it. I had more important things to think about - the SATs, calculus and National Honor Society. I never thought I'd be mentioned in it. But several weeks later, I heard whispers that I was in the latest post. And so, late one day, I tracked the blog down in the school library and read.


On the opposite side of the screen, I found myself mentioned only in passing, but the words were less than flattering (the writer commented on my non-existent rhythmic nature, aka I cannot dance). And I was hurt, because it was true and embarrassing and I did not want anyone else to know (although looking back, I find it mildly entertaining that I thought I could hide something that obvious).


At that time, I would have given anything to have that blog taken down. Anything. Yet, that's not the world we live in. At least not in the United States, where free speech is sacred as long as the speaker does not commit libel or another unlawful act. So, since 2006, that blog (assuming the owner still pays for the domain) has the right to exist despite my misgivings. Plus, it's a fair statement that I cannot dance, but you'll never know unless you see me at a social function like a wedding.


Since the internet became mainstream, its users have enjoyed (again, I'm referring to users in the United States) access to information and the ability to write and post nearly anything they want. Even the intention to physically harm another human being.


Yes, you read that correctly - inflict pain, suffering or death upon another. One of the places where this has occurred with frequency is a website called "8chan" (recently relaunched on November 2, 2019 as "8kun").


In 2019, 8chan has received attention for all the wrong reasons - three shooters either posted on or spent a significant amount of time on the website. The El Paso shooter published his "manifesto" (aka justification for murdering others) on the site before his shooting spree. In August 2019, a user posted details of Jeffrey Epstein's suicide before it hit the mainstream news. The site's home banner that same month declared "Welcome to 8chan, the Darkest Reaches of the Internet."


Many of my readers may be unfamiliar with 8chan or 8kun. Essentially, it's an open forum where users can create "boards" to post materials on a topic. In high school, I participated in many of these forums to discuss my favorite TV shows or music artists. Normally, the material is harmless; there may be debates, but no harm comes to users. And it's important to point out that users moderate their own content. It's an honor system, so to speak.


Unfortunately, there are bad actors out there who create boards filled with inappropriate materials. In those instances, the onus shifts from the users to the website. In many instances, the website is forced to take action and remove the content. Previous examples of content 8chan deleted for violating its terms of service involved nudity, boards about young children or girls, and references to lewd acts or pedophilia.


Where the lines become blurry (not just for 8chan but for every website) is when individuals express opinions counter to society's (e.g., white supremacists) or associate with organizations prone to violence (e.g., terrorist organizations). Are those comments or organizations subject to different treatment? Essentially, should their free speech be regulated?


Until recently, the answer has been a resounding no, free speech is free speech. So websites like 8chan continue to exist.


In 2018, Facebook CEO Mark Zuckerberg landed in hot water when he said Facebook would allow Holocaust deniers to post on his platform and would not remove their content. Zuckerberg attempted to elaborate on Facebook's stance, saying the platform wanted to "give people a voice" while knowing when to "keep the community safe."


The problem with those types of statements, at least from this blogger's perspective, is that those two goals are incredibly challenging to untangle. As Vox pointed out in an article last year:


"fake news and hateful rhetoric may stop just short of direct incitement to violence, but they're the dry tinder that makes someone else's call to violence catch fire."

Fast forward a year from Zuckerberg's comments: since then, there have been eleven school shootings and over twenty mass shootings in the United States. And very little has changed. Websites that allow questionable comments still exist, and, true to his comments above, platforms like Facebook still allow all types of commentary.


The same cannot be said for 8chan. In August, 8chan's network provider, Cloudflare, terminated its agreement, effectively rendering the website "unplugged" from the internet…at least temporarily, until 8chan found another provider.


In its statement, Cloudflare wrote the following:


We reluctantly tolerate content that we find reprehensible, but we draw the line at platforms that have demonstrated they directly inspire tragic events and are lawless by design. 8chan has crossed that line. It will therefore no longer be allowed to use our services.

Side note - I'd recommend you read the entire statement from Cloudflare, which brings up some incredibly valid points.


It took three months but in November, 8chan did come back online with a different name - 8kun. However, less than a week later, it went dark after more network providers terminated agreements. As of this writing, the site is still homeless.


As we near the end of 2019, we've settled into this uneasy modus operandi. In essence, we walk a tight rope. We allow free speech, even hateful comments, but when an unspeakable act occurs, we apply a band-aid. In this case, the network provider interceded when the social platforms or search platforms failed to.


The problem is - we do not think about how to future-proof against other bad actors. We treat them as one-offs. As I've asked in other blogs, should web providers be endowed with this dual responsibility as both provider and regulator of content? In particular, these providers are only taking action against well-publicized bad actors - what about the others flying under the radar?


The rope that we balance on continues to wobble more and more. And very soon, we may lose our balance.


Share your thoughts below.


1 Comment


Ann Brinkopf
Nov 18, 2019

It is a tight rope as we know Government doesn’t solve problems well and companies are unsure how strict or lenient they should be with their policies... lots to think about!!

©2019 by The Beta Byte. Proudly created with Wix.com
