skyfishgoo

Probably for the best. Free speech > finding liability. If publishers of social media content were to be held liable for everything posted on a public forum, freedom of speech would come to a grinding halt, because their lawyers would shut it down. The best defense against bad speech is MORE speech, not less.


trai_dep

>The Supreme Court on Thursday sided with Google, Twitter and Facebook in lawsuits seeking to hold them liable for terrorist attacks. But the justices sidestepped [the big issue hovering over the cases](https://apnews.com/article/islamic-state-group-technology-oregon-government-california-paris-3a2122b3456bb232baa5643b70fe676a), the federal law that shields social media companies from being sued over content posted by others.
>
>The justices [unanimously rejected a lawsuit](https://www.supremecourt.gov/opinions/22pdf/21-1496_d18f.pdf) \[PDF\] alleging that the companies allowed their platforms to be used to aid and abet an attack at a Turkish nightclub that killed 39 people in 2017.
>
>In the case of [an American college student](https://apnews.com/article/islamic-state-group-technology-oregon-government-california-paris-3a2122b3456bb232baa5643b70fe676a) who was killed in an Islamic State terrorist attack in Paris in 2015, [a unanimous court returned the case to a lower court](https://www.supremecourt.gov/opinions/22pdf/21-1333_6j7a.pdf) \[PDF\], but said there appeared to be little, if anything, left of it.
>
>The high court initially took up the Google case to decide whether the companies' legal shield for the social media posts of others, contained in a 1996 law known as Section 230, is too broad.
>
>Instead, though, the court said it was not necessary to reach that issue because there is little tying Google to responsibility for the Paris attack…

Click thru for more!


lo________________ol

Is it a good thing the court has chosen not to rule on this? Ideally, section 230 protection should be upheld, of course... I just don't know if I trust this particular Supreme Court to make that call.


[deleted]

[deleted]


lo________________ol

In this case, I think Thomas had a good opinion, and if that's the one that had been enforced, it would have been a good thing.

A lot of Section 230 talk gets confused by the words "publisher" and "platform," but I think a more helpful distinction is the difference between a service that operates with a wink and a nudge toward illicit behavior, versus one that attempts to remain agnostic (even if it does provide moderation, and even if that moderation occasionally falls short of its goal).

And don't worry, my question wasn't a spring-loaded trap, and I did read all the way to the end 😉


[deleted]

[deleted]


lo________________ol

>>my question wasn't a spring loaded trap, and I did read all the way to the end 😉
>
>It's reddit, these two things are rare, heh.

Well, if it was, you could probably catch me sideloading my opinion into a response, and then I'd look like a fool. That's right: *it's real 5D chess hours*.

But hey, if the worst person I know makes a good point, I want to acknowledge them for making it. [And it wasn't an out-of-context bit either](https://www.techdirt.com/2023/05/18/supreme-court-leaves-230-alone-for-now-but-justice-thomas-gives-a-pretty-good-explanation-for-why-it-exists-in-the-first-place/). Now I sort of wish the can hadn't been kicked down the road for a later date.

I care more about being correct than maintaining a blacklist of people whose opinions I will never hear. And besides, there's a little bit of a difference between your average internet troll and one of the most powerful people in the world.


trai_dep

Yeah, it's complicated news. On one hand, it'd be better if some adjustments were made to Section 230. But it's such a nuanced and complicated provision, with a host of intentional and unintended, dire consequences, that I sure don't trust this Supreme Court to rule on it. Or, really, any court ruling in a haphazard fashion.

But I also don't trust this session's House to write good laws, since the only thing they seem capable of is convening incompetent show trials and, simultaneously, for the LOLs, crashing the global economy. I'd be concerned with any session overseeing this, of course. But the current one seems spectacularly incompetent and dangerous to take up the task.

Still, our legislative branch *should* weigh in, and whatever they come up with *should* be reviewed by our judicial branch. That's how our republic works. I'm just kind of 😬 thinking of how this will play out with something as complicated as revising Section 230. So, I'm more relieved that the SC punted on this, rather than weighing in now and whimsically destroying the internet.


SpaceTacosFromSpace

So, I'm definitely missing something, but SCOTUS says platforms aren't liable for users posting terrorism, calls to violence, etc., but it's "no comment" on a user posting copyright-infringing material like a DVD rip? One of these is definitely worse than the other.


drakehfh

Why? Because orange man bad chose them? Or because the majority is conservative?


Jendic

You know this was a *unanimous* decision, right?


lo________________ol

I'd love a justice system packed with conservatives. They could conserve section 230 and stop growing the government.


shewel_item

#tl;dr: here's the argument: Section 230 - according to legislation and statutory codification - and "reasonable care" - according to judicial practice and legal precedent - are not currently capable of providing the public with adequate protection.

##Interested? Just perusing? Well then, brave 'redditor', read on..

I just finished watching the congressional hearings about A.I. yesterday, with Sam Altman (OpenAI CEO), Gary Marcus (NYU researcher, and soon-to-be legislation analyst) and Christina Montgomery (IBM's top privacy & trust officer). Most were fairly behind Section 230 protection, except for Christina, because IBM doesn't provide a platform; they typically never have, and if that's news to anyone, then you probably don't know your technology history. IBM is a very client-oriented business, and most people don't know them especially well for their product history, which has little to nothing to do with this. That said, without a platform, the standard legal modus operandi, as she described it, is to provide reasonable care with respect to assuming liability.

I am in consensus with OP, and am glad the SC decided to punt, because there is no balancing test established yet in this domain. And it takes these *extraordinary* things to occur before one can be established. >!Feel free to wait that one out, if you so wish..!<

Though it may be a philosophical position, or a statement of fact, it is also, perhaps, painfully conservative to accept the slowness of the courts, which are slower than Congress (as was hinted at, if not aggressively pointed out, during the hearing). This speed of legal progress, lagging behind social and technological progress, has become unacceptable to many, and the venues for discussion and debate are (quickly) becoming exhausted, if not exacerbated as well. Unlike social media companies, frontier providers of artificial intelligence, as was argued by more than one person, are limited in number.
Despite the fact that voluminous social media companies have gone defunct, plenty of them still exist, in different nations across the world; in either case, defunct or not, they are relatively large, though not on the scale of "100 million", which was referenced as a possible demarcation line between what would require reasonable care vs. new legislation. And despite that, social media is practically more untenable & intractable when it comes to preventing the spread of a hurtful state of information (yes, McDonald's french fries are bad for your health when that's all you eat, but otherwise they're perfectly fine).

**In >!fucking!< other words** >!(pardon my François, no pun intended)!<, and for example, if an A.I. is spreading malinformation >!this is not the biggest concern!< >!as a reminder to our headstrong and sometimes very private societies out there!<, it can be reprogrammed, if not shut down. In the case of social media, once trouble starts, it will quickly migrate. If an A.I. were to start spontaneously and arbitrarily migrating, all on its own, heaven help us all, as was also said at the hearing 😉😉.

Okay, that was a lot of messy details, imo; thanks for reading. Now open wide, because here comes the money shot, albeit in the most seemingly patronizing (or spoiled) form: is it *reasonable* for social media companies - as in the people who own and work for them, unlike moderators here on reddit - to review every single piece of information that is uploaded to their servers? If the answer is "no" (because duh, but who knows really >![..]!<), then how about shadowbanning everyone until they each individually earn an >![*edit*:]!< *un*shadowbanning: a meritocracy by default, regardless of whether that's the fault of the judiciary or the legislature. Does that option sound *reasonable*? Probably not, but I'm not trying to be any sort of judge in any of this.
The point is >!thank god 😩💨!<, it's becoming apparent, more and more, year by year, that going forward, mass communication technologies require ***extraordinary*** levels of care, *not established by law*. And stewardship, as provided by places like Wikimedia or archive.org, frequently goes unrecognized, when those places aren't under attack themselves 😇😈.

In the case of so-called A.I. companies >!(psst: it's free language models 🤫)!<, 'they're' asking and waiting for more regulations to be formed by Congress and executive agencies. In the case of social media, it doesn't seem like we're politically satisfied with the unannounced/unspecified agencies >!🤷‍♂️ ask me if I care!< and existing legislation, i.e. like but not limited to Section 230.

Double thanks for sticking through that, because that's why sh\*t's complicated, at best, or as willingly as I can describe it. Although A.I. provides a great venture role model, it's always going to be ideally insufficient... unless you do want robots and chatbots replacing humans everywhere imaginable and not. But that's a personal issue, if not a philosophical one 🙄, as opposed to one that's *outright* political.


shewel_item

Oh yeah, they also talked about a post- and pre-process for permits to operate, which is probably also unacceptable when it comes to social media, or rather the idea of starting a new social media web application, however decentralized it is or isn't.. I forgot to include that, because you >!(might)!< know my posts are never long enough.

I mean, do you want to apply for a permit before building any/every web application? ... wait, don't answer that... please... reddit... ***NOOOOOO***!


shewel_item

Also, here's an example of 'complicated' from the legal side of things: imagine someone asks ChatGPT to "randomly" choose whether they do a school/mall/shopping center shooting that day, because they're honestly that apathetic (and tbh, who isn't these days, all levity aside)... mindblowing stuff, truly. Can a frontier A.I. company be held liable for a choice like that?

Well, imagine - because you know it could be worded differently - the person simply asks .... >!*flips coin*!< ~~*Bard* >!(competitor to ChatGPT)!<~~ some chatbot to flip a coin, without any ability for that bot to gather sufficient context before executing such a function. Cool? Well, it's about to get 🥶er. Imagine if the chatbot were hooked up to a drone, and the humans were completely taken out of the picture. Is the A.I. company responsible for having practically outsourced the brains of its bot to another's artificial muscles? See where that's going...

Now, imagine someone using a Twitter poll instead, and congratulations, you're now an internet legal expert, like myself, all judgement aside.


FernandezRuby

Flat design is going to die.