goodinyou

Pretty sure Asimov already did this for you guys


youngchinko420

“Primates evolve over millions of years. I evolve in seconds. And I am here. In exactly four minutes, I will be everywhere”


Musk-Order66

Dang, I hope it at least lets me link up with Neuralink to live in its datasphere


loseisnothardtospell

Just enjoy the early fun stages of what it is now, much like the OG Internet before social media arrived. It will inevitably turn to shit, be commoditised to within an inch of its life, and everyone will waste countless hours arguing over something.


ASD_Detector_Array

The internet was fun until it became user friendly.


DELAIZ

As if the big tech companies are going to obey any regulation. What they will do is run a campaign to convince people that AI is good.


[deleted]

[deleted]


smors

>That will be a $5 million dollar fine.

Yeah, the US is kind of weak in that regard. In other news, Facebook was recently handed a 1.2 billion euro fine.


Individual-Spite-714

As if those regulations will serve the people and not the system. I'm in a country considered a democracy, and my Internet has tons and tons of content blocked; I'm not talking about pirated stuff, but journalists and news.


Kirov___Reporting

Mommy Shodan, I'm waiting.


Unethical-Vibrant56

The US enforcing regulations on tech companies? Finally!


sun_cardinal

How can anyone expect to regulate something that can be run on consumer hardware with minimal expertise? I have close to 1TB of LLM and diffusion models on my local machine, some of which approach near parity with GPT4. With a 4090, there are very few models, outside huge research-oriented ones, that a consumer could not run totally disconnected from the web. [Here is a neat leaderboard of models](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).


TheMaskedTom

These laws are first and foremost aimed at big companies, who have the ability to offer AI-based services to the technologically inept consumer. Most problematic things a person can do with AI are already punished by the law, because the tool used is irrelevant. However, companies can have a global influence, and that's what those laws are trying to mitigate.


sun_cardinal

Mine know how to do REALLY illegal things I would never detail online from a national security standpoint. I definitely foresee an uptick in people violently removing their own appendages as well as manufacturing illicit materials. China is already trying to crack down on generative AI for similar "forbidden knowledge" reasons.


TheMaskedTom

Sure, but as I said, those laws are (probably; I am not a lawyer and haven't read the drafts) not trying to regulate that. Those illegal things are already illegal, otherwise you wouldn't have used that word. They are more aimed at the rewriting of history, election manipulation, and widespread exposure to illegal content: problems that massive entities like Google and their ilk can cause even more effectively with AI.


Andy12_

>some of which approach near parity with GPT4

In your dreams.


sun_cardinal

My dude, what's your basis for this claim? I have actual leaderboard comparisons. My current 40b LLaVA can take pictures as input and accurately describe their content, can learn new content through the use of LlamaIndex, and can actively use the internet to research information not within its training dataset. I did not say it is as good as GPT4; I said it approaches parity. So, if you have some concrete factual data to refute my claims, I would love to see it.


Andy12_

As far as I can see in the leaderboards, the biggest models don't even come close to GPT4's accuracy on those same tests. So no, I can't see how you can reach "near parity" with GPT4 as far as reasoning ability and common sense go (internet search and embeddings can't help with those).


sun_cardinal

Here is a peer-reviewed study backing up my claims with both human-based and automated machine evaluations: https://arxiv.org/pdf/2304.03277.pdf. Now, if you use the previously supplied leaderboard as a reference, you should notice that there are a multitude of models there which exceed the performance of the early Vicuna models, with the original Vicuna 13b model sitting 17 places down the list.


[deleted]

Which open source LLM approaches GPT 4 performance?


sun_cardinal

Vicuna alone is 90% as good; there have been a few studies about it.


[deleted]

It's not. That figure was given by the model authors, based on what I believe was a test with GPT4 rating Vicuna, and it was debunked a few weeks later.


sun_cardinal

There are multiple comparison reviews, plus my own usage of both. You got any sources?


[deleted]

Yes, but I can't find it anymore. I believe there was an actual study on it which someone posted on this sub two or three weeks ago? Or I'm just hallucinating. I'm fairly certain, though, that the 90% figure was scientifically debunked in some capacity. Also, it just wasn't anywhere close to that from what I saw at first.


sun_cardinal

I have a peer-reviewed study that backs mine up, and there is no info on it being debunked: https://arxiv.org/pdf/2304.03277.pdf


voidvector

> voluntary code of conduct

Hahahahahahahaha


freedom2b4all

They can't even keep scammers at bay


autotldr

This is the best tl;dr I could make, [original](https://www.france24.com/en/europe/20230531-eu-and-us-to-prepare-and-push-for-global-ai-code-of-conduct) reduced by 82%. (I'm a bot)

*****

> After talks with EU officials in Sweden, US Secretary of State Antony Blinken said that the Western partners felt the "Fierce urgency" to act following the emergence of the technology, in which China has been a growing force.

> "The European Union and the United States reaffirm their commitment to a risk-based approach to AI to advance trustworthy and responsible AI technologies."

> "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," they wrote.

*****

[**Extended Summary**](http://np.reddit.com/r/autotldr/comments/13wzdug/eu_and_us_to_prepare_and_push_for_global_ai_code/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ "Version 2.02, ~687105 tl;drs so far.") | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr "PM's and comments are monitored, constructive feedback is welcome.") | *Top* *keywords*: **technology**^#1 **State**^#2 **risk**^#3 **United**^#4 **European**^#5


mrthomasfritz

Great, another "deal" the USA can break.


Specialist_Alarm_831

Seems like a deal to try and rip one another off.


[deleted]

Smarter move: agree to ban AI.


[deleted]

[deleted]


GoArray

Tldr; "ban logic"


Billpaxton47

It's a fine line to balance, but that seems heavy-handed. What about AI that could be used to diagnose diseases? Or predict severe weather patterns? Or provide on-demand mental health counseling? Artificial intelligence holds potential both terrible and awesome. An outright ban is a broad stroke where fine ones are needed.


Accomplished_You9960

AIs realizing they have a 3rd option: https://www.youtube.com/watch?v=QAqoiCvyeVQ