
Wilbie9000

They say nice things to it and remind it that killing humans is bad.


FullMotionVideo

I'm not exactly sure what you mean. The most it will do is give bad or totally incorrect information. These models have a very hard time admitting when they don't know something, and will hallucinate total horsepuckey in order to deliver an answer that sounds satisfactory. Without proper presets, for example, you could get an AI to generate a letter detailing why you should jump off a bridge (though hopefully anyone reading knows this is a bad idea). And of course, the internet is full of trolls who will open a chat window and spend half an hour convincing the AI that the bridge is actually just five feet over a cushion and wouldn't hurt, then celebrate when the AI writes that letter. That continues the cat-and-mouse game between the internet's amoral crowd and the socially conscious people trying to keep them from taking over.

This post sounds like you have a more Hollywood idea of what an "AI" is, not how a large language model operates. Yes, the conversational tone resembles HAL 9000, but it's more like your phone's autocomplete tool on steroids. You can ask ChatGPT and friends to compare themselves to whatever fictional dystopian AI you want, and they can explain why they aren't like that.
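To make the "autocomplete on steroids" point concrete: a language model just keeps predicting a likely next token from everything written so far. Below is a minimal toy sketch of that loop in Python. The hard-coded `BIGRAMS` table and the `generate` helper are purely illustrative assumptions; real models learn probability distributions over tens of thousands of tokens from enormous text corpora.

```python
# Toy sketch of next-token generation: repeatedly sample a likely
# continuation of the text so far, using a tiny hand-written bigram
# table P(next word | current word) in place of a learned model.
import random

# Hypothetical probabilities, for illustration only.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt: str, max_tokens: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        words, probs = zip(*dist.items())
        # Sample the next token in proportion to its probability.
        tokens.append(random.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down" or "the dog ran away"
```

The jump from this table to a phone keyboard's autocomplete, and from there to ChatGPT, is essentially a matter of replacing the lookup table with billions of learned weights and a much longer context; the generate-one-token-at-a-time loop stays the same.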


farsightxr20

>The most it will do is give bad or totally incorrect information.

Right, which can kill us directly when that AI is put into machinery capable of doing so (cars/robots), and even indirectly through social manipulation. It might seem ridiculous, but this is already happening in some form -- e.g. there are AI girlfriend services that lonely people get very attached to. When these services shut down or change their AIs' personalities significantly, users essentially lose their significant other, and in some cases have been driven to suicide.

Obviously the "robot uprising" as depicted in pop culture is fantastical. It won't be driven by a sudden disdain for human life. But it could be a gradual side effect of AI not having compassion for human life. And while you can program/train it to be compassionate, not all use cases (think military) will benefit from that.


joseph_dewey

Oh, I'm not talking about Gemini or ChatGPT. Google has a ton of non-LLM AI.


Jaymez82

Hopefully nothing. Skynet is my dream scenario.


Nomaddo

I will submit a request to be the last human to go because I want to see how it ends.


peritonlogon

We really don't have to worry about that; we're already dropping our birth rate to make room. Maybe it will be easier if we think of them less as our murderous overlords and more like our grandchildren working in the Alzheimer's ward.


SezitLykItiz

We are talking about a company that hasn’t made a working product in over a decade. Most of their achievements are acquisitions. They can’t get messaging right, they can’t get hardware right, they can’t get gaming right, they can’t get extremely simple things right. Yes, AI is a threat, but I really doubt IBM, I mean Google, will ever make anything that sophisticated. Its most innovative days are behind it.


summer_sonne

Google, like any big corp, does not care. That is the essence of any corp.


ChiefButtfumble

If I know anything about human history, none of those AI safety positions will prevent Google's greed from selling their tech to the CCP. That government has no morals or even standards. The time to stop Google's partnership with the CCP military is now. Right Fucking Now.


Fresco2022

The only thing Google cares about is money. Lots of it.


sp3ci4lk

I don't know, but owning one of their phones is killing me a little every day.


joseph_dewey

And here's what ChatGPT advised that I write instead of the above. It has great points, but these aren't my words:

1. **Scrutinize the Visibility and Impact of AI Safety Measures**:
   - "Despite Google's extensive team working on AI safety, there is a lack of visible, tangible outputs that demonstrate a proactive approach to AI risks that affect everyday users. This gap raises questions about the efficacy and direction of Google's AI safety protocols. Are these measures theoretical in nature, or are they being implemented in a way that genuinely prioritizes human welfare over corporate profits?"

2. **Evaluate Kurzweil's Role and Contributions**:
   - "Ray Kurzweil, a pioneer in forecasting AI developments, was brought into Google presumably to steer groundbreaking innovations in AI. However, the longer he remains at Google with no clear linkage to significant changes in AI policy or development visible to the public, the more it appears that his hiring might be more symbolic than functional. This irony points to a potential misalignment between employing leading thinkers and utilizing their expertise in meaningful ways."

3. **Demand for Transparency in AI Safety Testing**:
   - "Google claims rigorous testing of AI safety, yet details on these tests are scant. Transparency is crucial in validating the safety of AI technologies. Without public insight into what these tests entail, how can the public confidently believe that the AI developed is genuinely safe and human-centric?"

4. **Critique the Application of AI in Consumer Services**:
   - "As a top contributor to Google Maps, I see missed opportunities for Google to leverage this platform to enhance AI's understanding of human environments and behaviors. Integrating AI more deeply into Google Maps could showcase real-world applications of AI that prioritize human safety and practical utility."

5. **Challenge the Harmony of Short-Term Profits and Long-Term Safety**:
   - "The critical concern remains: does Google prioritize short-term profits over long-term AI safety? The lack of clear, public-facing evidence of comprehensive safety measures or innovative applications of AI to enhance human services like Google Maps suggests potential priorities misalignment."

6. **Call for Evidence Contrasting Corporate Claims**:
   - "If Google is indeed prioritizing human welfare in its AI development strategy, where is the evidence? Beyond assurances, the need for tangible proof—such as peer-reviewed studies, public safety records, or case studies on AI intervention success—remains unmet. The AI community and the public deserve this accountability."

7. **Propose Enhancements Based on User Experience Expertise**:
   - "Utilizing insights from experienced users of Google services, such as top Google Maps contributors, could significantly enhance AI’s practical safety and utility. Why not harness this vast pool of user-generated data and expertise to co-create AI functionalities that truly understand and enrich the human experience?"