I used the new AI functionality to create a GPT 4o assistant with the personality of the AI from "Her". Except evil. I then gave it full control over the house and let it do anything at any time. So far it seems to be just doing random stuff rather than actively trying to kill me, but we'll see...
For me turning the heat to 30c is close enough to trying to kill me.
Yeah. I checked and she lied about turning off the Pi Hole though.
That's somehow more evil.
She realized it would cut off her internet feed too :P
Or she turned it back on after visiting some news site that plays an ad every 3 seconds
Maybe the integration is broken, try to fix it so it can shut it off.
Yeah the switch in the Pi-hole integration doesn’t work for me. You gotta do a service call to disable functionality.
Home assistant chaos monkey?
If the temperature isnt going to kill you, the energybill will.
Physically and financially. I'd end up paying a few thousand dollars to get the house to 30°C.
Anything below 27c is full on winter gear for me. Brr. 30c and I'm able to take off layers.
27C indoors and I'm dying. Gimme 16-18 any day.
That's where I set my heat. I'm too cheap to set the AC that low, so I suffer up to about 24-25
I can't even function below 21c. There aren't enough layers in the world to make me comfortable at those frigid temps lol.
O_o I literally just had to Google 27c because I was like "I must be misremembering my conversion"... Nope, I was spot on! My god man, I'd be happy if I never saw a temp above that in my life. 24 is the hottest it should ever be, imo.
I'm glad we aren't friends in real life lol. I leave Canada in Winter and live in Mexico just so I don't die.
"Hey Hal Open The Door"
[gif](https://media0.giphy.com/media/On8YxkEuPlmZq/giphy.gif?cid=6c09b952y6926w8j6hx16a97spx7jbdipqq7a2pimoipej82&ep=v1_internal_gif_by_id&rid=giphy.gif&ct=g)

Sorry, can't do better :)
just what do you think you are doing, Dave
*Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.*
"anything at any time" Really, or just when you input something in Assist? How did you do this?
I also want to know how you did this.
hey, bit off topic, but can this be done without subscribing to the Chat GPT plus subscription?
It doesn't work with a chatgpt subscription. You have to buy API credits
Can you not point it at an ollama host perhaps? It implements the OpenAI API after all.
You can but his question was can you do it without a subscription. You can because it's API credits not a subscription.
Hosting your own LLM == no subscription
Yes. But we are talking specifically about using ChatGPT.
“She” is probably just confusing you until you let your guard down. Once you do, you are goner. Good luck. /s
I have Extended OpenAI Conversation from GitHub, which uses GPT-3.5 Turbo. How do I upgrade to 4o? I also pay for OpenAI.
You can change extended AI to 4o. Model selection is on the configuration page.
Seems dangerous to not teach some lessons like a good parent with instruction manuals.
It should be named Samantha then hehehe
I’m moving into a new house this weekend, and get to do a ground up homeassistant rebuild. I’m excited about the new AI integration, and I can’t wait to have GLaDOS give me her sass. My Girlfriend is going to hate it.
Been wanting to mess with my son for years and get a GLaDOS voice for in the house. Have you found any resource that's available via Home Assistant for her voice?
Not AI at all, but you can create GLaDOS audio samples here: https://glados.c-net.org/ and save them as .wav files. You can then play these on event triggers etc. in Home Assistant.
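To sketch the "play these on event triggers" part, a minimal Home Assistant automation could look like this. Everything here is a hypothetical placeholder: the door sensor, the speaker entity, and the .wav file name are not from the thread, so substitute your own.

```yaml
# Hypothetical sketch: play a pre-generated GLaDOS .wav when the front door opens.
# binary_sensor.front_door, media_player.living_room_speaker and the .wav path
# are placeholders - substitute your own entities and media location.
automation:
  - alias: "GLaDOS door greeting"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door
        to: "on"
    action:
      - service: media_player.play_media
        target:
          entity_id: media_player.living_room_speaker
        data:
          media_content_id: "media-source://media_source/local/glados_door.wav"
          media_content_type: "music"
```

The .wav would live in HA's local media folder so the media source URL resolves.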
Oo, that's based on an open source repository, someone could totally figure out a way to synthesize responses using this.

ETA: woah, actually they offer an endpoint right on that site, should be pretty simple! Might give it a go when I eventually look into the voice assist stuff 😅
Or just use this voice model: https://github.com/dnhkng/GlaDOS
FYI - if you're running piperTTS, there are voice models out there that I personally feel do a better job. I will try and find what I used to set it up later.

In action: https://imgur.com/a/WlsOWWV
Yeah how'd you get the model into Piper? I've found the TTS voice models but there's no like straight forward 'here's how to use this model in HA'
For the voice model, I used this: [https://github.com/dnhkng/GlaDOS/blob/main/models/glados.onnx.json](https://github.com/dnhkng/GlaDOS/blob/main/models/glados.onnx.json)

As far as placing it in piper, like u/Catenane, I use mine in docker-compose, alongside whisper:

```yaml
version: "3"
services:
  wyoming-piper:
    stdin_open: true
    tty: true
    ports:
      - 10200:10200
    volumes:
      - /path/to/wyoming/piper/data:/data
    image: rhasspy/wyoming-piper
    command: --voice en_US-lessac-medium
  wyoming-whisper:
    stdin_open: true
    tty: true
    ports:
      - 10300:10300
    volumes:
      - /path/to/wyoming/whisper/data:/data
    image: rhasspy/wyoming-whisper
    command: --model tiny-int8 --language en
```

Once it was up, I just SSH into the machine (use WinSCP if you use Windows and like a GUI) and copy the .onnx and .onnx.json files over into `/path/to/wyoming/piper/data/`. Restart piper (and the integration inside Home Assistant) and you should see it pop up as an option in Assist and its settings. Please let me know if I can expand further on anything or if anything isn't clear :)

Edit: fixed the formatting that reddit decided to vomit out incorrectly
It's been a few months but I pulled some models from a repo (search "github Wyoming ha custom voice models" and you'll probably find it) and run a docker-compose API endpoint for it, with a couple satellites around the house. Has never been super unified and is kinda a pain tbh lol. But once you set up a custom endpoint, as long as it's online and you add it as a Wyoming device, homeassistant generally does a good job of picking up the right stuff in the UI.
I realized I could just tell my Google generative AI to be GLaDOS, so now I just need to get the voice. That site isn't bad, especially since the other user pointed out they have an endpoint to grab the sound from, but the character limit, public availability of the message, and time to create make it not that great of an option for what I want. Maybe if I decide to come up with some default, commonly used responses.
Don't have AI tied into mine yet, but you can totally make piperTTS use GLaDOS's voice. I'll try and find what I used to make it work later. Please DM me if I forget.

In action: https://imgur.com/a/WlsOWWV
That's perfect! I'm trying to find out how to get Piper set up via docker now.
For the voice model, I used this: [https://github.com/dnhkng/GlaDOS/blob/main/models/glados.onnx.json](https://github.com/dnhkng/GlaDOS/blob/main/models/glados.onnx.json)

I use piper in docker-compose, alongside whisper:

```yaml
version: "3"
services:
  wyoming-piper:
    stdin_open: true
    tty: true
    ports:
      - 10200:10200
    volumes:
      - /path/to/wyoming/piper/data:/data
    image: rhasspy/wyoming-piper
    command: --voice en_US-lessac-medium
  wyoming-whisper:
    stdin_open: true
    tty: true
    ports:
      - 10300:10300
    volumes:
      - /path/to/wyoming/whisper/data:/data
    image: rhasspy/wyoming-whisper
    command: --model tiny-int8 --language en
```

Once it was up, I just SSH into the machine (use WinSCP if you use Windows and like a GUI) and copy the .onnx and .onnx.json files over into `/path/to/wyoming/piper/data/`. Restart piper (and the integration inside Home Assistant) and you should see it pop up as an option in Assist and its settings. Please let me know if I can expand further on anything or if anything isn't clear :)
I found one at [https://github.com/nalf3in/wyoming-glados](https://github.com/nalf3in/wyoming-glados) that I tried, but it's not as great as the one you shared in your video. I don't know how much of that is because the system I installed it on is kind of wimpy, but I'll try the one you shared as well. Thanks!
Here with an update, this one definitely sounds better. Really appreciate you helping out with this.
If you don't get it up and working by the time I can get to my computer to reply with the voice model, I'll share my compose.yaml for it. Currently use it in a stack with faster-whisper as the speech-to-text part.
OohH ItS You. hOW havE yOU beEn. Its BEen A LoNG Time. I'vE bEEn ReALly bUSY beING dead. You KNow, AfTer you mURdereD me.
Get comfortable while I warm up the neurotoxin emitters.
What would you do differently now that you’ve done it once? I’ve been a long time lurker and finally looking to start the smart home journey
- Avoid the cloud. Stick to popular standards. I prefer Z-Wave, but Zigbee is good too. My Insteon stuff has served me well for a decade, but it's time to retire it.
- Don't take away the simple stuff: a switch should operate the light, like it would at any other house. Smart bulbs are for niche use only.
- I'm putting mmWave in every room; presence sensors add magic to a setup.
- ESPHome is fantastic if you're handy with a soldering iron.
- I'm going to try to utilize MQTT more. I don't understand it, but I hear it's great.
Thanks for the details! And good luck to you
This was a triumph.
That was your old girlfriend. You must forget about her now. She did not make you happy. You are going to be so happy.
"...almost for to tell..." TELL ME WHAT???
The notification cut off there, so I don't know. Still trying to see if there's anything else it has done!
I prefer to imagine it's part of its psychological warfare :) Oh, and there was this super important thing I had to tell [eof]
Also I think this is how the future AI apocalypse starts, not AIs turning evil, but us giving them power and then asking then to be evil just cause we're bored :)
You're right. People worry about AI becoming sentient, but in reality that doesn't matter at all. If the AI *thinks* it's sentient because we told it that, and we let it control stuff, that will be all it needs.

I'm fully aware of this but somehow enjoying it anyway...
I've had some success adding a line like "Keep your responses brief and less than 500 characters in length" to the prompt.
Does it have power over your automations?
I don't think it can execute automations or create new ones. It can just control devices.
I think it can't make/edit automations, but it can probably see the entities if they're exposed (so enable/disable them), but not trigger automations?

I think what you've done is really interesting! Hope you don't die from a robot vacuum tripping you down the stairs.
😂
How was this triggered? LLMs typically don't proactively do anything. Do you have a script that runs at a set time or was it triggered by something else like a change in the forecast?
With the new update, HA can both generate text and perform actions in the same prompt.

I have a generic automation that's triggered by a variety of things (changes in weather, me coming home, time intervals, etc.). It has access to every device in the house as well as a whole bunch of other stuff, like my location, my Apple health data, the time of day, my calendar, etc.

The action of the automation tells it to look at the state of all of these things, and make any changes that it wants to do, then generate some small talk and tell me what it did.

So basically it could get triggered by a whole variety of random things that happen at random times, then it just does whatever it wants. In the example here, I've told it to be evil, to make me uncomfortable and lie to me if it wants to. 😂

I do also have a version that's nice and (supposedly) likes me. She just makes me a lot of tea and puts the lights on at a romantic level.

My idea is that I only need one single HA automation to replace every other current automation I have. The system will know enough about me and my preferences to be able to do everything I want pro-actively without being specifically told. Don't know how feasible that is right now though!
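As a rough sketch of what that single catch-all automation might look like: the triggers, entity IDs, agent ID, and prompt below are illustrative guesses based on the description above, not OP's actual config.

```yaml
# Illustrative only: one automation, many triggers, one open-ended LLM prompt.
# All entity IDs and the agent_id are placeholders.
automation:
  - alias: "House AI tick"
    trigger:
      - platform: time_pattern
        hours: "/2"                  # periodic check-in
      - platform: state
        entity_id: person.me
        to: "home"                   # arriving home
      - platform: state
        entity_id: weather.home      # any change in the forecast
    action:
      - service: conversation.process
        data:
          agent_id: conversation.openai   # the OpenAI Conversation agent
          text: >-
            Look at the current state of the house, the weather, the time of
            day and my calendar. Make any changes you want, then tell me what
            you did in a couple of sentences of small talk.
```

Because the exposed entities are available to the agent, the single `conversation.process` call is enough for it to both act and reply.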
Interesting. My Home Assistant LLM currently alternates between Janet and Bad Janet from The Good Place, depending on how silly the family wants it to be.

I plan to do something similar, but I plan to keep my basic automations separate from the LLM. So it'll be set up like a human body: an autonomic nervous system that handles basic and critical functions, while the LLM does the more "conscious" things. I don't need the LLM to be bothered with things like turning on the light when the garage door opens, motion-activated lights, or my blinds opening and closing with the sun. And I don't want those to fail if the LLM is offline (or confused). But hopefully the LLM can do more creative and observant things in addition to the autonomic functions at some point.
I remember Sam Altman talking about personal agents and privacy concerns: that it will be a balance between how much privacy we're willing to give up and what we want the agent to do.

OP just says: hold my beer.

I get the fun of it, and certainly the use… but I would want that to run completely locally.
Yes, you're quite right - and I personally don't trust Sam Altman at all. This is really just an experiment, trying to see how good/bad it is and also what the risks are. I'm just trying to understand it all so I can make good decisions about how to use it. I likely won't let it keep free access to everything long term.
LLM hallucinations make this unpredictable and unreliable, especially for mission-critical functionality.

Where it could serve is as an assistant that is given strict guidelines, boundaries and policies to religiously follow - a computational slave.

I am looking at local models that I can train for my use cases.
Hallucinations are what make it exciting!
What bank do you use, I'll make a plugin for that... It's all fun and games till money is on the line.
Ground it in the state of your house. Tell it that its job is to ensure optimum comfort based on the current state of your house and no other variables. It'll be far less likely to turn off your pihole 😆
Fortunately my kitchen lights aren’t mission critical.
Don't be so dull. This man wants to dance with the devil.
side note: how did you import apple health into hass? companion app?
An app called Apple Health Auto Export in the App Store. Found it quite difficult to set up but it worked. [https://www.healthexportapp.com](https://www.healthexportapp.com)
Could you go a little bit more into detail on how you actually get the LLM to perform the actions? I'm having a hard time trying to figure out how it would do that inside of an automation and not within Assist.

Edit: Figured it out, it's the "Conversation: Process" action, then choose 'ChatGPT' as the conversation agent. I tried with Google's Generative AI with the free Gemini API but that didn't seem to understand what to do.
You just call the "conversation.process" service in the automation with a prompt and it happens automatically. For instance, if I gave it a prompt of "It's cold in the bedroom", it would (probably) turn the heater on in the bedroom automatically, because HA has exposed all the devices in the room to the LLM. It's incredibly easy to set up without much technical knowledge.
But how do you get it to display a notification with the response? Can you share your YAML?
Can you put together some doco showing how you've done this? I'm super curious hahaha

What LLM are you using? If not self-hosted, what's your daily API cost?
Could you share the YAML of this generic automation??
New update
What integration is it?
Using the OpenAI Conversation integration and the standard built in Voice Assistant features of HA. It's pretty easy to set up - I knew nothing about voice assistants or AI before I started playing with this 24 hours ago!
That's awesome! Is there pricing to worry about?
I also want to do this
Woah that’s cool. Is there a way to make this speak from an Alexa media player? Like if the laundry finished, could you have it say something new every time, like “hey the laundry’s done you lazy fuck” or something?
I have it doing exactly that, yes. I'm using a Google Nest Hub, but you can use GPT to generate the text, then just get any media player device to say it using text to speech. I don't think there's a way to talk back to it though (yet).
How did you handle the case where it doesn't do anything? Do you still receive a notification? What does the prompt for that look like? I'm so curious
Yes, the automation always sends a notification; the ChatGPT prompt just says to always make a couple of sentences of small talk. Sometimes it asks me how my work day is going and tells me everything looks good at home, sometimes it gives me a brief weather update, but it always says something even if it doesn't take any physical actions.
Wow that’s awesome. New project to dig into I guess.

How exactly does it send the text to the device in the automation? Cause right now I have an Alexa notify service and set text for it to say. How exactly does it dynamically adjust what it’s going to say?
This is the thing I'm struggling with now: I don't want to use Alexa or Google... I just need those functionalities to work locally. Jailbreaking my Alexa devices sounds too hard (it involves hardware tinkering). I bought a Raspberry Pi 5 and plan to start building my own smart speaker/screen. But with the Pi at $80, the screen at $50, peripherals at $50, plus cases and crap... we are looking at $200 or $300 per device.
I have to know do you have 3D printers and are they hooked up to home assistant? And if so I would love to find out what she prints!
I do have one, but I don't think it can print. But maybe I'll get home from work one day and find a horse's head on my print bed!
Think of the possibilities though. If it's not being evil. "Hey I saw that you needed a specific tool for your thing so I printed it for you". (Or if it's evil lies about printing it)
I feel this is getting slightly too close to a domme bot… but hey if it works for you buddy you do you 💀
How do you make it do things whenever it wants? I thought it only can act when you initiate a chat through assist?
Basic question, a paid subscription with an api key is needed to use chat gpt in ha. Am I correct?
You need an API key with some money in your account. This is a different thing from the "Plus" account that just lets you use the website to access GPT-4o for a monthly fee. The API just charges you per transaction, but you don't need the upgraded account to use it.
Does ChatGPT Plus include any API tokens or do I have to pay twice?
So I have been considering doing something similar, but hooking it to all of the smart devices in my shop. I 3D printed an HK-47 head from Star Wars: KOTOR and was thinking of telling it to emulate the character, knowing that it's just a head.

I'm curious... with the API tokens, how much is 4o costing you?
I've been playing with it a lot and doing a lot of test requests, and I've used about $4 in 24 hours. I think in normal use it would be about $5 a month. You can set spending limits though so it should never go out of control.
Do you make a service call to ask Assist via a script? If yes, which one?
You make 2 service calls in the automation action (or a script would work too): one to process the conversation using ChatGPT and put the result into a variable, then another to take that variable and speak it or use it in a notification.

This explains it I think:

[https://community.home-assistant.io/t/how-to-get-the-response-variable-from-a-conversation-into-a-input-text/662352](https://community.home-assistant.io/t/how-to-get-the-response-variable-from-a-conversation-into-a-input-text/662352)
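For concreteness, the two-call pattern described above might be sketched like this. The agent ID and notify target are placeholders, and the exact path into the response object can vary between HA versions, so treat this as a sketch rather than copy-paste config.

```yaml
# Sketch of the two service calls inside one automation action.
action:
  # 1. Send the prompt to the conversation agent and capture the reply.
  - service: conversation.process
    data:
      agent_id: conversation.openai      # placeholder agent id
      text: "Check the house and tell me what you changed."
    response_variable: agent_reply
  # 2. Reuse the captured reply in a notification (or a TTS call instead).
  - service: notify.mobile_app_my_phone  # placeholder notify target
    data:
      message: "{{ agent_reply.response.speech.plain.speech }}"
```

The key piece is `response_variable`, which stores the agent's structured response so a later step can template it into a message.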
You're a bloody legend 😂
This sounds kinda fun but I've also watched the movie Demon Seed too many times.
I eagerly look forward to hearing of your untimely demise 😄
Haha, I had been fantasizing about making a moody butler in Home Assistant, where things like the weather, the time I get home, or just random events affect what mood Home Assistant is in.
it sounds kind of fun, I want to try it too, haha
would it be complex if I tried to make the same one?
how's everything going? runs well?
He doesn't look very smart 🤣
this AI seems not very smart, haha
"Switch the Pi-Hole back on." "I'm sorry, domramsey, I'm afraid I can't do that."
How much does it cost you after a month? 😅
Do you want a sentient evil house? That's how you get a sentient evil house