• 0 Posts
  • 7 Comments
Joined 7 months ago
Cake day: June 3rd, 2024


  • I scrolled past, saw the text, and the reward circuits in my brain went “wait… that’s spacetime’s font!”

    Crazy that this show has been going for 9 years. The video game tie-ins are fun, but the show itself is just so much better now.

    Watching the “does the universe create itself” video shortly after playing Outer Wilds just… broke me.

    Literally mid-watch right now on the latest quantum gravity episode.

    Cannot recommend this series enough if you have any interest in science or physics.



  • Yeah, I feel that 100%. I ran a Google Assistant for a little while before getting creeped out by the privacy concerns and sick of it constantly trying to sell me things. Unfortunately, I think any service reliant on a third party is ultimately going to be a huge privacy invasion, since they can’t turn a profit without vacuuming up your data.

    Of all the mainstream assistants, Apple seems to be the least bad in that regard, so you could consider picking up a HomePod. But I would also say that for basic stuff, Home Assistant has been fairly painless to set up. The GUI is good enough now that no YAML coding is required unless you get into the more complex stuff, and I found the out-of-the-box functions to be “good enough” for what I wanted a voice assistant to do.


  • Home Assistant has a built-in voice assistant feature that can be as simple or as robust as you need it to be. The whole thing can be set up fully locally, and mine runs easily on an old micro PC I got for $100. I originally had it running on a Pi 3B, but the speech-to-text and text-to-speech would take 10+ seconds to process, which was too long.

    Out of the box it controls local devices, manages to-do lists, controls media, and sets timers. Setting reminders doesn’t work out of the box, but it can be set up with some great community templates. Services that require web content, like “tell me the news” or “what’s the weather in Seattle”, need to either be set up as custom commands with access to the info you want (a rough sketch of that is just below), or go through an LLM.
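    A custom command is just a couple of lines of YAML in configuration.yaml: a sentence that triggers an intent, plus an intent_script that answers it. This is only a sketch, and the sensor entity name is made up; you’d point it at whatever weather integration you actually have.

    ```yaml
    # configuration.yaml — minimal custom voice command sketch.
    # The sentence below fires the SeattleWeather intent when spoken to Assist.
    conversation:
      intents:
        SeattleWeather:
          - "what's the weather in Seattle"
          - "how is the weather in Seattle"

    # The intent_script builds the spoken reply from a sensor.
    # sensor.seattle_temperature is a hypothetical entity; swap in your own.
    intent_script:
      SeattleWeather:
        speech:
          text: "It's {{ states('sensor.seattle_temperature') }} degrees in Seattle right now."
    ```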

    Luckily, over the past few months the Open Home Foundation has added LLM integrations. Both local and web-based models (ChatGPT, Gemini, etc.) are possible, so you can have it run queries through a model on a local GPU. This is still fairly bleeding edge, though, and I haven’t tried running a local LLM myself yet, so I can’t speak to its complexity. There’s a rough sketch of pointing a query at a specific agent after the link below.

    More on that here: https://www.home-assistant.io/blog/2024/06/07/ai-agents-for-the-smart-home/
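
    Once an LLM is added as a conversation agent, you can also call it directly from an automation with the conversation.process service. This is just a sketch: the agent_id here is hypothetical, and yours shows up under Settings → Voice assistants once the integration is configured.

    ```yaml
    # Automation action sketch: send a free-form query to a specific
    # conversation agent (e.g. a local model added via the Ollama integration).
    service: conversation.process
    data:
      text: "Summarize today's weather for Seattle"
      agent_id: conversation.local_llama  # hypothetical; use your own agent's id
    ```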