It would be great if we could point Hedy at a local LLM downloaded from Hugging Face. Being able to use a 70B-parameter model would be especially nice, and that is very doable on Macs with 128 GB of RAM or more.
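For illustration, here is a minimal sketch of how a client could talk to a locally hosted model through an OpenAI-compatible endpoint, such as the one exposed by llama.cpp's `llama-server` or Ollama. The host, port, and model name below are assumptions for the sketch, not anything Hedy currently supports or ships.

```python
import json
import urllib.request

# Hypothetical local endpoint; llama.cpp's `llama-server` and Ollama both
# expose an OpenAI-compatible /v1/chat/completions route along these lines.
BASE_URL = "http://localhost:8080/v1/chat/completions"
MODEL = "llama-3.1-70b-instruct"  # assumed name of a locally downloaded model

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completion request against the assumed local server."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Explain the print command in Hedy level 1.")
# No network call is made here; actually sending the request would need the
# local server to be running:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```

The point of routing through an OpenAI-compatible API is that Hedy would only need a configurable base URL and model name, rather than model-specific loading code, to support any locally downloaded model.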