
Local AI Processing on Mac [Beta]

Local AI Processing runs Hedy’s AI analysis entirely on your Mac. Your transcripts stay on-device, and it works even when you’re offline.

Local AI Processing is a beta feature available on Macs with Apple Silicon (M1 or later). It replaces Hedy’s cloud-based AI analysis with a local language model that runs directly on your Mac.

What Local AI Processing Does

  • Generates session summaries, detailed notes, and chat responses on your Mac.

  • Keeps your transcripts on-device. No conversation data leaves your computer for AI analysis.

  • Works offline once the model is downloaded.

  • Coexists with cloud-based speech recognition (Deepgram, OpenAI) if you use those. Only the AI analysis step is local.

Requirements

  • A Mac with Apple Silicon (M1, M2, M3, M4, or later).

  • Enough free memory for the model you pick. Hedy shows a “Great fit”, “Tight fit”, or “Won’t fit” indicator for each model based on your Mac’s available RAM.

  • An initial model download. Model size ranges from roughly 2.5 GB to 30 GB.

How to Enable Local AI Processing

  1. Open Hedy and go to Settings → Speech & AI.

  2. Scroll to the Local AI section.

  3. Turn on Local AI Processing.

  4. Pick a model from the list that fits your Mac’s memory. Look for the “Great fit” label.

  5. Wait for the model to finish downloading. You’ll see progress and size in GB.

  6. Once downloaded, Local AI Processing is active. Start a session as usual.

Picking a Model

Hedy shows several models, sorted by size. Each has two key numbers:

  • Download size: how much disk space the model takes.

  • RAM required: how much memory the model needs to run.

Hedy automatically checks your free memory and flags each model:

  • Great fit: recommended. Plenty of headroom.

  • Tight fit: will work, but may be slow or unstable if you run many other apps.

  • Won’t fit: don’t pick this one.

If you’re unsure, start with the “Recommended” model. You can switch later.

Switching Between Local and Cloud AI

  • Turn Local AI Processing off to return to cloud-based analysis.

  • You don’t lose any sessions when switching. Existing sessions keep their current notes and summaries.

Privacy

When Local AI Processing is on:

  • Your session transcripts never leave your Mac for AI analysis.

  • Model downloads come from Hedy’s servers but don’t contain any of your data.

  • Speech recognition is a separate step. If you’re using a cloud provider like Deepgram or OpenAI for transcription, your audio still flows through that provider. To keep both steps local, pair Local AI Processing with local speech recognition, such as Parakeet or on-device Whisper. See our Speech Recognition Providers guide.

Troubleshooting

The model won’t download

  • Check your internet connection and available disk space.

  • Restart Hedy and try again.

  • Some models are several GB. Downloads can take a while on slower connections.
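
If you’d like to confirm free disk space from Terminal before retrying a download, the standard `df` command works on any Mac (the exact free space a model needs depends on which one you picked):

```shell
# Show free space on your startup disk in human-readable units.
# Compare the "Avail" column against the model's download size
# (roughly 2.5 GB to 30 GB, depending on the model).
df -h /
```

If the available space is smaller than the model’s listed download size, free up disk space or choose a smaller model before trying again.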

Responses are slow

  • Check the model’s RAM requirement versus your free memory. A “Tight fit” model competing with other apps can run slowly.

  • Close memory-heavy apps you aren’t using (extra browser tabs, background apps).

  • Try a smaller model.

AI features say “not available”

  • Confirm the model finished downloading (check the Local AI section in settings).

  • Toggle Local AI Processing off and on again.

Beta Notes

Local AI Processing is currently in beta. A few things to keep in mind:

  • Analysis quality will improve over time as we tune prompts and ship better models.

  • Speed depends heavily on your Mac’s hardware.

  • We’re actively collecting feedback. Email support@hedy.ai if you hit issues.

Related: Cloud AI Analysis Privacy Control, Speech Recognition Providers.