app.loshadki

Hey DeepSeek, can you reduce junk in my inbox?

    • mail
    • email
    • deepseek
    • macapps
    • openai
  • reading: 3 minutes

TL;DR: I built a macOS Mail.app extension that can use the DeepSeek R1 model (or any other LLM) to filter emails and mark them as junk. It is a freemium application that you can download here; I am currently offering a 100% discount to the first 1000 users.

In the last year or so, I have noticed a significant increase in the number of cold sales emails I receive. Officially they are not spam, but they are annoying. My guess is that I receive so many of them because I co-founded a few LLCs in the last few years. I am not sure how they get my email address, but I am sure that I don’t want to see them in my inbox. All of those “hire engineers”, “try our product”, “use our payroll service” and so on.

I am sure there are some SaaS services that help send those emails. Built with AI. So I started to think of an idea: what if I use AI to reduce the junk in my inbox? Especially with the DeepSeek R1 model that people talk about so much everywhere.

Quick research showed me that I can build a Mail.app extension that filters emails and marks them as junk. It took me a few days to build a prototype, and I am pretty happy with where it is. And I am happy to share it with you.

Set up DeepSeek R1 locally

You might be okay with sharing your emails (or at least their headers) with the OpenAI API (or a similar service), but some of you might want to run the model locally. I am in the second group.

I use LM Studio to run large models on my Mac. You can install it using Homebrew:

brew install --cask lm-studio

After that, download a model. I used DeepSeek-R1-Distill-Llama-8B-GGUF.

Download Model

In the Developer menu (second from the top on the left), load the model you just downloaded. In the Load tab on the right you can raise the Context Length to a higher number of tokens, depending on how much memory you have; I doubled it. And make sure to start the server with the Status toggle in the top left corner.
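Once the server is running, you can sanity-check it from outside LM Studio. A minimal sketch, assuming the default LM Studio port 1234 and its OpenAI-compatible `/v1/models` endpoint:

```python
import json
import urllib.request

# LM Studio's default local server address (change if you picked another port).
BASE_URL = "http://localhost:1234/v1"

def models_endpoint(base_url: str = BASE_URL) -> str:
    """Build the URL of the OpenAI-compatible model-listing endpoint."""
    return base_url.rstrip("/") + "/models"

def list_models(base_url: str = BASE_URL) -> list[str]:
    """Ask the local server which models are currently available."""
    with urllib.request.urlopen(models_endpoint(base_url), timeout=5) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]

if __name__ == "__main__":
    # Should include the model you loaded, e.g. deepseek-r1-distill-llama-8b.
    print(list_models())
```

If this prints an empty list, the server is up but no model is loaded yet.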

Load Model

Configure SmartInbox

Download SmartInbox, unzip it, and move it to the Applications folder. Open it, and you will see the configuration dialog.

The app has a free unlimited trial version that just does not allow you to override the prompt or send the full email to the model. Currently, you can get a 100% discount on the app license using the code MAILAPPUSER, which will work for the first 1000 purchases and until the end of February 2025. To get that license, go to the Loshadki Store.

In the configuration dialog, we will point to our local API:

  • URL: http://localhost:1234/v1/chat/completions
  • Model: deepseek-r1-distill-llama-8b
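Under the hood, this endpoint speaks the standard OpenAI chat-completions protocol, so you can experiment with junk classification yourself before trusting the extension. A sketch of what such a request looks like against the URL and model above; the prompt here is hypothetical (SmartInbox's actual prompt is not shown), and `classify` is a name I made up for illustration:

```python
import json
import urllib.request

URL = "http://localhost:1234/v1/chat/completions"
MODEL = "deepseek-r1-distill-llama-8b"

# Hypothetical classification prompt -- not the one SmartInbox ships with.
SYSTEM_PROMPT = (
    "You classify emails. Reply with exactly one word: "
    "JUNK for cold sales or marketing email, OK for everything else."
)

def build_payload(subject: str, sender: str) -> dict:
    """Build an OpenAI-compatible chat-completions request body."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"From: {sender}\nSubject: {subject}"},
        ],
        "temperature": 0,
    }

def classify(subject: str, sender: str) -> str:
    """Send the email headers to the local server and return its verdict."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(build_payload(subject, sender)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(classify("Hire top engineers today!", "sales@example.com"))
```

One caveat: R1-style reasoning models often prepend their chain of thought in `<think>…</think>` tags, so a real implementation would strip that before reading the verdict.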

SmartInbox

Now open Mail.app and enable the SmartInbox extension in the Settings.

Mail.app Extensions

Time to test it

The first time SmartInbox tries to connect to the local API, macOS will prompt you to allow local network connections. You need to allow it.

Now you can test it. Try sending yourself an email, or just wait for the next email to arrive in your inbox. You can open the SmartInbox application to check the history of processed emails.

SmartInbox

Pretty neat! I am happy with the results. I am going to keep working on the prompt, but I am happy that it already works.

A few more notes

  • You can run the local LLM service headless (without the LM Studio UI running); just check the “Local LLM Service (headless)” option in the LM Studio settings.
  • You can try other models, the OpenAI API, or any other OpenAI-compatible API.
  • It takes about 10 seconds to analyze an email.

Download SmartInbox and give it a try. I am happy to hear your feedback. Feel free to email me at support@loshadki.app.

Thanks, Denis