DeepSeek & AI Investments
Last Monday, DeepSeek, a Chinese AI research laboratory spun out of a quant trading firm, released a preview of its latest language model, DeepSeek-R1. The model demonstrates reasoning capabilities comparable to leading commercial models, but with a crucial distinction: it's open source.
While you can chat with it online at chat.deepseek.com, it's also available to download and run locally.
The model's development is particularly noteworthy given recent U.S. export restrictions on high-performance GPUs to China. According to The Economist, DeepSeek achieved its capabilities while using approximately one-tenth of the computational resources required for Meta's Llama 3.1, demonstrating remarkable efficiency in model training.
At the time of writing, tech stocks fuelled by the AI boom have retreated around 11% over concerns about whether current forecasts for demand for high-performance compute and networking equipment will be sustained.
Running DeepSeek on a Mac
This was a lot easier than I expected, even on a borderline antique iMac from 2017.
- Download Ollama, then execute `ollama run deepseek-r1` in a terminal. After a roughly 4GB download, the model will be ready to chat with in the Terminal.
- For a more polished experience on the desktop, download the free Chatbox AI, select Ollama as the Model Provider, and then deepseek-r1 as the running model.
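Beyond the chat interfaces, Ollama also exposes a local REST API (by default at `http://localhost:11434`), so you can script against the model. Here's a minimal Python sketch, assuming Ollama is running with the `deepseek-r1` model pulled; the `ask` helper is my own naming, not part of Ollama:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "deepseek-r1") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response, not a token stream
    }

def ask(prompt: str) -> str:
    """Send a prompt to the locally running model and return its reply."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires `ollama run deepseek-r1` to be active):
# print(ask("How many Rs in Berryman?"))
```

Setting `"stream": False` keeps the example simple; by default the endpoint streams the response token by token as newline-delimited JSON.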

Performance
Once configured, you're ready to kick the tyres. I've adapted the common Strawberry test to ask "How many Rs in Berryman?" and, after a bit of thinking, it announced "The letter 'R' appears **twice** in the name 'Berryman.'" 👏
As this is a reasoning model, it also shows the thought process behind the response. I've included the lengthy, but logical, assessment of how it responded to this tricky question.
Alright, so I need to figure out how many times the letter 'R' appears in
the name "Berryman." Okay, let's break this down step by step.
First, I should probably write out the name and look at each letter one by
one. The name is "Berryman," right? So let me spell it out: B, E, R, R, Y,
M, A, N.
Now, I need to count how many times the letter 'R' appears in this
sequence. Let's go through each position:
1. **B** - Not an 'R'.
2. **E** - Not an 'R'.
3. **R** - That's one 'R'.
4. **R** - That's two 'R's.
5. **Y** - Not an 'R'.
6. **M** - Not an 'R'.
7. **A** - Not an 'R'.
8. **N** - Not an 'R'.
So, after going through each letter in "Berryman," it looks like the
letter 'R' appears twice.
Wait a minute—did I miss any? Let me double-check to make sure I didn't
overlook anything:
- B: No
- E: No
- R: Yes (1)
- R: Yes (2)
- Y: No
- M: No
- A: No
- N: No
Nope, it does seem like 'R' only appears twice in "Berryman." I think
that's the correct answer.
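For what it's worth, the check the model reasons through above is trivial to verify in a couple of lines of Python (the `count_letter` helper is just for illustration):

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter in a word, ignoring case."""
    return word.lower().count(letter.lower())

print(count_letter("Berryman", "r"))    # 2
print(count_letter("strawberry", "r"))  # 3
```

The point of the test, of course, is that tokenisation makes this surprisingly hard for language models, which see chunks of words rather than individual letters.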
Conclusion
From a few days' experimentation, this model doesn't appear as capable for my use cases as Claude 3.5 Sonnet (currently my provider of choice) but, given the creativity required to produce something this capable in such a constrained environment, the team behind DeepSeek are to be applauded.