
DeepSeek’s new model cuts API costs in half

DeepSeek claims that for long-context tasks, its method can cut API costs by half.

[Image: Laptop displaying the DeepSeek app homepage. Credit: KnowTechie]


DeepSeek, the low-key Chinese AI lab that loves surprising the industry, just popped back into the spotlight with a brand-new experiment. 

On Monday, the company quietly dropped an experimental model called V3.2-exp on Hugging Face, along with an academic paper on GitHub.

The pitch? A clever new way to slash the price of running big AI models when they’re dealing with really long conversations or documents.

The magic trick is something DeepSeek calls Sparse Attention, which is a neat bit of engineering. 

Instead of brute-forcing its way through every single word in a giant text window, the system first deploys a “lightning indexer” to cherry-pick the most important chunks. 

Then, a “fine-grained token selection system” zooms in further, keeping only the individual tokens that matter most from those chunks.

The result: the model pays attention where it matters and ignores the fluff, like an over-caffeinated editor skimming a 600-page novel for plot twists.
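To make the two-stage idea concrete, here is a minimal toy sketch of coarse-then-fine token selection. This is a hypothetical illustration, not DeepSeek's actual implementation: the function name, the chunk-averaging heuristic, and all parameters are invented for demonstration.

```python
import numpy as np

def sparse_attention_select(query, keys, chunk_size=4, top_chunks=2, top_tokens=4):
    """Toy two-stage selection: cheap chunk scoring, then fine token picking.

    Illustrative sketch only -- not DeepSeek's Sparse Attention.
    """
    n = keys.shape[0]
    n_chunks = n // chunk_size

    # Stage 1 ("lightning indexer" analogue): score each chunk cheaply
    # by dotting the query with the chunk's mean key vector.
    chunk_scores = np.array([
        keys[i * chunk_size:(i + 1) * chunk_size].mean(axis=0) @ query
        for i in range(n_chunks)
    ])
    best_chunks = np.argsort(chunk_scores)[-top_chunks:]

    # Stage 2 ("fine-grained token selection" analogue): within the
    # surviving chunks, keep only the highest-scoring individual tokens.
    candidate_idx = np.concatenate([
        np.arange(c * chunk_size, (c + 1) * chunk_size) for c in best_chunks
    ])
    token_scores = keys[candidate_idx] @ query
    keep = candidate_idx[np.argsort(token_scores)[-top_tokens:]]
    return np.sort(keep)

rng = np.random.default_rng(0)
keys = rng.normal(size=(16, 8))   # 16 tokens, 8-dim key vectors
query = rng.normal(size=8)
selected = sparse_attention_select(query, keys)
print(selected)  # the handful of token indices full attention would actually run on
```

Attention is then computed only over the returned indices instead of all 16 tokens; the savings grow as the context gets longer, since full attention scales with the square of the token count.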

Why does this matter? Running AI models, what researchers call inference, is expensive. It’s not the training that kills your wallet; it’s the daily grind of serving billions of user queries. 

DeepSeek claims that for long-context tasks, its method can cut API costs by half. The model’s weights are open and free, so third-party tinkerers on Hugging Face can start poking holes in those numbers almost immediately.

This isn’t DeepSeek’s first rodeo. Earlier this year, the company grabbed headlines with R1, a reinforcement-learning model that promised a cheaper path to cutting-edge AI. 

R1 didn’t exactly spark the revolution some predicted, and DeepSeek slipped back into stealth mode until now.

V3.2-exp probably won’t break the internet the way ChatGPT did, but its thrifty new attention system could nudge the entire industry toward leaner, meaner AI. 

In a world where every extra token costs money, that’s a story worth paying attention to, even sparsely.

Does DeepSeek’s sparse attention method prove that innovative AI engineering can compete with massive computational budgets, or will this efficiency gain be quickly matched by well-funded competitors? Should the AI industry prioritize making models more cost-effective and accessible like DeepSeek’s approach, or does the race for raw performance justify the current expensive infrastructure arms race? Tell us below in the comments, or reach us via our Twitter or Facebook.


Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym setting a new PR.
