
Cerebras Becomes the World's Fastest Host for DeepSeek R1, Outpacing Nvidia GPUs by 57x
Cerebras Systems announced today it will host DeepSeek's breakthrough R1 artificial intelligence model on U.S. servers, promising speeds up to 57 times faster than GPU-based solutions while keeping sensitive data within American borders. The move comes amid growing concerns about China's rapid AI advancement and data privacy.
The AI chip startup will deploy a 70-billion-parameter version of DeepSeek-R1 running on its proprietary wafer-scale hardware, delivering 1,600 tokens per second, a dramatic improvement over conventional GPU implementations that have struggled with newer "reasoning" AI models.
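To put those figures in perspective, here is a back-of-the-envelope calculation using only the numbers quoted above (1,600 tokens per second and the claimed 57x speedup). The implied GPU baseline and the 4,000-token reasoning chain are illustrative assumptions, not independent benchmarks.

# Rough comparison based on the article's quoted figures.
CEREBRAS_TOKENS_PER_SEC = 1600
CLAIMED_SPEEDUP = 57
GPU_TOKENS_PER_SEC = CEREBRAS_TOKENS_PER_SEC / CLAIMED_SPEEDUP  # ~28 tokens/s, implied baseline

# Reasoning models often emit long chains of thought before answering.
REASONING_CHAIN_TOKENS = 4000  # hypothetical length of one multi-step answer

cerebras_seconds = REASONING_CHAIN_TOKENS / CEREBRAS_TOKENS_PER_SEC
gpu_seconds = REASONING_CHAIN_TOKENS / GPU_TOKENS_PER_SEC

print(f"Implied GPU baseline: {GPU_TOKENS_PER_SEC:.0f} tokens/s")
print(f"4,000-token reasoning chain: {cerebras_seconds:.1f}s on Cerebras "
      f"vs {gpu_seconds:.0f}s on the implied GPU baseline")

Under these assumptions a long reasoning response drops from roughly two and a half minutes to about two and a half seconds, which is the kind of latency gap the company is pointing to for interactive, multi-step workloads.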
Why DeepSeek's reasoning models are reshaping enterprise AI
"These reasoning models impact the economy," said James Wang, a senior executive at Cerebras, in an exclusive interview with VentureBeat. "Any knowledge worker basically has to do some kind of multi-step cognitive tasks. And these reasoning models will be the tools that enter their workflow."
The announcement follows a tumultuous week in which DeepSeek's emergence triggered Nvidia's largest-ever market value loss, nearly $600 billion, raising questions about the chip giant's AI dominance. Cerebras' solution directly addresses two key concerns that have emerged: the computational demands of advanced AI models, and data sovereignty.
"If you use DeepSeek's API, which is very popular right now, that data gets sent straight to China," Wang explained. "That is one severe caveat that [makes] many U.S. companies and enterprises … not willing to consider [it]."
How Cerebras' wafer-scale technology beats traditional GPUs at AI speed
Cerebras achieves its speed advantage through a novel chip architecture that keeps entire AI models on a single wafer-sized processor, eliminating the memory bottlenecks that plague GPU-based systems. The company claims its implementation of DeepSeek-R1 matches or exceeds the performance of OpenAI's proprietary models, while running entirely on U.S. soil.
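A rough bandwidth argument shows why keeping weights on the wafer matters. The figures below assume a dense 70B-parameter model with 16-bit weights, batch size 1, and every weight read once per generated token; none of these assumptions come from the article, and real deployments vary (quantization, batching, multi-device sharding).

# Why 1,600 tokens/s is hard to reach when weights live in off-chip memory.
PARAMS = 70e9
BYTES_PER_PARAM = 2                 # assumed FP16/BF16 weights
TOKENS_PER_SEC = 1600               # figure quoted in the article

weight_bytes = PARAMS * BYTES_PER_PARAM              # ~140 GB of weights
required_bw = weight_bytes * TOKENS_PER_SEC          # weight traffic per second

print(f"Weights: {weight_bytes / 1e9:.0f} GB")
print(f"Weight bandwidth needed at {TOKENS_PER_SEC} tokens/s: "
      f"{required_bw / 1e12:.0f} TB/s")
# A single GPU's HBM delivers a few TB/s, so sustaining hundreds of TB/s of
# weight reads only pencils out when the model sits in on-chip memory or is
# spread across many devices.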
The development represents a significant shift in the AI landscape. DeepSeek, founded by former hedge fund executive Liang Wenfeng, stunned the industry by achieving sophisticated AI reasoning capabilities reportedly at just 1% of the cost of U.S. competitors. Cerebras' hosting solution now offers American companies a way to leverage these advances while maintaining data control.
"It's actually a nice story that the U.S. research labs gave this gift to the world. The Chinese took it and improved it, but it has limitations because it runs in China, has some censorship problems, and now we're taking it back and running it on U.S. data centers, without censorship, without data retention," Wang said.
U.S. tech leadership faces new questions as AI innovation goes global
The service will be available through a developer preview starting today. While it will be initially free, Cerebras plans to implement API access controls due to strong early demand.
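The article does not document the preview's interface, so the following is only a hypothetical sketch of what client code might look like if the service follows the widely used OpenAI-compatible chat-completions convention. The base URL, model identifier, and environment variable are assumptions, not published details.

# Hypothetical client sketch (pip install openai); endpoint and model name are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.example/v1",   # placeholder endpoint, not the real URL
    api_key=os.environ["CEREBRAS_API_KEY"],       # hypothetical credential variable
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",        # assumed identifier for the 70B variant
    messages=[{"role": "user", "content": "Walk me through 17 * 24 step by step."}],
)
print(response.choices[0].message.content)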
The move comes as U.S. lawmakers grapple with the implications of DeepSeek's rise, which has exposed potential limitations in American trade restrictions designed to maintain technological advantages over China. The ability of Chinese companies to achieve breakthrough AI capabilities despite chip export controls has prompted calls for new regulatory approaches.
Industry analysts suggest this development could accelerate the shift away from GPU-dependent AI infrastructure. "Nvidia is no longer the leader in inference performance," Wang noted, pointing to benchmarks showing superior performance from various specialized AI chips. "These other AI chip companies are really faster than GPUs for running these latest models."
The impact extends beyond technical metrics. As AI models increasingly incorporate sophisticated reasoning capabilities, their computational demands have skyrocketed. Cerebras argues its architecture is better suited for these workloads, potentially reshaping the competitive landscape in enterprise AI deployment.