AI Startup SambaNova Rolls Out Latest Chip to Upgrade Its Processor and Software

SambaNova Systems, a company specializing in AI hardware and software systems, has revealed its latest AI chip, the SN40L, which is set to enhance its SambaNova Suite, a comprehensive large language model (LLM) platform.

Introduced just months ago in March, the SambaNova Suite distinguishes itself by combining processors and system software for both AI training and inference. It aims to provide a cost-effective alternative to power-hungry GPUs.

SambaNova has moved swiftly to upgrade its hardware, delivering a significant leap in performance. The SN40L, its latest offering, increases capacity to handle a whopping 5 trillion parameters in an LLM and supports sequence lengths of more than 256,000 tokens on a single system node, according to the company.

This remarkable performance boost is attributed to the SN40L's 51 billion transistors per processing unit, for a total of 102 billion per package, a substantial increase over the previous SN30 model.

The SN40L incorporates 64 GB of high-bandwidth memory (HBM), a new addition to SambaNova's product line, which offers more than three times the memory bandwidth for faster data processing. It also features 768 GB of DDR5 memory per processing unit, totalling 1.5 TB per package, up from the SN30's 512 GB (1.0 TB).
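To put those figures in context, a rough back-of-envelope calculation shows how quickly LLM weight storage outgrows a typical accelerator's memory. The byte-per-parameter counts below are generic assumptions about common numeric precisions, not SambaNova specifications:

```python
# Back-of-envelope weight-memory footprint for a 5-trillion-parameter LLM.
# Illustrative only: the parameter count comes from the article; the
# precisions and byte sizes are generic assumptions, not SambaNova specs.

PARAMS = 5_000_000_000_000  # 5 trillion parameters

BYTES_PER_PARAM = {
    "fp32": 4.0,
    "fp16/bf16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

for precision, nbytes in BYTES_PER_PARAM.items():
    terabytes = PARAMS * nbytes / 1e12
    print(f"{precision:>9}: {terabytes:5.1f} TB of weights")

# fp32: 20.0 TB, fp16: 10.0 TB, int8: 5.0 TB, int4: 2.5 TB. Even heavily
# quantized, a model this large dwarfs single-accelerator memory, which is
# why terabyte-scale DDR5 capacity per package is the headline figure here.
```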

SambaNova's processor sets itself apart from Nvidia's GPUs by providing a reconfigurable dataflow unit (RDU)-based environment.

On the software front, SambaNova introduces a turnkey solution for generative AI. Its full AI stack comprises pre-trained open-source models such as Meta's Llama 2, which organizations can fine-tune with their own content to create internal LLMs. The package also includes SambaFlow software, designed to automatically analyze and optimize processing for specific task requirements.
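SambaFlow itself is proprietary, but the customization workflow the company describes, starting from an open-source checkpoint and continuing training on in-house text, can be sketched generically with Hugging Face transformers. The checkpoint name, data path, and hyperparameters below are illustrative assumptions, not SambaNova's pipeline:

```python
# A minimal sketch of adapting an open-source Llama 2 checkpoint to internal
# text using Hugging Face transformers. This is a generic illustration, not
# SambaFlow: the checkpoint name, data path, and hyperparameters are assumed.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; license acceptance required

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Hypothetical internal corpus: one document per line of plain text.
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-internal",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```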

Dan Olds, Chief Research Officer at Intersect360 Research, regards this upgrade as a substantial advancement in both hardware and software. Notably, the SN40L's 5-trillion-parameter capability is nearly three times the estimated 1.7 trillion parameters of GPT-4.

“The larger memory, plus the addition of HBM, are key factors in driving the performance of this new processor. With larger memory spaces, customers can get more of their models into main memory, which means much faster processing. Adding HBM to the architecture allows the system to move data between main memory and the cache-like HBM in much larger chunks, which also speeds processing,” said Olds.
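Olds's point about bandwidth has a simple quantitative core: when token generation is memory-bound, each new token requires streaming the active weights past the compute units once, so throughput is capped at roughly memory bandwidth divided by model size. The sketch below illustrates that principle with hypothetical model-size and bandwidth figures, not SambaNova measurements:

```python
# Rough ceiling for bandwidth-bound token generation: throughput is capped
# at roughly memory_bandwidth / model_bytes. The model size and bandwidth
# numbers are hypothetical illustrations, not SambaNova measurements.

def decode_ceiling(model_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on tokens/second when generation is memory-bound."""
    return bandwidth_gb_s / model_gb

MODEL_GB = 140.0  # e.g. a 70B-parameter model held in fp16 (assumption)

for tier, bandwidth in [("DDR5 alone", 300.0), ("DDR5 + HBM", 1000.0)]:
    print(f"{tier:>11}: ~{decode_ceiling(MODEL_GB, bandwidth):.1f} tokens/s ceiling")

# Tripling the usable bandwidth triples the ceiling, the effect Olds
# attributes to staging hot data in HBM in front of the large DDR5 pool.
```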

Ritika Tyagi

Ritika Tyagi is a final-year journalism and mass communication student and a passionate storyteller. She has written blogs, news articles, and emailers for digital platforms, and has contributed her writing to organisations such as Doordarshan, Exchange4media, Aajtak, and The Patriot.
