
Search Results

196 news results match "Supermicro"; showing items 73-96.
Supermicro Extends AI and GPU Rack Scale Solutions with Support for AMD Instinct MI300 Series Accelerators

New 8-GPU Systems Powered by AMD Instinct™ MI300X Accelerators Are Now Available with Breakthrough AI and HPC Performance for Large-Scale AI Training and LLM Deployments

SAN JOSE, Calif., Dec. 7, 2023 /PRNewswire/ -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing three new additions to its AMD-based H13 generation of GPU servers, optimized to deliver leading-edge performance and efficiency and powered by the new AMD Instinct MI300 Series accelerators. Supermicro's powerful rack scale solutions, built on 8-GPU servers in the AMD Instinct MI300X OAM configuration, are ideal for large-model training.

The new 2U liquid-cooled and 4U air-cooled servers with AMD Instinct MI300A Accelerated Processing Units (APUs) are now available; they improve data center efficiency and power the fast-growing, complex demands of AI, LLM, and HPC workloads. Each new system contains four APUs for scalable applications. Supermicro can deliver complete liquid-cooled racks for large-scale environments with up to 1,728 TFlops of FP64 performance per rack, and its worldwide manufacturing facilities streamline the delivery of these new servers for AI and HPC convergence.

"We are very excited to expand our rack scale Total IT Solutions for AI training with the latest generation of AMD Instinct accelerators, which offer up to 3.4X the performance of previous generations," said Charles Liang, president and CEO of Supermicro. "With our ability to deliver 4,000 liquid-cooled racks per month from our worldwide manufacturing facilities, we can deliver the newest H13 GPU solutions with either the AMD Instinct MI300X accelerator or the AMD Instinct MI300A APU.
Our proven architecture provides dedicated 1:1 400G networking for each GPU, designed for large-scale AI and supercomputing clusters and capable of fully integrated liquid cooling, giving customers a competitive advantage in performance and superior efficiency with ease of deployment."

Learn more about Supermicro Servers with AMD Accelerators

The LLM-optimized AS-8125GS-TNMR2 system is built on Supermicro's building-block architecture, a proven design for high-performance AI systems with air- and liquid-cooled rack scale designs. The balanced system design pairs each GPU with dedicated 1:1 networking to provide a large pool of high-bandwidth memory across nodes and racks, fitting today's largest language models with up to trillions of parameters, maximizing parallel computing, and minimizing training time and inference latency.

The 8U system with the MI300X OAM accelerator offers the raw acceleration power of 8 GPUs connected by AMD Infinity Fabric™ Links, enabling up to 896GB/s of peak theoretical P2P I/O bandwidth on the open-standard platform, with an industry-leading 1.5TB of HBM3 GPU memory in a single system, as well as native sparse-matrix support designed to save power, lower compute cycles, and reduce memory use for AI workloads. Each server features dual-socket AMD EPYC™ 9004 Series processors with up to 256 cores. At rack scale, over 1,000 CPU cores, 24TB of DDR5 memory, 6.144TB of HBM3 memory, and 9,728 Compute Units are available for the most challenging AI environments. Using the OCP Accelerator Module (OAM), with which Supermicro has significant experience in 8U configurations, brings a fully configured server to market faster than a custom design, reducing costs and time to delivery.

Supermicro is also introducing a density-optimized 2U liquid-cooled server, the AS-2145GH-TNMR, and a 4U air-cooled server, the AS-4145GH-TNMR, each with 4 AMD Instinct™ MI300A accelerators.
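The rack-level totals quoted for the 8U MI300X system can be sanity-checked with simple arithmetic. The sketch below assumes four 8U systems per rack and per-device specs (192 GB HBM3 and 304 compute units per MI300X, 128 cores per EPYC 9004 socket) that are not stated in the release:

```python
# Cross-check of the quoted 8U MI300X rack-scale figures.
# Assumptions (not stated in the release): 4 systems per rack,
# 192 GB HBM3 and 304 compute units per MI300X, 128 cores per EPYC 9004 CPU.
GPUS_PER_SYSTEM = 8
SYSTEMS_PER_RACK = 4

hbm3_per_system_gb = GPUS_PER_SYSTEM * 192                # 1536 GB -> "1.5TB HBM3 ... in a single system"
hbm3_per_rack_tb = SYSTEMS_PER_RACK * hbm3_per_system_gb / 1000   # 6.144 TB, as quoted
cpu_cores_per_rack = SYSTEMS_PER_RACK * 2 * 128           # dual-socket, up to 256 cores per system
ddr5_per_rack_tb = SYSTEMS_PER_RACK * 6                   # 6 TB DDR5 per system -> 24 TB per rack
compute_units_per_rack = SYSTEMS_PER_RACK * GPUS_PER_SYSTEM * 304  # 9728, as quoted
```

With these assumed per-device specs, every quoted rack-level number falls out directly (1,024 cores is the "over 1,000 CPU cores" figure).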
The new servers are designed for HPC and AI applications that require extremely fast CPU-to-GPU communication. The APU eliminates redundant memory copies by combining the highest-performing AMD CPU, GPU, and HBM3 memory on a single chip. Each server contains leadership x86 "Zen 4" CPU cores for application scale-up and includes 512GB of HBM3 memory. In a full-rack (48U) solution consisting of 21 2U systems, over 10TB of HBM3 memory is available, along with 19,152 Compute Units; the HBM3-to-CPU memory bandwidth is 5.3 TB/s. Both systems feature dual AIOMs with 400G Ethernet support and expanded networking options designed to improve space, scalability, and efficiency for high-performance computing.

The 2U direct-to-chip liquid-cooled system delivers excellent TCO, with over 35% energy savings: a 21-system rack consumes 61,780 watts versus 95,256 watts for an air-cooled rack, along with a 70% reduction in the number of fans compared to an air-cooled system.

"AMD Instinct MI300 Series accelerators deliver leadership performance, both for longstanding accelerated high performance computing applications and for the rapidly growing demand for generative AI," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "We continue to work closely with Supermicro to bring to market leading-edge AI and HPC total solutions based on MI300 Series accelerators, leveraging Supermicro's expertise in system and data center design."

Learn more from Supermicro and AMD experts. View this webinar live or on demand.

For more information, please visit:
Supermicro AMD Accelerator Site
AS-8125GS-TNMR2 (8U w/ MI300X)
AS-2145GH-TNMR (2U LC w/ MI300A)
AS-4145GH-TNMR (4U AC w/ MI300A)

About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions.
Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free-air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. AMD, the AMD Arrow logo, AMD Instinct, EPYC, and combinations thereof are trademarks of Advanced Micro Devices. All other brands, names, and trademarks are the property of their respective owners.
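The 2U MI300A rack figures quoted in this release (over 10TB of HBM3, 19,152 Compute Units, over 35% energy savings) can likewise be cross-checked. The 128 GB HBM3 and 228 compute units per MI300A below are assumptions taken from AMD's public specs, not stated in the release:

```python
# Cross-check of the quoted 2U MI300A rack-scale figures.
# Assumptions (not stated in the release): 128 GB HBM3 and
# 228 compute units per MI300A APU.
APUS_PER_SYSTEM = 4
SYSTEMS_PER_RACK = 21          # 21 x 2U systems in a 48U rack, as quoted

hbm3_per_system_gb = APUS_PER_SYSTEM * 128                 # 512 GB per server, as quoted
hbm3_per_rack_gb = SYSTEMS_PER_RACK * hbm3_per_system_gb   # 10752 GB -> "over 10TB"
compute_units_per_rack = SYSTEMS_PER_RACK * APUS_PER_SYSTEM * 228  # 19152, as quoted

# Liquid- vs air-cooled rack power, using the watt figures from the release
energy_savings = 1 - 61_780 / 95_256                       # ~0.351 -> "over a 35% ... savings"
```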

Source: PR Newswire
Supermicro Expands AI Solutions with the Upcoming NVIDIA HGX H200 and MGX Grace Hopper Platforms Featuring HBM3e Memory

Supermicro Extends its 8-GPU, 4-GPU, and MGX Product Lines to Support the NVIDIA HGX H200 and Grace Hopper Superchip with Faster and Larger HBM3e Memory for LLM Applications; Supermicro's Innovative New 4U Liquid-Cooled Server with NVIDIA HGX 8-GPU Doubles Compute Density per Rack at up to 80 kW/Rack, Lowering TCO

SAN JOSE, Calif. and DENVER, Nov. 14, 2023 /PRNewswire/ -- Supercomputing Conference (SC23) -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is expanding its reach in AI with the upcoming new NVIDIA HGX H200 product line featuring the H200 Tensor Core GPU. Supermicro's industry-leading AI platforms, including the 8U and 4U Universal GPU systems, are drop-in ready for the HGX H200 8-GPU and HGX H200 4-GPU, which offer nearly 2x the HBM3e memory capacity and 1.4x the bandwidth of the NVIDIA H100 Tensor Core GPU. In addition, Supermicro's broad portfolio of NVIDIA MGX™ systems supports the upcoming NVIDIA Grace Hopper Superchip with HBM3e memory. With unprecedented performance, scalability, and reliability, Supermicro's rack scale AI solutions accelerate the performance of computationally intensive generative AI, large language model (LLM) training, and HPC applications while meeting the evolving demands of growing model sizes. Using its building-block architecture, Supermicro can quickly bring new technology to market, enabling customers to become productive sooner.

Supermicro is also introducing the industry's highest-density server with NVIDIA HGX H100 8-GPU systems in a liquid-cooled 4U chassis, using its latest liquid-cooling solution. The industry's most compact high-performance GPU server enables data center operators to reduce footprint and energy costs while offering the highest-performance AI training capacity in a single rack. With the highest-density GPU systems, organizations can reduce their TCO by leveraging cutting-edge liquid-cooling solutions.

"Supermicro partners with NVIDIA to design the most advanced systems for AI training and HPC applications," said Charles Liang, president and CEO of Supermicro. "Our building-block architecture enables us to be first to market with the latest technology, allowing customers to deploy generative AI faster than ever before. With our worldwide manufacturing facilities, we can deliver these new systems to customers more quickly. The new systems, using the NVIDIA H200 GPU with NVIDIA® NVLink™ and NVSwitch™ high-speed GPU interconnects at 900GB/s, deliver up to 1.1TB of high-bandwidth HBM3e memory per node in our rack scale AI solutions, providing the highest performance of model parallelism for today's LLMs and generative AI. We are also excited to offer the world's most compact NVIDIA HGX 8-GPU liquid-cooled server, which doubles the density of our rack scale AI solutions while reducing energy costs, achieving green computing for today's accelerated data centers."

Learn more about Supermicro servers with NVIDIA GPUs

Supermicro designs and manufactures a broad portfolio of AI servers in different form factors. The popular 8U and 4U Universal GPU systems, featuring four-way and eight-way NVIDIA HGX H100 GPUs, are now drop-in ready for the new H200 GPU to train even larger language models in less time. Each NVIDIA H200 GPU contains 141GB of memory with 4.8 TB/s of bandwidth.

"Supermicro's upcoming server designs featuring the NVIDIA HGX H200 will help accelerate generative AI and HPC workloads, so that enterprises and organizations can get the most out of their AI infrastructure," said Dion Harris, director of data center product solutions for HPC, AI, and Quantum Computing at NVIDIA. "The NVIDIA H200 GPU with high-speed HBM3e memory is able to handle massive amounts of data for a variety of workloads."

Additionally, the recently launched Supermicro MGX servers with the NVIDIA GH200 Grace Hopper Superchip are engineered to incorporate the NVIDIA H200 GPU with HBM3e memory. The new NVIDIA GPUs accelerate today's and tomorrow's large language models (LLMs), fitting hundreds of billions of parameters in more compact and efficient clusters to train generative AI in less time, while also allowing multiple larger models to fit in a single system for real-time LLM inference, serving generative AI to millions of users.

At SC23, Supermicro is showcasing its latest 4U Universal GPU system with the eight-way NVIDIA HGX H100, featuring the company's latest liquid-cooling innovations for even greater density and efficiency to drive AI forward. With Supermicro's industry-leading GPU and CPU cold plates, CDUs (coolant distribution units), and CDMs (coolant distribution manifolds) designed for green computing, the new 4U Universal liquid-cooled GPU system is also ready for the eight-way NVIDIA HGX H200; together with Supermicro's fully integrated rack scale liquid-cooling solutions and L10, L11, and L12 validation testing, it will dramatically reduce data center footprint, power costs, and deployment barriers.

For more information, visit the Supermicro booth at SC23.

About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free-air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.
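The H200 memory claims in this release reduce to simple ratios. The H100 baseline figures below (80 GB of HBM3 at 3.35 TB/s for the SXM part) are assumptions not stated in the release:

```python
# Cross-check of the quoted H200-vs-H100 memory comparison.
# Assumed H100 SXM baseline (not stated in the release): 80 GB HBM3 at 3.35 TB/s.
H200_GB, H200_TBS = 141, 4.8     # per-GPU capacity and bandwidth, as quoted
H100_GB, H100_TBS = 80, 3.35     # assumed baseline

capacity_ratio = H200_GB / H100_GB       # ~1.76x -> "nearly 2x the HBM3e memory capacity"
bandwidth_ratio = H200_TBS / H100_TBS    # ~1.43x -> "1.4x the bandwidth"
hbm3e_per_8gpu_node_gb = 8 * H200_GB     # 1128 GB -> "up to 1.1TB ... per node"
```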

Source: PR Newswire
Supermicro Expands Global Manufacturing Footprint Increasing Worldwide Rack Scale Manufacturing Capacity to 5,000 Fully Tested AI, HPC, and Liquid Cooling Rack Solutions Per Month

Increased Worldwide Rack Scale Manufacturing Capacity in the United States, Asia, the Netherlands, and Malaysia Contributes Toward Reduced Time to Delivery of the Latest AI and HPC Technologies with up to 100 kW/Rack

SAN JOSE, Calif. and DENVER, Nov. 9, 2023 /PRNewswire/ -- Supercomputing Conference (SC23) -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Manufacturer for AI, Cloud, Storage, and 5G/Edge, is expanding its AI and HPC rack delivery capacity and advanced liquid cooling solutions. Worldwide, Supermicro's full rack scale delivery capacity is growing from several state-of-the-art integration facilities in the United States, Asia, the Netherlands, and Malaysia. Future manufacturing expansion and locations are actively being considered to scale to the increasing demand for Supermicro's rack scale AI and HPC solution portfolio.

"With our global footprint, we can now deliver 5,000 racks per month to support substantial orders for fully integrated, liquid-cooled racks requiring up to 100kW per rack," said Charles Liang, president and CEO of Supermicro. "We anticipate that up to 20% of new data centers will adopt liquid cooling solutions as CPUs and GPUs continue to heat up. Our leading rack scale solutions are in great demand with the development of AI technologies, an increasing part of data centers worldwide. Full rack scale and liquid cooling solutions should be considered early in the design and implementation process, which reduces delivery times to meet the urgent implementation requirements of AI and hyperscale data centers."

Supermicro maintains an extensive inventory of "Golden SKUs" to meet fast delivery times for global deployments. Large CSPs and enterprise data centers running the latest generative AI applications will quickly benefit from reduced delivery times worldwide.
Supermicro's broad range of servers, from the data center to the edge (IoT), can be seamlessly integrated, resulting in increased adoption and more engaged customers.

With the recent announcement of the MGX product line, featuring the NVIDIA GH200 Grace™ Hopper™ Superchip and the NVIDIA Grace™ CPU Superchip, Supermicro continues to expand its AI-optimized servers to the industry. Combined with the existing product line incorporating the LLM-optimized NVIDIA HGX 8-GPU solutions and NVIDIA L40S and L4 offerings, together with Intel Data Center MAX GPUs, Intel® Gaudi®2, and the AMD Instinct™ MI series GPUs, Supermicro can address the entire range of AI training and AI inferencing applications. The Supermicro All-Flash storage servers with NVMe E1.S and E3.S storage systems accelerate data access for various AI training applications, resulting in faster execution times. For HPC applications, the Supermicro SuperBlade, with GPUs, reduces the execution time for high-end simulations with reduced power consumption.

Liquid cooling, when integrated into a data center, can reduce the data center PUE by up to 50% compared to existing industry averages. Reducing the power footprint, and the resulting lower PUE, significantly lowers operating expenditures when running generative AI or HPC simulations.

With rack scale integration and deployment services from Supermicro, customers can start with proven reference designs for rapid installation while considering their unique business objectives. Clients can then work collaboratively with Supermicro-qualified experts to design optimized solutions for specific workloads. Upon delivery, the racks only need to be connected to power, networking, and the liquid cooling infrastructure, underscoring a seamless plug-and-play methodology. Supermicro is committed to delivering full data center IT solutions, including on-site delivery, deployment, integration, and benchmarking, to achieve optimal operational efficiency.
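The PUE claim above can be made concrete with a toy calculation. PUE is total facility power divided by IT equipment power; the IT load and PUE values below are illustrative assumptions, not figures from the release:

```python
# Illustrative effect of PUE on facility power for a fixed IT load.
# The PUE values below are assumptions for illustration only.
def facility_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power = IT load x PUE (PUE = total power / IT power)."""
    return it_load_kw * pue

it_load_kw = 1000.0                                # e.g. ten fully loaded 100 kW racks
air_cooled_kw = facility_kw(it_load_kw, 1.6)       # assumed industry-average PUE
liquid_cooled_kw = facility_kw(it_load_kw, 1.15)   # assumed liquid-cooled PUE

# Non-IT overhead (mostly cooling) drops from 600 kW to 150 kW in this example
overhead_reduction = 1 - (liquid_cooled_kw - it_load_kw) / (air_cooled_kw - it_load_kw)
```

Because the IT load is fixed, even a modest PUE improvement translates into a large cut in cooling overhead, which is where the operating-expense savings come from.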
Learn more about Supermicro's Rack Scale Design: https://www.supermicro.com/en/solutions/rack-integration
Learn more about Supermicro's Liquid Cooling Solutions: https://www.supermicro.com/en/solutions/liquid-cooling
Visit Supermicro at SC23 in Denver, Colorado. Learn More Here

Supermicro's Wide Range of Servers Includes:

SuperBlade® – Supermicro's high-performance, density-optimized, and energy-efficient multi-node platform, optimized for AI, Data Analytics, HPC, Cloud, and Enterprise workloads.
GPU Servers with PCIe GPUs – Systems supporting advanced accelerators to deliver dramatic performance gains and cost savings, designed for HPC, AI/ML, rendering, and VDI workloads.
Universal GPU Servers – Open, modular, standards-based servers that provide superior performance and serviceability with GPU options, including the latest PCIe, OAM, and NVIDIA SXM technologies.
Petascale Storage – Industry-leading storage density and performance with EDSFF E1.S and E3.S drives, allowing unprecedented capacity and performance in a single 1U or 2U chassis.
Hyper – Flagship performance rackmount servers built to take on the most demanding workloads, with the storage and I/O flexibility to provide a custom fit for a wide range of application needs.
Hyper-E – Delivers the power and flexibility of the flagship Hyper family, optimized for deployment in edge environments. Edge-friendly features, including a short-depth chassis and front I/O, make Hyper-E suitable for edge data centers and telco cabinets.
BigTwin® – 2U 2-node or 2U 4-node platforms providing superior density, performance, and serviceability, with dual processors per node and a hot-swappable, tool-less design. These systems are ideal for cloud, storage, and media workloads.
GrandTwin™ – Purpose-built for single-processor performance and memory density, featuring front (cold-aisle) hot-swappable nodes and front or rear I/O for easier serviceability.
FatTwin® – Advanced, high-density multi-node 4U twin architecture with 8 or 4 single-processor nodes, optimized for data center compute or storage density.
Edge Servers – High-density processing power in compact form factors optimized for telco cabinet and edge data center installation, with optional DC power configurations and enhanced operating temperatures of up to 55°C (131°F).
CloudDC – All-in-one platform for cloud data centers, with flexible I/O and storage configurations and dual AIOM slots (PCIe 5.0; OCP 3.0 compliant) for maximum data throughput.
WIO – Offers a wide range of I/O options to deliver truly optimized systems for specific enterprise requirements.
Mainstream – Cost-effective dual-processor platforms for everyday enterprise workloads.
Enterprise Storage – Optimized for large-scale object storage workloads, utilizing 3.5" spinning media for high density and exceptional TCO. Front and front/rear loading configurations provide easy access to drives, while tool-less brackets simplify maintenance.
Workstations – Delivering data center performance in portable, under-desk form factors, Supermicro workstations are ideal for AI, 3D design, and media & entertainment workloads in offices, research labs, and field offices.

About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers.
Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free-air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.

Photo - https://mma.prnasia.com/media2/2271873/Supermicro_Rack_Scale_Solutions_1080x1080.jpg?p=medium600
Logo - https://mma.prnasia.com/media2/1443241/Supermicro_Logo.jpg?p=medium600

Source: PR Newswire
Supermicro Begins Shipping Servers Based on the NVIDIA GH200 Grace Hopper Superchip, the Industry's First NVIDIA MGX Product Family

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Manufacturer for AI, Cloud, Storage, and 5G/Edge, has announced one of the industry's broadest new portfolios of GPU systems based on the NVIDIA MGX reference architecture, featuring the latest NVIDIA GH200 Grace Hopper Superchip and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing maximum flexibility and expansion for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack scale AI servers per month from manufacturing facilities worldwide and ensures plug-and-play compatibility.

Charles Liang, president and CEO of Supermicro, said: "Supermicro is a recognized leader in driving today's AI revolution, helping transform data centers to deliver on the promise of AI for many workloads. It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block server strategy enables us to bring the latest systems to market quickly and are among the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises developing new AI applications, while simplifying deployment and reducing environmental impact. The new line of servers incorporates the latest industry technology optimized for AI, including the NVIDIA GH200 Grace Hopper Superchip, BlueField, and PCIe 5.0 EDSFF slots."

Learn more about Supermicro's NVIDIA MGX-based systems: https://www.supermicro.com/mgx
Register for and watch the Supermicro MGX webinar: https://www.brighttalk.com/webcast/17278/598459

Ian Buck, vice president of hyperscale and HPC at NVIDIA, said: "NVIDIA and Supermicro have a long history of working together to create some of the highest-performing AI systems. The NVIDIA MGX modular reference design, combined with Supermicro's server expertise, will create a new generation of AI systems featuring our Grace and Grace Hopper Superchips to benefit customers and industries worldwide."

Supermicro NVIDIA MGX Platform Overview

Supermicro's NVIDIA MGX platforms are designed to deliver a range of servers that address future AI technology requirements. This new product family addresses the unique thermal, power, and hardware challenges of AI servers.

The new Supermicro NVIDIA MGX server line includes:
- ARS-111GL-NHR – 1 NVIDIA GH200 Grace Hopper Superchip, air-cooled
- ARS-111GL-NHR-LCC – 1 NVIDIA GH200 Grace Hopper Superchip, liquid-cooled
- ARS-111GL-DHNR-LCC – 2 NVIDIA GH200 Grace Hopper Superchips, 2 nodes, liquid-cooled
- ARS-121L-DNR – 2 nodes, each with an NVIDIA Grace CPU Superchip, for 288 cores in total
- ARS-221GL-NR – 1 NVIDIA Grace CPU Superchip in a 2U chassis
- SYS-221GE-NR – Dual-socket 4th Gen Intel Xeon Scalable processors with up to 4 NVIDIA H100 Tensor Core GPUs or 4 NVIDIA PCIe GPUs

Each MGX platform can be configured with NVIDIA BlueField®-3 DPUs and/or NVIDIA ConnectX®-7 interconnects for high-performance InfiniBand or Ethernet networking.

Hardware specifications

Supermicro's 1U NVIDIA MGX systems feature up to 2 NVIDIA GH200 Grace Hopper Superchips, comprising 2 NVIDIA H100 GPUs and 2 NVIDIA Grace CPUs. Each system provides 480GB of LPDDR5X memory for the CPU and 96GB of HBM3 or 144GB of HBM3e memory for the GPU. The memory-coherent, high-bandwidth, low-latency NVLink-C2C interconnect connects the CPU, GPU, and memory at 900GB/s, 7x faster than PCIe 5.0. The modular architecture provides multiple PCIe 5.0 x16 FHFL slots to accommodate DPUs for cloud and data management, with room for additional GPU, networking, and storage expansion.

The 1U 2-node design with 2 NVIDIA GH200 Grace Hopper Superchips, combined with Supermicro's proven direct-to-chip liquid-cooling solution, can reduce operating expenditures (OPEX) by more than 40% while increasing compute density, simplifying rack scale deployment for large language model (LLM) clusters and HPC applications.

The 2U Supermicro NVIDIA MGX platform supports both NVIDIA Grace and x86 CPUs and up to 4 full-size data center GPUs, such as the NVIDIA H100 PCIe, H100 NVL, or L40S. It also provides three additional PCIe 5.0 x16 slots for I/O connectivity and eight hot-swappable EDSFF storage bays.

Supermicro offers NVIDIA networking to secure and accelerate AI workloads on its MGX platforms. This includes a combination of NVIDIA BlueField-3 DPUs, which provide 2x 200Gb/s connectivity to accelerate user-to-cloud and data-storage access, and ConnectX-7 adapters, which provide up to 400Gb/s of InfiniBand or Ethernet connectivity between GPU servers.

Developers can quickly put these new systems to work across industries using NVIDIA software. This includes NVIDIA AI Enterprise, enterprise-grade software that powers the NVIDIA AI platform and streamlines the development and deployment of production-ready applications for generative AI, computer vision, speech AI, and more. In addition, the NVIDIA HPC software development kit provides the essential tools needed to advance scientific computing.

Every aspect of the Supermicro NVIDIA MGX systems is designed for efficiency, from intelligent thermal design to component selection. The NVIDIA Grace CPU Superchip has 144 cores and delivers up to 2x the performance per watt of today's industry-standard x86 CPUs. Certain Supermicro NVIDIA MGX systems can fit 2 nodes in a 1U chassis, with up to 2 Grace CPU Superchips and 288 cores in total, delivering breakthrough compute density and energy efficiency in hyperscale and edge data centers.

Learn more about Supermicro NVIDIA MGX Systems: https://www.supermicro.com/mgx
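The interconnect and memory figures above reduce to simple arithmetic. The sketch below assumes the "7x PCIe 5.0" comparison is against a PCIe 5.0 x16 link at roughly 128 GB/s, which the release does not state explicitly:

```python
# Sanity check of the quoted GH200 / NVLink-C2C figures.
# Assumption: "7x faster than PCIe 5.0" compares 900 GB/s NVLink-C2C
# against a PCIe 5.0 x16 link (~128 GB/s, both directions combined).
C2C_GBS = 900
PCIE5_X16_GBS = 128
speedup = C2C_GBS / PCIE5_X16_GBS          # ~7.03x -> "7x faster than PCIe 5.0"

gh200_cpu_mem_gb = 480                     # LPDDR5X per Grace CPU, as quoted
gh200_gpu_mem_options_gb = (96, 144)       # HBM3 or HBM3e per Hopper GPU, as quoted

# ARS-121L-DNR: two Grace CPU Superchips (144 cores each) across two nodes
grace_cores_per_superchip = 144
dual_superchip_cores = 2 * grace_cores_per_superchip   # 288 cores, as quoted
```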

Source: The Hoffman Agency (Hong Kong)