SINGAPORE - Media OutReach Newswire - 12 December 2025 - "Maximize input, maximize output, perfect—this is the architectural breakthrough we've been waiting for. I never expected to witness it here in the East." At the recently concluded China Hi-Tech Fair (CHTF), a seasoned AI architect from New York shared this reflection after examining what has been termed a "new species of storage": the MIMO system at the Ridger booth.

Professor Zhang Sheng of Tsinghua University Shenzhen International Graduate School offered a more pragmatic view: "With this solution, we finally no longer have to rely on the university's data center. Our current annual budget alone is enough to deploy an AI cluster within our lab that better fits our needs—this will significantly boost our research efficiency both technically and operationally. It's truly fantastic news."

Asia Debut Marks Industry Inflection Point

Earlier, at the 8th China International Import Expo (CIIE), the solution's core—the world's first AI-native storage system, MIMO—made its strategic Asian debut. Engineered for the AI era, MIMO delivers breakthrough metrics: 400 GB/s bandwidth, 54 million IOPS, and 40–90 μs latency, all within a form factor comparable to a large suitcase (a back-of-envelope reading of these figures follows this announcement). The platform's defining Fast-Light-Edge proposition, delivered through its breakthrough architecture, cut through the exhibition noise and generated immediate, widespread attention. MIMO earned exclusive features in top-tier media including Hong Kong's Ta Kung Pao and China Securities Journal, while its product demonstration videos gained rapid traction across leading digital channels.

Addressing Foundational Challenges: Technical Dialogues That Matter

During the exhibitions, technical leaders from the United States, Spain, Singapore, Colombia, the UAE (Dubai), India, Pakistan, and Hong Kong SAR engaged in substantive dialogues with Ridger's Asia team, raising questions that revealed systemic industry gaps:

Architectural Transformation & Strategic Positioning

"Can MIMO fundamentally replace legacy storage architectures—traditional NAS, unified, distributed, and parallel file systems—to deliver accelerated parallel training and high-concurrency inference?"

"With such exceptional performance, would deploying MIMO for traditional enterprise applications represent strategic overinvestment or forward-looking infrastructure?"

Mobile Deployment & Borderless Operations

"MIMO's suitcase-sized footprint suggests unprecedented mobility. Can it truly accompany research teams globally like standard equipment? How does it maintain operational continuity across jurisdictions? What's the customs protocol for such 'technical luggage'?"
Seamless Integration & Global Accessibility

"In scenarios with unnetworked AI servers, can MIMO rapidly establish dedicated training environments with true plug-and-play functionality?"

"Does MIMO integrate transparently with existing AI infrastructure and software stacks without requiring modifications?"

"Beyond Asia-Pacific, what's the procurement pathway for MIMO? Which currencies and payment methods are accommodated?"

Architectural Breakthrough: Redefining What's Possible

Addressing these operational realities, Ridger demonstrated MIMO's system-level value, transcending its role as a storage device to become an architectural cornerstone. MIMO serves as both a high-performance data hub for large-scale GPU clusters and a flexible edge deployment platform, extending seamlessly to desktop environments where it orchestrates workflows with various DGX Spark units based on NVIDIA's GB10 Grace Blackwell superchip. Notably, eight global OEM partners—Dell, HPE, Lenovo, xFusion, H3C, MSI, GIGABYTE, and Acer—have concurrently launched Spark versions based on NVIDIA's GB10 Grace Blackwell superchip, creating a robust compatibility foundation for MIMO's ecosystem integration.

This architecture enables independent AI clusters supporting up to 16 computing nodes within constrained environments, managing the complete workflow from large-scale pre-training and fine-tuning to production inference—effectively democratizing enterprise-grade AI capabilities for labs, edge sites, and distributed teams. As Zhu Ting, an industry observer from Beijing, noted: "This represents the 'IBM PC moment' for AI infrastructure—transforming specialized capability into accessible utility."

Market Validation Through Early Adoption

Market response has been decisive. Following the exhibitions, pioneering organizations across pathological-image foundation model development, legal-tech innovation, industrial visual inspection, and naked-eye 3D content production have joined Ridger's Early Access program, validating the architecture's transformative potential in real-world operational contexts.

Global Rollout: Accelerating Accessibility

Responding to accelerating global demand, Ridger confirmed the imminent launch of the complete MIMO portfolio and optimized solution bundles for specific DGX Spark configurations through the Ridger Official Global Store. Designed as a frictionless procurement channel, the platform will support diverse payment options, including multiple fiat currencies and cryptocurrencies, streamlining access to advanced AI infrastructure.

Organizations seeking a deeper understanding of MIMO and its integrated lightweight AI solution with DGX Spark are invited to connect with the Ridger team or its strategic partner, NVIDIA Elite Solution Partner SinoInfo.

Hashtag: #Technology #ESG #AI #GPU #Enterprise #Finance #Storage #Flash #Compute #DGX-Spark #NVIDIA #AI-Lab #GDS #NAS #AI-Native

https://ridger.tech/
https://www.linkedin.com/company/ridger/
https://www.youtube.com/@ridgertech

The issuer is solely responsible for the content of this announcement.

About Ridger

Ridger is a global technology pioneer building next-generation computing & storage infrastructure for the AI era. Born in the East and operating worldwide, Ridger challenges conventional paths to create new technological paradigms.
The team unites seasoned experts from global storage leaders with visionary AI architects, all driven by a shared mission to democratize cutting-edge technology. It rejects incremental improvements and hollow prestige, focusing exclusively on foundational breakthroughs that deliver tangible value and sustainable impact. From architecture to implementation, and from service to empowerment, Ridger provides end-to-end solutions that help clients worldwide ascend to their highest summits.
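As a back-of-envelope reading of the MIMO headline figures quoted in the announcement above, the minimal sketch below relates the release's peak bandwidth and peak IOPS numbers. It assumes the two peaks were measured independently, as is typical for storage benchmarks (large sequential transfers vs. small random I/O); the 4 KiB block size and the derived quantities are illustrative, not vendor-published results.

```python
# Back-of-envelope check on the MIMO headline figures from the release.
# Assumption: peak bandwidth and peak IOPS are independent benchmark
# results (large sequential vs. small random I/O), as is typical.

PEAK_BANDWIDTH_BYTES = 400e9   # 400 GB/s, from the release
PEAK_IOPS = 54e6               # 54 million IOPS, from the release

# Transfer size that would be needed for both peaks to hold at once.
implied_io_size = PEAK_BANDWIDTH_BYTES / PEAK_IOPS
print(f"Implied I/O size at both peaks: {implied_io_size / 1024:.1f} KiB")  # ~7.2 KiB

# Bandwidth delivered at peak IOPS with a common 4 KiB random-I/O block size.
bw_at_4k = PEAK_IOPS * 4096
print(f"Bandwidth at 54M IOPS x 4 KiB: {bw_at_4k / 1e9:.0f} GB/s")  # ~221 GB/s
```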
Hewlett Packard Enterprise (NYSE: HPE) announced the expansion of its NVIDIA AI Computing by HPE portfolio with new secure, scalable AI factory solutions and new AI data center interconnect technology that optimizes AI cluster workload performance across long distances and multi-cloud environments. HPE and NVIDIA have also jointly established the EU's first AI factory lab, helping customers worldwide validate and test their sovereign AI factory architectures.

"HPE and NVIDIA continue to lay the foundation for secure AI factories of every scale, bringing unprecedented performance to diverse workloads through innovative technology," said Antonio Neri, president and CEO of HPE. "By combining our respective technology strengths, we are building complete, full-stack AI infrastructure that helps enterprises raise performance across a wide range of workloads."

"Every nation and enterprise needs sovereignty over its capacity to produce intelligence," said Jensen Huang, founder and CEO of NVIDIA. "We are transforming data centers into AI factories: intelligent manufacturing hubs built for a new industrial revolution. By integrating NVIDIA's full-stack accelerated computing platform and Spectrum-X Ethernet into HPE solutions, we are jointly creating a model for sovereign AI. The new AI factory lab will give customers a secure environment that runs at scale, turning data into AI applications that create real value."

HPE helps customers strengthen AI infrastructure and data sovereignty
HPE and NVIDIA have established a new AI Factory Lab in Grenoble, France, where customers can test and tune AI workloads in an air-cooled, sovereign AI factory environment. The lab is equipped with the latest government-ready NVIDIA AI Enterprise software, HPE servers, HPE Juniper Networking PTX and MX series routers, NVIDIA accelerated computing platforms, NVIDIA Spectrum-X Ethernet, and HPE Alletra storage. The new lab environment lets customers operate infrastructure and validate performance within the EU, supports large-scale AI deployments in the region, and helps global enterprises operating in the EU address data sovereignty and regulatory compliance requirements. In addition, HPE has partnered with Carbon3.ai to establish a Private AI Lab in Manchester, UK, combining HPE Private Cloud AI, the NVIDIA AI Enterprise software suite, and NVIDIA AI infrastructure to help UK enterprises accelerate AI adoption.

HPE expands Private Cloud AI capabilities to strengthen data and operational sovereignty
As European markets place growing emphasis on data sovereignty and operational sovereignty, and seek easier AI deployment on secure private cloud infrastructure, HPE Private Cloud AI adds new configurations, use cases, and capabilities, including:
• Support for NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs as well as NVIDIA Hopper, letting customers choose GPUs as needed to flexibly support diverse workloads
• Integration of STIG-hardened, FIPS-enabled NVIDIA AI Enterprise software, deployable in air-gapped environments, further strengthening security and meeting multiple global and industry compliance standards
• Support in HPE Private Cloud for GPU partitioning with NVIDIA Multi-Instance GPU (MIG) technology, optimizing resource utilization and reducing operating costs
• A new Datacenter Ops Agent, launched jointly by World Wide Technology (WWT), NVIDIA, and HPE, that simplifies day-to-day AI data center operations and strengthens HPE's unified operations across agentic AI and hybrid cloud environments

HPE's sovereign AI factory solutions use a new system design that incorporates national regulatory and industry compliance requirements, helping enterprises meet local regulations across markets more effectively and lowering the compliance barrier to adopting AI infrastructure. With a flexibly adjustable architecture, HPE sets a new security standard, maintaining and strengthening overall security while complying with local regulations. The newly introduced engineering-validated reference architectures deliver stronger software security measures and system capabilities designed to support compliance audits. In addition, HPE's cybersecurity services team offers one-stop support spanning consulting, advisory, and managed services, addressing the security and regulatory needs of highly regulated industries, and can be combined with NVIDIA technology to achieve sovereignty-compliant solutions.

HPE and NVIDIA accelerate AI factory data and data center performance
HPE's AI factory networking solutions include the NVIDIA Spectrum-X Ethernet platform and NVIDIA BlueField-3 data processing units (DPUs), helping enterprises accelerate networking across production scenarios such as data center to data center and data center to cloud. HPE is also extending its AI factory solutions to HPE Juniper Networking, using the HPE Juniper Networking MX and PTX high-performance routing platforms to connect users, devices, and agents to AI factories with high performance, security, and low latency, and to support cluster interconnect across long-distance or multi-cloud deployments.

As AI matures, enterprises need smarter, more integrated, and more secure ways to deploy AI while optimizing how distributed data is stored and managed to extract greater value. To meet this need, HPE announced the HPE Alletra Storage MP X10000 Data Intelligence Node. Built on HPE Data Fabric's ability to unify data access across edge, core, and cloud, this next-generation architecture transforms the X10000 into an active data layer that brings NVIDIA accelerated computing to where data resides, enriching and augmenting data in real time within AI pipelines for immediate, intelligent analytics.

By adopting the NVIDIA AI Data Platform reference design and running NVIDIA AI Enterprise software directly in the data path, the X10000 acts as a dynamic operating engine that analyzes data as it is ingested and automatically infers the data patterns AI factories need. This powerful solution processes, classifies, and optimizes data in real time, supports S3-compatible storage over RDMA, and uses NVIDIA acceleration to boost performance.

Another HPE innovation, NVIDIA GB200 NVL4 by HPE, offers a more compact, energy-efficient alternative to larger AI and HPC platforms, helping enterprises quickly and securely deploy high-performance AI inference for large language models (LLMs) and other generative AI applications. NVIDIA GB200 NVL4 by HPE pairs two NVIDIA Grace CPUs with four NVIDIA Blackwell GPUs, delivering densities of up to 136 GPUs per rack for a high-performance, energy-efficient computing solution (the implied rack math is sketched after the availability list below).

CrowdStrike, NVIDIA, and HPE build unified AI security
HPE has selected industry leader CrowdStrike as the security platform for HPE Private Cloud AI, unifying endpoint, identity, cloud, and data protection across hybrid and multi-cloud environments. The collaboration extends HPE's work with Unleash AI partner CrowdStrike on end-to-end AI security innovation, including protecting LLMs accelerated by NVIDIA technology, and builds on CrowdStrike's existing partnership with NVIDIA, using always-on AI security agents to help enterprises run AI securely at scale.

HPE and Fortanix bring in NVIDIA Confidential Computing
HPE is working with Fortanix to combine NVIDIA Confidential Computing with Fortanix Armet AI in a one-stop platform that lets enterprises securely run sovereign, agentic AI in AI factories and highly regulated environments. As a new Unleash AI partner, Fortanix can be deployed on HPE Private Cloud AI and on HPE ProLiant Compute DL380a Gen12 servers with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, helping enterprises in EMEA and worldwide build and run secure, scalable AI workloads on premises, in the cloud, or in AI factories.

Availability:
• The AI Factory Lab in Grenoble will open in the second quarter of 2026
• NVIDIA GB200 NVL4 is now available to order
• Sovereign AI factory solutions are now available to order
• The HPE Alletra Storage MP X10000 Data Intelligence Node will be available to order in January 2026
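As a quick sanity check on the GB200 NVL4 density figure above, this minimal sketch derives the per-rack node and CPU counts implied by the two-CPU/four-GPU node configuration stated in the announcement. Actual rack configurations depend on power, cooling, and networking budgets the release does not specify.

```python
# Implied per-rack counts for NVIDIA GB200 NVL4 by HPE, using only figures
# stated in the announcement; real deployments depend on power and cooling
# budgets not covered here.

GPUS_PER_NVL4_NODE = 4     # four NVIDIA Blackwell GPUs per node
CPUS_PER_NVL4_NODE = 2     # two NVIDIA Grace CPUs per node
MAX_GPUS_PER_RACK = 136    # "up to 136 GPUs per rack", per the announcement

nodes_per_rack = MAX_GPUS_PER_RACK // GPUS_PER_NVL4_NODE
print(nodes_per_rack)                       # 34 NVL4 nodes per rack
print(nodes_per_rack * CPUS_PER_NVL4_NODE)  # 68 Grace CPUs per rack
```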
Introducing 4U and 2-OU (OCP) liquid-cooled NVIDIA HGX B300 systems for high-density hyperscale and AI factory deployments, supported by Supermicro Data Center Building Block Solutions® with DLC-2 and DLC technology, respectively

4U liquid-cooled NVIDIA HGX B300 systems designed for standard 19-inch EIA racks with up to 64 GPUs per rack, capturing up to 98% of system heat through DLC-2 (Direct Liquid-Cooling) technology

Compact and power-efficient 2-OU (OCP) NVIDIA HGX B300 8-GPU system designed for the 21-inch OCP Open Rack V3 (ORV3) specification with up to 144 GPUs in a single rack

SAN JOSE, Calif., Dec. 10, 2025 /PRNewswire/ -- Super Micro Computer, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, today announced the expansion of its NVIDIA Blackwell architecture portfolio with the introduction and shipment availability of new 4U and 2-OU (OCP) liquid-cooled NVIDIA HGX B300 systems. These latest additions are a key part of Supermicro's Data Center Building Block Solutions (DCBBS) and deliver unprecedented GPU density and power efficiency for hyperscale data centers and AI factory deployments.

B300 liquid-cooled systems

"With AI infrastructure demand accelerating globally, our new liquid-cooled NVIDIA HGX B300 systems deliver the performance density and energy efficiency that hyperscalers and AI factories need today," said Charles Liang, president and CEO of Supermicro. "We're now offering the industry's most compact NVIDIA HGX B300 solutions—achieving up to 144 GPUs in a single rack—while reducing power consumption and cooling costs through our proven direct liquid-cooling technology. Through our DCBBS, this is how Supermicro enables our customers to deploy AI at scale: faster time-to-market, maximum performance per watt, and end-to-end integration from design to deployment."

For more information, please visit https://www.supermicro.com/en/accelerators/nvidia

The 2-OU (OCP) liquid-cooled NVIDIA HGX B300 system, built to the 21-inch OCP Open Rack V3 (ORV3) specification, enables up to 144 GPUs per rack, delivering maximum GPU density for hyperscale and cloud providers that require space-efficient racks without compromising serviceability. The rack-scale design features blind-mate manifold connections, a modular GPU/CPU tray architecture, and state-of-the-art component liquid-cooling solutions. The system propels AI workloads with eight NVIDIA Blackwell Ultra GPUs at up to 1,100W TDP each, while dramatically reducing rack footprint and power consumption. A single ORV3 rack supports up to 18 nodes with 144 GPUs total, scaling seamlessly with NVIDIA Quantum-X800 InfiniBand switches and Supermicro's 1.8MW in-row coolant distribution units (CDUs). Combined, eight NVIDIA HGX B300 compute racks, three NVIDIA Quantum-X800 InfiniBand networking racks, and two Supermicro in-row CDUs form a SuperCluster scalable unit with 1,152 GPUs (the rack and cluster arithmetic is sketched below).

Complementing the 2-OU (OCP) model, the 4U Front I/O HGX B300 Liquid-Cooled System offers the same compute performance in a traditional 19-inch EIA rack form factor for large-scale AI factory deployments. The 4U system leverages Supermicro's DLC-2 technology to capture up to 98% of the heat generated by the system[1] through liquid cooling, achieving superior power efficiency with lower noise and greater serviceability for dense training and inference clusters.
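The rack- and cluster-level numbers above compose as follows; a minimal sketch using only figures stated in the release (node counts, GPU TDP, CDU capacity). The GPU-only power totals are illustrative lower bounds, since CPUs, NICs, memory, and cooling overhead add further load.

```python
# Rack- and cluster-level arithmetic from figures stated in the release.
# GPU-only power is an illustrative lower bound: CPUs, NICs, memory, and
# pumps/fans add further load on top of these totals.

GPUS_PER_NODE = 8          # NVIDIA HGX B300: 8 Blackwell Ultra GPUs per system
GPU_TDP_KW = 1.1           # up to 1,100 W per GPU
ORV3_NODES_PER_RACK = 18   # 2-OU (OCP) system in a 21-inch ORV3 rack
EIA_GPUS_PER_RACK = 64     # 4U system in a standard 19-inch EIA rack

orv3_gpus_per_rack = ORV3_NODES_PER_RACK * GPUS_PER_NODE
print(orv3_gpus_per_rack)                  # 144 GPUs per ORV3 rack
print(EIA_GPUS_PER_RACK // GPUS_PER_NODE)  # 8 x 4U nodes per EIA rack

# SuperCluster scalable unit: 8 compute racks (+3 networking racks, +2 CDUs)
cluster_gpus = 8 * orv3_gpus_per_rack
print(cluster_gpus)                        # 1,152 GPUs

# GPU-only heat load vs. the two 1.8 MW in-row CDUs in the scalable unit
rack_gpu_kw = orv3_gpus_per_rack * GPU_TDP_KW
print(rack_gpu_kw)                         # 158.4 kW per rack (GPUs only)
print(8 * rack_gpu_kw / 1000)              # ~1.27 MW across the 8 compute racks
```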
Supermicro NVIDIA HGX B300 systems unlock substantial performance speedups, with 2.1TB of HBM3e GPU memory per system to handle larger model sizes at the system level. Above all, both the 2-OU (OCP) and 4U platforms deliver significant performance gains at the cluster level by doubling compute fabric network throughput, up to 800Gb/s, via integrated NVIDIA ConnectX®-8 SuperNICs when used with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-4 Ethernet. These improvements accelerate heavy AI workloads such as agentic AI applications, foundation model training, and multimodal large-scale inference in AI factories.

Supermicro developed these platforms to address key customer requirements for TCO, serviceability, and efficiency. With the DLC-2 technology stack, data centers can achieve up to 40 percent power savings[1], reduce water consumption through 45°C warm-water operation, and eliminate chilled water and compressors in data centers. Supermicro DCBBS delivers the new systems as fully validated, tested racks ready as L11 and L12 solutions before shipment, accelerating time-to-online for hyperscale, enterprise, and federal customers.

These new systems expand Supermicro's broad portfolio of NVIDIA Blackwell platforms, including the NVIDIA GB300 NVL72, NVIDIA HGX B200, and NVIDIA RTX PRO 6000 Blackwell Server Edition. Each of these NVIDIA-Certified Systems from Supermicro is tested to validate optimal performance for a wide range of AI applications and use cases, together with NVIDIA networking and NVIDIA AI software, including NVIDIA AI Enterprise and NVIDIA Run:ai. This provides customers with the flexibility to build AI infrastructure that scales from a single node to full-stack AI factories.

[1] https://www.supermicro.com/en/solutions/liquid-cooling

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, driving next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.
TOKYO, Dec. 3, 2025 /PRNewswire/ -- Orbbec, a leading provider of robotics and AI vision, and Advantech, a global leader in edge computing, today debuted their collaborative platform powered by NVIDIA Jetson Thor at the International Robot Exhibition (iREX) in Tokyo, marking their first joint exhibition at one of the world's largest robotics trade shows. The partnership signals a strategic alignment to deliver an integrated AI vision and compute ecosystem for Japan's robotics and automation industry.

At iREX 2025, Orbbec (AI vision) and Advantech (edge computing) debut their collaborative, integrated Physical AI platform, accelerated by NVIDIA Jetson Thor, to deliver next-generation robotics solutions.

At booth W3-49, visitors can witness a humanoid robot demonstration accelerated by Advantech's MIC-742-AT and NVIDIA Jetson Thor, integrated with Orbbec's Gemini 330 series 3D camera. The embedded NVIDIA Jetson T5000 module, with 2,070 TFLOPS of AI compute, enables robots to run large transformer models and vision-language-action models in real time, bridging advanced AI reasoning with Orbbec's depth perception technology for real-world physical intelligence. Under iREX 2025's theme, "Sustainable Societies Through Robotics", the collaboration delivers synchronized multi-camera configurations for complex applications spanning warehouse logistics, collaborative manufacturing, and human-robot interaction.

"Our partnership with Advantech delivers production-ready solutions with consistent quality and dependable availability," said Felix Zheng, Managing Director, APAC Region at Orbbec. "Orbbec's proven OEM/ODM capabilities and global supply network ensure partners can reliably scale from prototype to mass production."

"Advantech is committed to empowering the era of Physical AI through robust edge computing and NVIDIA accelerated computing," said Magic Pao, Associate Vice President at Advantech. "Together with vision partners like Orbbec, we enable next-generation robotics and intelligent systems that seamlessly connect perception, reasoning, and action."

Advantech presents the MIC-735, accelerated by industrial-grade NVIDIA IGX Thor, delivering deterministic AI computing for real-time robotic control and functional-safety applications. Alongside the AMR DevKit and SKY-602E3 GPU Server, Advantech showcases an end-to-end architecture that seamlessly bridges real-world perception with virtual simulation and AI reasoning. Advantech also presents the AFE-A702, a robotic control system that supports real-time AI reasoning and inference with GPU-accelerated SLAM, enabling advanced AMRs and next-generation robotics.

Complementing this, Orbbec demonstrates its latest 3D vision technologies, including a live comparison of the Gemini 435Le stereo camera against a leading international model and the new Pulsar ME450 multi-pattern LiDAR, which seamlessly switches between scan modes, from fast obstacle avoidance to high-density 3D mapping, illustrating how advanced perception and edge computing converge to empower next-generation robotics.

Join us at iREX 2025 (Booth W3-49) to see how Orbbec and Advantech bring Physical AI to next-generation robots.