
Search Results

17 news articles match "HPC"; showing results 1 - 17.
Innodisk Unveils the New "AI on Dragonwing" Computing Series with the First EXMP-Q911 COM-HPC Mini Module Powered by Qualcomm's SoC

The module integrates a high-efficiency AI SoC with in-house software and peripherals to serve as a powerful Edge AI platform.

TAIPEI, Jan. 6, 2026 /PRNewswire/ -- Innodisk, a leading global AI solution provider, announced the launch of its new AI on Dragonwing computing series, developed in collaboration with Qualcomm Technologies, Inc. The flagship EXMP-Q911 COM-HPC Mini module delivers up to 100 TOPS of AI performance while maintaining low power consumption and wide-temperature reliability from -40°C to 85°C. The Qualcomm Dragonwing™ SoCs also offer longevity support through 2038, ensuring supply stability for long-term industrial deployments. As the first product line within Innodisk's AI on ARM portfolio, the series opens a new chapter for customers seeking sustainable and scalable ARM-based Edge AI solutions.

The AI on ARM portfolio marks a key milestone in the collaboration between Innodisk and Qualcomm. The efficient architecture of the Qualcomm Dragonwing™ SoCs delivers exceptional AI inference TOPS within the strict thermal and power constraints of edge environments. Combined with Innodisk's extensive in-house expertise in driver porting and peripheral integration, the AI on ARM line-up elevates the SoC's capabilities, making it a robust foundation for industrial edge AI deployments. This new line-up also reflects a deep co-development effort from both companies, uniting joint hardware-software engineering and early-stage system design to drive next-generation edge intelligence.

This collaboration centers on the EXMP-Q911 COM-HPC Mini module, powered by the Qualcomm Dragonwing™ IQ-9075 processor featuring an 8-core Kryo Gen 6 CPU and an Adreno 663 GPU, supporting 100 TOPS of AI performance. Based on defined test scenarios[1], the EXMP-Q911 can achieve up to 10× higher AI inference FPS relative to similar modules.
The EXMP-Q911 integrates 36GB of LPDDR5X memory and 128GB of UFS 3.1 storage, along with a rich set of interfaces, including PCIe Gen4, USB 3.2, dual 2.5GbE LAN, dual DP1.2, MIPI CSI-2, CAN FD, and more, delivering strong connectivity for compact, performance-demanding edge systems.

Beyond hardware, Innodisk strengthens its offering with software tools, including IQ Studio, an open-source developer portal on GitHub that provides BSPs, reference code, benchmark tools, and a dedicated community space for developers. These resources help accelerate prototyping, testing, and system integration. In addition, Innodisk's cloud-based management platform, iCAP, further enhances remote device and AI model management across distributed edge environments.

Built on the compact COM-HPC Mini form factor under the latest PICMG specification, the EXMP-Q911 offers next-generation expandability beyond COM Express Mini and streamlines OEM integration with faster development cycles. As a deployment-ready module, it can be directly embedded into customers' self-designed carrier boards for seamless system integration and reduced development effort.

For a fully optimized integration experience, the module delivers even greater capability when paired with Innodisk's carrier boards and pre-validated peripherals, including MIPI and GMSL embedded cameras with fully ported drivers for VLM and computer-vision AI, along with modular M.2 expansion cards for networking, storage, and industrial I/O. This combination provides a ready-made solution for streamlined deployment.

"With Innodisk's new AI on Dragonwing series, we're making advanced edge intelligence more accessible and scalable for industrial customers," stated Anand Venkatesan, Senior Director, Product Management and Head of Industrial Processors, Qualcomm Technologies, Inc.
"By pairing Qualcomm Dragonwing™ SoCs with Innodisk's in‑house software and peripherals, OEMs can accelerate development and deploy with the performance, efficiency, and reliability they need—today and over the long term." Looking ahead, Innodisk and Qualcomm Technologies will strengthen their collaboration across reference designs, demo kits, and go-to-market initiatives. The product portfolio will also incorporate Qualcomm's Dragonwing™ IQX and IQ8 SoCs and future ARM platforms for industrial automation, defect detection, AGV/AMR, smart city applications, and a wide range of vertical markets. [1] Testing conducted using the YOLOv10n model across 10 concurrent video streams both at 30W.  

Source: PR Newswire
AP Memory Broadens S-SiCap™ Technology Deployment to Support Evolving AI and HPC Needs

HSINCHU, Dec. 18, 2025 /PRNewswire/ -- AP Memory, a leading global design company providing customized memory solutions, today announced further advancements in its S-SiCap™ (Stack Silicon Capacitor) product line to address the increasing integration demands of AI servers and high-performance computing (HPC) systems. The S-SiCap™ portfolio includes two product categories, discrete silicon capacitors and interposers with silicon capacitors, designed to support different system architectures and diverse application requirements.

The discrete silicon capacitor, S-SiCap™ Gen4, achieves a capacitance density of 3.8 μF/mm², an increase of more than 50% over the previous generation. To meet the growing demand for higher performance and power efficiency in AI servers and HPC systems, S-SiCap™ Gen4 is the first to adopt embedded substrate packaging and is currently in the sampling and process-validation stage. Mass production will be introduced progressively starting in 2026.

Meanwhile, the S-SiCap™ Interposer uses a silicon wafer as its substrate, embedding high-density silicon capacitors within the interposer. This significantly enhances signal integrity and power stability for high-speed I/O applications such as die-to-die, SerDes, and HBM. In collaboration with supply-chain partners, AP Memory has introduced reticle-stitching technology to enlarge the interposer die area, accommodating more IC chiplets to meet the growing demand for highly integrated advanced packaging solutions. The S-SiCap™ Interposer has completed customer packaging and reliability validation, entering four-reticle mass production at the end of Q3'25. Additional development projects are currently underway.

Ivan Hong, President of AP Memory, stated, "As AI and HPC applications continue to evolve rapidly, the industry is facing increasingly stringent requirements for power integrity and high-speed signal transmission. Through the S-SiCap™ product line, AP Memory delivers silicon capacitor technology in both discrete components and interposer-integrated forms, providing high performance, high integration, and design flexibility to meet the demands of next-generation AI and HPC systems."

Looking ahead, AP Memory is actively developing silicon capacitor solutions for organic interposer architectures, further expanding its product portfolio.

About AP Memory
AP Memory (TWSE: 6531) is a global fabless semiconductor design company specializing in customized memory design and IP solutions. Products include IoT memory (IoTRAM™), AI memory solutions (VHM™), and silicon capacitors (S-SiCap™). With strong R&D capabilities, AP Memory is committed to providing high-performance, low-power, and innovative customized products and solutions for applications such as mobile communications, wearables, IoT, high-end mobile applications, high-performance computing, and edge computing. For more information, please visit www.apmemory.com.
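The stated Gen4 density of 3.8 μF/mm² and the "more than 50%" uplift together bound the previous generation's density, and total capacitance scales linearly with die area. A minimal sketch of that arithmetic, where the 25 mm² die area is a hypothetical example and not an AP Memory specification:

```python
# Arithmetic implied by the stated S-SiCap Gen4 figures.
gen4_density_uf_mm2 = 3.8   # stated capacitance density, uF/mm^2
min_uplift = 1.5            # "more than 50%" over the previous generation

# Upper bound for the previous generation's density implied by the claim.
prior_gen_max = gen4_density_uf_mm2 / min_uplift
print(f"previous generation: below {prior_gen_max:.2f} uF/mm^2")

# Hypothetical illustration: total capacitance scales with die area.
# The 25 mm^2 area here is an assumption for illustration only.
die_area_mm2 = 25
total_uf = gen4_density_uf_mm2 * die_area_mm2
print(f"~{total_uf:.0f} uF for a {die_area_mm2} mm^2 capacitor die")
```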

Source: PR Newswire
Supermicro Announces Intelligent In-Store Retail Solutions in Collaboration with a Broad Range of Industry Partners

Innovative technologies enable retailers to implement intelligent stores at scale to deliver smarter, more responsive shopping experiences. Industry partners will display production-ready AI solutions for loss prevention, digital twins, AI agents, customer analytics, and more.

SAN JOSE, Calif. and NEW YORK, Jan. 11, 2026 /PRNewswire/ -- Retail's Big Show -- Super Micro Computer, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today announced collaborations with technology partners on AI-powered intelligent in-store retail solutions designed to meet rising customer expectations with scalability, improved productivity, and increased profitability.

Edge AI Infrastructure Solutions for Intelligent Retail Stores

"AI is reshaping shopping experiences, enabling real-time analysis of video and other data to give retailers actionable insights for optimizing staff efficiency, reducing shrinkage, increasing profits, and avoiding stock-outs," said Charles Liang, president and CEO of Supermicro. "By combining Supermicro's complete and scalable AI platforms with NVIDIA RTX PRO accelerated computing solutions, we're enabling retailers to build intelligent stores that maximize the benefits of AI-driven applications."

According to the latest NVIDIA State of AI in Retail & CPG report, 89% of respondents reported that AI is helping to increase annual revenue, while 95% said it is helping decrease annual costs, highlighting the measurable business impact retailers are already achieving because of AI[1]. For more information, please visit https://www.supermicro.com/en/products/edge/servers

Supermicro Complete and Scalable Edge AI Infrastructure

Retail-centric AI applications require sub-second responsiveness, which is only possible at scale if data is processed directly at the edge.
Supermicro's Edge AI infrastructure enables the deployment of solutions in distributed environments, such as intelligent retail stores and supply chains, to provide a complete solution. Deploying at the edge presents a myriad of unique challenges compared with the data center. Supermicro's broad Edge AI portfolio helps customers and partners overcome these challenges while striking the right balance between performance and ROI.

For deployments in harsh environments where conditioned space is not available, the fanless E103 series brings AI processing power where it couldn't go before. Supermicro also offers fan-cooled small-form-factor E300 series systems with AI capabilities. For demanding AI workloads at the edge, customers and partners can turn to numerous systems capable of holding the latest discrete GPUs for AI acceleration. Ranging from 1U short-depth to larger 4U form factors, solutions can be right-sized to the customer's needs with the latest generation of NVIDIA RTX PRO Blackwell GPUs.

Supermicro Intelligent Store Partners

Supermicro is collaborating with ecosystem partners including Everseen, Kinetic Vision, ALLSIDES, LiveX, WobotAI, and Aible to create intelligent stores that positively impact day-to-day retail operations as well as longer-term supply chain management.

Everseen's Evercheck solution uses Vision AI to help reduce shrink and improve staff productivity and customer experiences. Built on the Everseen Vision AI platform, Evercheck detects and deters unwanted behaviors at checkout, helping retailers recover losses and streamline front-of-store operations. "Everseen has spent years working alongside some of the world's largest retailers, understanding the realities of the store floor and solving loss challenges where they actually happen," said Joe White, CEO of Everseen.
"By partnering with Supermicro and leveraging NVIDIA-accelerated computing, Evercheck delivers real-time computer vision at the edge - transforming store activity into intelligence retailers can act on immediately." Wobot AI, focused on building Video AI Agents for the physical world, will be demonstrating how the cameras retailers already own can be turned into systems that continuously observe, learn, and produce usable insight. By converting ordinary CCTV infrastructure into autonomous agents that recognize patterns, identify friction, and surface decisions, Wobot's AI Agents enable retailers to improve day-to-day operations with practical and measurable outcomes. "By working with Supermicro and NVIDIA at the edge, we're able to deploy Video AI Agents in a way that's scalable, reliable, and focused on real-time operational insight—not experimentation," said Will Kelso, President, Revenue & Growth at Wobot AI. LiveX AI - "Retail is entering an era where AI agents become the default interaction layer between brands and customers," said Jerry Li, Co-Founder and CEO of LiveX AI. "With NVIDIA's accelerated AI and Supermicro's edge infrastructure, we can deploy a helpful, human-like AI agent directly in physical spaces—such as kiosks or holograms—bringing the speed, intelligence, and continuity of digital experiences into brick-and-mortar environments. This collaboration makes AI agents usable, in real time, where customers actually are." Kinetic Vision and ALLSIDES are bundling their expertise for a True Digital Twin solution designed to develop, test, and optimize supply chain processes, checkout stations, and other complex systems. "The combination of Supermicro's high reliability optimized infrastructure and NVIDIA's accelerated computing stack gives Kinetic Vision the foundation to innovate at speed. 
Together, we are helping retailers move from experimentation to production-ready AI solutions that drive measurable operational and customer experience gains," says Jeremy Jarratt, Kinetic Vision CEO. Franz Tschimben, CEO at ALLSIDES, adds: "Building on the combined strengths of NVIDIA and Supermicro, we deliver a high-fidelity 3D digital twin data layer for AI training that enables retailers to power applications across the entire retail value chain — from training robots with physical AI, to production and production planning, virtual store layouts and consumer feedback systems, and e-commerce integrations — helping drive higher conversion rates, faster decision-making, and greater operational efficiency." Supermicro will also feature Superb AI's retail-focused VSS solution, combining Superb AI's proprietary VLM with NVIDIA AI Blueprint components to enable subjective reasoning capabilities, natural-language search, automated incident summarization, and customer behavior insights across store camera networks. Aible will highlight its automated agent solution, which analyzes vast amounts of data across millions of patterns to explain changes in retail KPIs, such as the average purchase amount or number of purchases. Aible will also demonstrate how the latest NVIDIA Retail blueprints can be incorporated into agentic solutions that automatically customize customer experiences and optimize retail operations at scale. Arijit Sengupta, CEO of Aible, adds, "Today's market and labor conditions are constantly changing. Only autonomous agents can understand and adjust to these ever-changing patterns of customer behavior, inventory costs and supply, and labor access at scale. Working with NVIDIA and Supermicro, Aible brings autonomous agents subject to business user review to the retail edge." These intelligent retail solutions will be demonstrated by Supermicro and its partners at NRF: Retail's Big Show in New York City from January 11-13. 
To learn more about AI-powered retail applications or Supermicro's AI Infrastructure solutions, visit Supermicro booth #5281 and attend its speaking session featuring PepsiCo and Kinetic Vision. For more details, please visit www.supermicro.com/NRF

[1] Source: "NVIDIA State of AI in Retail and CPG," 2026

About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free-air, or liquid cooling). Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.

Source: PR Newswire
AMD and Partners Outline an "AI Everywhere, for Everyone" Future at CES 2026

· AMD previewed its "Helios" rack-scale platform, featuring AMD Instinct MI455X GPUs and AMD EPYC "Venice" CPUs, designed for advanced AI workloads and serving as a blueprint for yotta-scale AI infrastructure.
· AMD expanded its AI portfolio with the AMD Instinct MI440X GPU for enterprise deployments and previewed the next-generation Instinct MI500 Series GPUs.
· AMD introduced new AMD Ryzen AI platforms for AI PCs and embedded applications, and unveiled the Ryzen AI Halo developer platform.
· AMD announced a commitment of $150 million to bring AI into more classrooms and communities.

TAIPEI, January 6, 2026 -- In her opening keynote at CES 2026, AMD (NASDAQ: AMD) Chair and CEO Dr. Lisa Su described how AMD's broad AI portfolio and deep cross-industry partnerships are accelerating the translation of AI's potential into real-world impact. The keynote showcased major advances from the data center to the edge, with partners including OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, Generative Bionics, AstraZeneca, Absci, and Illumina detailing how they use AMD technology to drive AI breakthroughs.

Dr. Su said: "At CES, together with our partners, we are showing what is possible when the industry unites behind the vision of AI Everywhere, for Everyone. As AI adoption accelerates, we are entering the era of yotta-scale computing, driven by unprecedented growth in training and inference. AMD is laying the compute foundation for the next phase of AI through industry-leading end-to-end technology, open platforms, and deep co-innovation with partners across the ecosystem."

A Blueprint for Yotta-Scale Computing

Compute infrastructure is the foundation of AI, and accelerating adoption is driving global compute capacity from roughly 100 zettaflops today to a projected more than 10 yottaflops within the next five years. Building yotta-scale AI infrastructure requires more than raw performance: it demands an open, modular rack design that can evolve across product generations, combining leading compute engines with high-speed networking to connect thousands of accelerators into a single unified system.

The AMD "Helios" rack-scale platform is a blueprint for yotta-scale infrastructure, delivering up to 3 AI exaflops per rack and built for trillion-parameter model training with maximum bandwidth and energy efficiency. "Helios" is powered by AMD Instinct™ MI455X accelerators, AMD EPYC™ "Venice" CPUs, and AMD Pensando™ "Vulcano" NICs for scale-out networking, all integrated through the open AMD ROCm™ software ecosystem.

At CES, AMD previewed the "Helios" platform and, for the first time, publicly showed the complete AMD Instinct MI400 Series accelerator portfolio, while also previewing the next-generation MI500 Series GPUs.

The newest member of the MI400 Series is the AMD Instinct MI440X GPU, designed for on-premises enterprise AI deployments. In a compact eight-GPU form factor, the MI440X powers scalable training, fine-tuning, and inference workloads and integrates seamlessly into existing infrastructure. The MI440X builds on the strengths of the recently announced AMD Instinct MI430X GPU, designed to deliver leading performance and mixed-compute capabilities for high-precision science, high-performance computing (HPC), and sovereign AI workloads. MI430X GPUs will power multiple AI-factory supercomputers worldwide, including Discovery at Oak Ridge National Laboratory and Alice Recoque, France's first exascale supercomputer.

AMD also revealed more details of the next-generation AMD Instinct MI500 GPUs, expected in 2027. The MI500 Series is projected to deliver up to 1,000× higher AI performance than the AMD Instinct MI300X GPU launched in 2023[1]. With the next-generation AMD CDNA™ 6 architecture, advanced 2nm process technology, and leading HBM4E memory, the MI500 GPUs are designed to lead across the board.

Delivering the AI PC Experience to Everyone

AI is becoming a foundational part of the PC experience, with billions of users interacting with AI directly, whether on-device or through the cloud. At CES, AMD introduced new products that expand its AI PC portfolio and deepen developer support across the ecosystem.

The next-generation AMD Ryzen™ AI 400 Series and Ryzen AI PRO 400 Series platforms deliver a 60 TOPS NPU[2], leading efficiency, and full AMD ROCm platform support, enabling seamless AI scaling from cloud to client. The first systems ship in January 2026, with broader OEM partner availability in Q1 2026.
AMD also expanded its breakthrough on-device AI computing lineup with the Ryzen AI Max+ 392 and Ryzen AI Max+ 388, which support models of up to 128 billion parameters with 128GB of unified memory. These platforms enable advanced local inference, content-creation workflows, and impressive gaming experiences in premium thin-and-light notebooks and small-form-factor (SFF) desktops.

For developers, the Ryzen AI Halo developer platform brings powerful AI development to a compact SFF desktop, delivering leading tokens-per-second-per-dollar performance with high-performance Ryzen AI Max+ Series processors. Ryzen AI Halo is expected to be available in Q2 2026.

AI Is Transforming the Physical World

AMD introduced Ryzen AI Embedded processors, a new portfolio of embedded x86 processors designed to power AI-driven edge applications. From automotive digital cockpits and smart healthcare to physical AI for autonomous systems such as humanoid robots, the new P100 and X100 Series processors bring high-performance, efficient AI compute to the most constrained embedded systems.

Advancing the Genesis Mission and the Future of AI Innovation

Dr. Su was joined on stage by Michael Kratsios, Director of the White House Office of Science and Technology Policy, to discuss AMD's role in the U.S. government's Genesis Mission, an ambitious public-private technology initiative to secure American leadership in AI and shape scientific discovery and global competitiveness for years to come. The Genesis Mission includes two recently announced AMD-powered AI supercomputers at Oak Ridge National Laboratory: Lux and Discovery.

Kratsios also highlighted the White House effort to rally organizations to commit resources to expanding access to AI education, giving students more hands-on opportunities to learn about and build AI. In support of that commitment, AMD announced a $150 million investment to bring AI into more classrooms and communities.

The keynote closed by recognizing more than 15,000 student innovators who participated in the AMD AI Robotics Hackathon, held in partnership with Hack Club.

Resources
· Watch the replay of the AMD CES 2026 opening keynote
· Visit the AMD CES 2026 press center for more information

About AMD
AMD (NASDAQ: AMD) drives innovation in high-performance and AI computing to solve the world's most important challenges. Today, AMD technology spans cloud and AI infrastructure, embedded systems, AI PCs, and gaming, powering billions of experiences. With a broad portfolio of AI-optimized CPUs, GPUs, networking technologies, and software, AMD delivers high-performance, scalable, end-to-end AI solutions for the new era of intelligent computing. For more information, visit the AMD website.

Cautionary Statement
This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD), such as the features, functionality, performance, availability, timing, and expected benefits of AMD products, including the AMD "Helios" rack-scale platform, the AMD Instinct™ MI400 Series, AMD Instinct™ MI500 Series, AMD Ryzen™ AI 400 Series, AMD Ryzen™ AI PRO 400 Series, AMD Ryzen™ AI Max+ 392, AMD Ryzen™ AI Max+ 388, the AMD Ryzen™ AI Halo developer platform, and the AMD Ryzen™ AI Embedded P100 and X100 Series processors; the expected benefits of ecosystem partner collaborations; expected future AI demand; and AMD's role in, and expected benefits from, the Genesis Mission, which are made pursuant to the Safe Harbor provisions of the U.S. Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "will," "may," "expects," "believes," "plans," "intends," "estimates," or other words and phrases with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions, and expectations, speak only as of the date of this press release, and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Material factors that could cause actual results to differ materially from current expectations include, without limitation: the competitive markets in which AMD's products are sold; the cyclical nature of the semiconductor industry; market conditions of the industries in which AMD products are sold; AMD's ability to introduce products on a timely basis with expected features and performance levels; the loss of a significant customer; economic and market uncertainty; quarterly and seasonal sales patterns; AMD's ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; the ability of third-party manufacturers to manufacture AMD's products on a timely basis in sufficient quantities and using competitive technologies; the availability of essential equipment, materials, substrates, or manufacturing processes; AMD's ability to achieve expected manufacturing yields for its products; AMD's ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential security incidents, including IT outages, data loss, data breaches, and cyberattacks; uncertainties involving the ordering and shipment of AMD's products; AMD's reliance on third-party intellectual property to design and introduce new products; AMD's reliance on third-party companies for the design, manufacture, and supply of motherboards, software, memory, and other computer platform components; AMD's reliance on Microsoft and other software vendors' support to design and develop software to run on AMD's products; AMD's reliance on third-party distributors and add-in-board partners; the impact of modification or interruption of AMD's internal business processes and information systems; the compatibility of AMD's products with some or all industry-standard software and hardware; costs related to defective products; the efficiency of AMD's supply chain; AMD's ability to rely on third-party supply-chain logistics functions; AMD's ability to effectively control sales of its products on the gray market; the impact of climate change on AMD's business; the impact of government actions and regulations such as export regulations, import tariffs, trade protection measures, and licensing requirements; AMD's ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; the impact of environmental laws, conflict-minerals-related provisions, and other laws or regulations; evolving expectations from governments, investors, customers, and other stakeholders regarding corporate responsibility matters; issues related to the responsible use of AI; restrictions imposed by agreements governing AMD's notes, the guarantees of Xilinx's notes, and the revolving credit agreement; the impact of acquisitions, joint ventures, and/or strategic investments on AMD's business and AMD's ability to integrate acquired businesses, including ZT Systems; the impact of any impairment of the combined company's assets; political, legal, and economic risks and natural disasters; future impairments of technology license purchases; AMD's ability to attract and retain key employees; and AMD's stock price volatility. Investors are urged to review in detail the risks and uncertainties in AMD's filings with the U.S. Securities and Exchange Commission, including but not limited to AMD's most recent reports on Forms 10-K and 10-Q.

[1] Based on engineering projections by AMD Performance Labs in December 2025, estimating theoretical peak precision performance of an AI rack using AMD Instinct™ MI500 Series GPUs versus an AMD Instinct MI300X platform. Results may vary once products are released.
[2] TOPS refers to the maximum number of operations an AMD Ryzen processor may achieve under optimal conditions and does not represent typical performance. TOPS may vary based on several factors, including the specific system configuration, AI model, and software version. GD-243
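The keynote's scale figures (roughly 100 zettaflops of global compute today, more than 10 yottaflops projected within five years, and up to 3 AI exaflops per "Helios" rack) imply a growth rate and a rack count that can be checked with simple arithmetic. A rough sketch, ignoring precision, utilization, and workload-mix differences:

```python
# Scale arithmetic from the keynote figures (illustration only).
today_flops = 100e21        # ~100 zettaflops of global AI compute today
target_flops = 10e24        # projected >10 yottaflops within five years
years = 5

growth = target_flops / today_flops          # 100x overall growth
cagr = growth ** (1 / years)                 # implied yearly multiplier
print(f"{growth:.0f}x over {years} years, ~{cagr:.2f}x per year")

# Racks a 10-yottaflop build-out would imply at Helios density,
# ignoring precision and utilization differences entirely.
helios_rack_flops = 3e18                     # up to 3 AI exaflops per rack
racks = target_flops / helios_rack_flops
print(f"~{racks / 1e6:.1f} million Helios-class racks")
```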

Source: 世紀奧美
Supermicro Announces Support for the Upcoming NVIDIA Vera Rubin NVL72 and HGX Rubin NVL8, Expanding Rack Manufacturing Capacity to Deliver Enhanced Liquid-Cooled AI Solutions

Supermicro accelerates deployment timelines for next-generation liquid-cooled AI infrastructure through its Data Center Building Block Solutions® (DCBBS), advanced direct liquid cooling (DLC) technology, and in-house design and manufacturing capacity in the United States.

SAN JOSE, Calif., January 7, 2026 /PRNewswire/ -- Super Micro Computer, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, announced expanded manufacturing capacity, enhanced liquid-cooling technology, and collaboration with NVIDIA to bring data-center-scale solutions optimized for the NVIDIA Vera Rubin and Rubin platforms to market first. By accelerating development and collaboration with NVIDIA, Supermicro is well positioned to rapidly deploy the flagship NVIDIA Vera Rubin NVL72 and NVIDIA HGX™ Rubin NVL8 systems. Supermicro's certified Data Center Building Block Solutions (DCBBS) enable optimized manufacturing processes, highly customized solutions, and faster deployment schedules, helping customers gain a decisive competitive edge in the next-generation AI infrastructure market.

Charles Liang, President and CEO of Supermicro, said: "Our long-standing collaboration with NVIDIA and our highly flexible Building Block solutions allow us to bring the most advanced AI platforms to market faster. With expanded manufacturing capacity and industry-leading liquid-cooling technology, we are helping hyperscalers and enterprises deploy NVIDIA Vera Rubin and Rubin platform infrastructure at scale with unprecedented speed, efficiency, and reliability."

Learn more: https://www.supermicro.com/en/accelerators/nvidia/vera-rubin

Flagship products:

NVIDIA Vera Rubin NVL72 SuperCluster: This flagship rack-scale system combines 72 NVIDIA Rubin GPUs and 36 NVIDIA Vera CPUs with NVIDIA ConnectX®-9 SuperNICs and NVIDIA BlueField®-4 DPUs, interconnected via NVIDIA NVLink 6. The rack also scales out over NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X Ethernet, further advancing AI industry transformation. The NVIDIA Vera Rubin NVL72 SuperCluster delivers 3.6 exaflops of NVFP4 compute, 1.4 PB/s of HBM4 bandwidth, and 75 TB of fast memory. Based on the third-generation NVIDIA MGX rack architecture, it offers outstanding serviceability, robustness, and availability, and Supermicro integrates it with enhanced data-center-scale liquid cooling, including in-row coolant distribution units (CDUs) that enable scalable warm-water cooling to minimize power and water consumption while maximizing compute density and efficiency.

2U liquid-cooled NVIDIA HGX Rubin NVL8 system: This compact eight-GPU system is optimized for AI and HPC workloads, delivering unprecedented performance and efficiency for large-scale enterprise intelligence applications. It provides 400 petaflops of NVFP4 compute, 176 TB/s of HBM4 bandwidth, 28.8 TB/s of NVLink bandwidth, and 1600 Gb/s of NVIDIA ConnectX-9 SuperNIC networking. Supermicro offers complete rack-level designs with maximum deployment flexibility and diverse configurations, including support for next-generation flagship x86 CPUs such as Intel® Xeon® and AMD EPYC™. The system can also be configured with a high-density 2U busbar design and combined with Supermicro's industry-leading direct liquid cooling (DLC) for optimized rack integration.

Key specifications and capabilities of the NVIDIA Vera Rubin platform:

NVIDIA NVLink™ 6: This high-speed interconnect enables unprecedented GPU-to-GPU and CPU-to-GPU communication performance for training and inference of large-scale Mixture-of-Experts models.

NVIDIA Vera CPU: Custom Arm cores designed by NVIDIA, delivering 2× the performance of the previous generation. The CPU features Spatial Multithreading (88 cores / 176 threads), 1.2 TB/s of LPDDR5X memory bandwidth (3× the capacity), and 1.8 TB/s of NVLink-C2C bandwidth to the GPU (2× the previous generation).
Third-generation Transformer Engine: Optimized acceleration for long-context workloads and the narrow-precision compute needed to scale today's AI.

Third-generation Confidential Computing: Rack-scale confidential computing through a standardized GPU-level Trusted Execution Environment (TEE), keeping models, data, and prompts fully protected and isolated.

Second-generation RAS engine: Higher reliability, availability, and serviceability, including live system health monitoring without downtime.

The NVIDIA Vera Rubin platform also gains strong networking advantages from the newly introduced NVIDIA Spectrum-X Ethernet Photonics technology. Built on the Spectrum-6 Ethernet ASIC using TSMC's 3nm process, with 200G SerDes co-packaged optics and a fully shared buffer architecture, it delivers 102.4 Tb/s of switching performance. Compared with traditional pluggable optics, the technology achieves 5× the power efficiency, 10× the reliability, and 5× the application uptime. Available models include the liquid-cooled SN6800 (409.6 Tb/s of CPO and 512 × 800G ports), the SN6810 (102.4 Tb/s of CPO and 128 × 800G ports), and the SN6600 (a pluggable design with 128 × 800G ports, available in air- or liquid-cooled configurations).

Supermicro also offers storage solutions based on Petascale all-flash storage servers and JBOF systems, with support for NVIDIA BlueField-4 DPUs to run a wide range of data-management applications.

Supermicro's strategy of expanding manufacturing capacity and strengthening its end-to-end liquid-cooling stack is aimed at optimizing the manufacturing and deployment of fully liquid-cooled NVIDIA Vera Rubin and Rubin platforms. Combined with the modular Data Center Building Block Solutions architecture, these technologies accelerate deployment and bring-up schedules through rapid configuration, rigorous validation, and seamless scaling of high-density platforms, helping customers secure a first-mover advantage.

About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, delivering next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning Server Building Block Solutions® portfolio allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free-air, or liquid cooling). Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.
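The figures quoted for the two Rubin systems and the Spectrum-X Photonics switches can be cross-checked with simple arithmetic: both systems should imply the same per-GPU NVFP4 rate, and each switch's aggregate throughput should equal its port count times 800 Gb/s. A quick sanity-check sketch:

```python
# Consistency check on the performance figures quoted in the release.
nvl72_petaflops = 3600   # 3.6 exaflops of NVFP4, expressed in petaflops
nvl72_gpus = 72
nvl8_petaflops = 400
nvl8_gpus = 8

per_gpu_nvl72 = nvl72_petaflops / nvl72_gpus
per_gpu_nvl8 = nvl8_petaflops / nvl8_gpus
# Both systems work out to 50 PF of NVFP4 per Rubin GPU, so the two
# sets of figures are internally consistent.
print(per_gpu_nvl72, per_gpu_nvl8)  # → 50.0 50.0

# Switch throughput: port count x 800 Gb/s should match the quoted Tb/s.
for model, ports, quoted_tbps in [("SN6800", 512, 409.6),
                                  ("SN6810", 128, 102.4),
                                  ("SN6600", 128, 102.4)]:
    assert ports * 800 / 1000 == quoted_tbps
    print(model, "checks out")
```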

Source: PR Newswire
Supermicro Announces NVIDIA Vera Rubin and HGX Rubin Systems to Power Next-Generation Liquid-Cooled AI Data Centers


Source: The Hoffman Agency (香港商霍夫曼公關顧問股份有限公司)