STOCKHOLM, Oct. 14, 2025 /PRNewswire/ --

Strategic highlights – operational excellence and enhanced financial flexibility
- Strong commercial momentum with significant customer agreements, including in India, Japan and the UK.
- Operational excellence and cost-efficiency actions driving gross margins to strong, sustainable levels.
- 5G Open RAN-ready portfolio breadth and technology leadership position reaffirmed by Gartner and Omdia.

Financial highlights – further profitability growth
- Organic sales declined by 2%, with growth in three out of four market areas. Reported sales were SEK 56.2 (61.8) b., with an FX impact of SEK -4.2 b.
- Adjusted[1] gross income decreased to SEK 27.0 (28.6) b. as currency headwinds offset strong operational execution. Reported gross income was SEK 26.8 (28.2) b.
- Adjusted[1] gross margin was 48.1% (46.3%), driven by improvements in Networks and in Cloud Software and Services. Reported gross margin was 47.6% (45.6%).
- Adjusted[1] EBITA was SEK 15.8 (7.8) b. with a 28.1% (12.6%) margin, including a SEK 7.6 b. capital gain from the divestment of iconectiv. Reported EBITA was SEK 15.5 (6.2) b. with a 27.6% (10.0%) margin.
- Net income was SEK 11.3 (3.9) b., including the capital gain. EPS diluted was SEK 3.33 (1.14).
- Free cash flow before M&A was SEK 6.6 (12.9) b. Net cash increased to SEK 51.9 b.

Börje Ekholm, President and CEO, said: "In Q3, we established margins at a new long-term level following strong operational execution over the past few years. Cloud Software and Services sales grew 9%*, driven by strong growth in core networks. Our solid progress on technology initiatives continues. Gartner and Omdia reconfirmed that our 5G solutions are industry-leading. Our Open RAN-ready portfolio includes an AI-native, future-proof software architecture that is hardware-agnostic. The portfolio integrates with third-party radios and supports Ericsson silicon and third-party CPUs/GPUs.
Looking ahead, we expect Enterprise organic sales to stabilize in Q4 and the RAN market to remain broadly stable. Solid recurring cash flow and the iconectiv sale contributed to a strong Q3 cash position, offering scope for increased shareholder distributions. The Board's recommendation on the scale and mechanism of the distribution will be included in the Q4 report, for decision at the AGM."

SEK b.                           Q3 2025   Q3 2024   YoY change   Q2 2025   QoQ change   Jan-Sep 2025   Jan-Sep 2024   YoY change
Net sales                         56.239    61.794         -9%     56.132          0%        167.396        174.967          -4%
Organic sales growth* [2]              -         -         -2%          -           -              -              -           0%
Gross income                      26.777    28.185         -5%     26.649          0%         79.963         76.658           4%
Gross margin [2]                   47.6%     45.6%           -      47.5%           -          47.8%          43.8%            -
EBIT (loss)                       15.151     5.774        162%      6.391        137%         27.473           -3.6            -
EBIT margin [2]                    26.9%      9.3%           -      11.4%           -          16.4%          -2.1%            -
EBITA [2]                         15.516     6.203        150%      6.763        129%         28.931         13.522         114%
EBITA margin [2]                   27.6%     10.0%           -      12.0%           -          17.3%           7.7%            -
Net income (loss)                 11.300     3.881        191%      4.626        144%         20.143         -4.505            -
EPS diluted, SEK                    3.33      1.14        192%       1.37        143%           5.94          -1.43            -
Free cash flow before M&A [2]      6.631    12.944        -49%      2.581        157%         11.916         24.210         -51%
Net cash, end of period [2]       51.858    25.534        103%     36.040         44%         51.858         25.534         103%

Adjusted financial measures [1][2]
Adjusted gross income             27.048    28.609         -5%     26.959          0%         80.702         77.670           4%
Adjusted gross margin              48.1%     46.3%           -      48.0%           -          48.2%          44.4%            -
Adjusted EBIT (loss)              15.454     7.327        111%      7.047        119%         28.713         -0.259            -
Adjusted EBIT margin               27.5%     11.9%           -      12.6%           -          17.2%          -0.1%            -
Adjusted EBITA                    15.819     7.756        104%      7.419        113%         30.171         16.908          78%
Adjusted EBITA margin              28.1%     12.6%           -      13.2%           -          18.0%           9.7%            -

* Sales adjusted for the impact of acquisitions and divestments and effects of foreign currency fluctuations.
[1] Adjusted metrics exclude restructuring charges.
[2] Non-IFRS financial measures are reconciled at the end of this report to the most directly reconcilable line items in the financial statements.
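As a quick consistency check, the reported margins follow directly from the table's absolute figures; a minimal sketch using the Q3 2025 column:

```python
# Recompute Q3 2025 reported margins from the absolute figures in the table (SEK b.).
net_sales = 56.239
gross_income = 26.777   # reported gross income
ebita = 15.516          # reported EBITA

gross_margin = 100 * gross_income / net_sales
ebita_margin = 100 * ebita / net_sales

print(f"gross margin: {gross_margin:.1f}%")  # matches the reported 47.6%
print(f"EBITA margin: {ebita_margin:.1f}%")  # matches the reported 27.6%
```

The adjusted figures differ from the reported ones by the excluded restructuring charges, per footnote [1].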
NOTES TO EDITORS

The complete report with tables is available in the attached PDF and at www.ericsson.com/investors.

Video webcast for analysts, investors and journalists
President and CEO Börje Ekholm and CFO Lars Sandström will comment on the report and take questions at a live video webcast at 9:00 AM CEST (8:00 AM BST London, 3:00 AM EDT New York). To join the webcast, go to www.ericsson.com/investors. To ask a question: access dial-in information here. The webcast will be available on demand after the event at www.ericsson.com/investors.

FOR FURTHER INFORMATION, PLEASE CONTACT
Daniel Morris, Head of Investor Relations
Phone: +44 7386657217
E-mail: investor.relations@ericsson.com

Additional contacts
Stella Medlicott, Senior Vice President, Marketing and Corporate Relations
Phone: +46 730 95 65 39
E-mail: media.relations@ericsson.com

Investors
Lena Häggblom, Director, Investor Relations
Phone: +46 72 593 27 78
E-mail: lena.haggblom@ericsson.com

Alan Ganson, Director, Investor Relations
Phone: +46 70 267 27 30
E-mail: alan.ganson@ericsson.com

Media
Ralf Bagner, Head of Media Relations
Phone: +46 76 128 47 89
E-mail: ralf.bagner@ericsson.com

Media relations
Phone: +46 10 719 69 92
E-mail: media.relations@ericsson.com

This is information that Telefonaktiebolaget LM Ericsson is obliged to make public pursuant to the EU Market Abuse Regulation. The information was submitted for publication, through the agency of the contact person set out above, at 07:00 CEST on October 14, 2025.

This information was brought to you by Cision http://news.cision.com
https://news.cision.com/ericsson/r/ericsson-reports-third-quarter-results-2025,c4249501

The following files are available for download:
https://mb.cision.com/Main/15448/4249501/3720385.pdf Ericsson Q3 2025 ENG
https://mb.cision.com/Public/15448/4249501/b4daaa7dd0442ddb.xlsx Q3-25 tables
ASUS AI POD built on the NVIDIA GB300 NVL72 platform and the latest XA NB3I-E12 AI servers accelerated by the NVIDIA HGX B300 system are now shipping for enterprise AI

TAIPEI, Oct. 14, 2025 /PRNewswire/ -- ASUS today announced its participation in the 2025 OCP Global Summit, being held from October 13–16 at the San Jose Convention Center, booth #C15. At the event, ASUS unveiled its XA NB3I-E12 series AI servers, based on the NVIDIA® HGX B300 system and integrating NVIDIA ConnectX-8 InfiniBand SuperNICs, five PCIe® expansion slots, 32 DIMM slots, and 10 NVMe drive bays. Designed for enterprises and cloud service providers (CSPs) managing intensive AI workloads, these servers deliver outstanding performance and stability, unlocking the full potential of AI.

ASUS Unveils AI Factory and Next-Gen Servers with NVIDIA HGX B300 systems at OCP 2025

This September, ASUS AI POD built on NVIDIA GB300 NVL72 and XA NB3I-E12 servers based on NVIDIA HGX B300 began shipping, giving enterprises and cloud service providers early access to cutting-edge AI performance and reliability.

Driving AI transformation with ASUS AI Factory
ASUS is also showcasing the ASUS AI Factory built on the NVIDIA Blackwell architecture. Featured products include the ASUS AI POD built on the NVIDIA GB300 NVL72 platform and the XA NB3I-E12 servers accelerated by the NVIDIA HGX B300 system. These solutions serve as foundational building blocks for enterprise AI factories. The ASUS AI Factory is a comprehensive, end-to-end approach that integrates cutting-edge hardware, optimized software platforms, and professional services to accelerate enterprise AI adoption. It enables organizations to deploy AI workloads from edge devices to large-scale AI supercomputing environments, supporting diverse applications such as generative AI, natural language processing, and predictive analytics.
By combining ASUS servers, rack-scale ASUS AI PODs, and high-serviceability designs, the AI Factory reduces deployment complexity, improves operational efficiency, and maximizes computing resources. All of these products are on display on-site, giving a firsthand look at how the AI Factory empowers enterprises and cloud service providers to innovate faster, scale reliably, and unlock the full potential of AI across industries, from manufacturing automation to smart-city initiatives. This holistic ecosystem ensures seamless integration, flexible deployment, and the scalability required for the rapidly evolving AI landscape.

Furthermore, as part of its AI-inference lineup, the ASUS Ascent GX10, a compact personal AI supercomputer accelerated by the NVIDIA GB10 Grace Blackwell Superchip, will be available from October 15. Delivering up to 1 petaFLOP of AI performance for demanding workloads and equipped with an NVIDIA Blackwell GPU, a 20-core Arm CPU, and 128GB of memory, the GX10 supports AI models of up to 200 billion parameters, bringing petaflop-scale inferencing directly to developers' desktops.

Optimizing AI workloads with AMD EPYC 9005 processors
In addition, ASUS showcased server solutions powered by AMD EPYC™ 9005 processors, offering high performance and density for AI-driven, mission-critical data center workloads. The ASUS ESC8000A-E13X accelerates generative AI and LLM applications and is fully compatible with the NVIDIA RTX PRO 6000 Blackwell Server Edition, while an embedded NVIDIA ConnectX-8 SuperNIC supports 400G InfiniBand/Ethernet per QSFP port for ultra-low-latency, high-bandwidth connectivity, enabling unmatched scale-out performance with NVIDIA Quantum InfiniBand or Spectrum-X Ethernet networking platforms.
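For scale, the claim that the GX10's 128GB of memory can hold a 200-billion-parameter model is consistent only at low weight precision; a back-of-the-envelope sketch (the quantization levels are our illustration, not an ASUS specification):

```python
# Rough weight-memory footprint of a 200B-parameter model at different precisions.
params = 200e9

for bits in (16, 8, 4):
    weight_gb = params * (bits / 8) / 1e9   # bytes per parameter = bits / 8
    fits = weight_gb < 128                  # GX10 unified memory budget
    print(f"{bits}-bit weights: {weight_gb:.0f} GB -> fits in 128 GB: {fits}")
```

Only the 4-bit case (about 100 GB of weights) leaves headroom for activations and the KV cache, which is the usual regime for desktop inference of models this size.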
The RS520QA-E13 series comprises high-performance multi-node servers optimized for HPC, EDA, and cloud computing, supporting up to 20 DIMM slots per node with advanced CXL memory expansion, PCIe 5.0, and OCP 3.0, maximizing efficiency for demanding workloads.

Join the ASUS 2025 OCP Global Summit session
Don't miss the 15-minute ASUS session, "Infrastructure for Every Scale—from Edge to Trillion-Token AI," at the Expo Hall Stage on October 15 from 16:25–16:40. During this presentation, we will share how ASUS helps customers build future-ready AI data centers. Learn how our servers, rack-scale ASUS AI PODs with NVIDIA GB200/GB300 NVL72, and high-serviceability designs address diverse AI workloads and deployment challenges.
OCP Launches New "Open Data Center for AI" Strategic Initiative

SAN JOSE, Calif., Oct. 14, 2025 /PRNewswire/ -- Today, the Open Compute Project Foundation (OCP), the nonprofit international organization bringing at-scale innovations and hyperscale best practices to all, announced an expansion of its Open Systems for AI umbrella initiative with the new Open Data Center for AI Strategic Initiative (SI), increasing efforts on key data center infrastructure challenges: power, cooling, mechanical design, and management telemetry. The new strategic initiative responds to a large increase in data center physical infrastructure projects and workstreams launched in the past year, learnings from the OCP Open Systems for AI SI workshop series, and a new open-letter call for collaboration. With strong support from the OCP Board and stakeholders, the Foundation invites other organizations to sign this letter, which was initiated by Google, Meta, and Microsoft. This underscores the OCP Foundation's mission to support the entire open data center ecosystem, covering IT as well as physical data center infrastructure and facilities.

The mandate of the Open Data Center for AI SI is to develop standards for data center infrastructure that allow advanced, high-density AI infrastructure to be deployed as flexibly as traditional compute, with facilities built on a common understanding of management telemetry and advanced power and cooling technologies, enabling simpler deployment of a wide variety of AI solutions. The problem facing data center partners, including hyperscalers, neoclouds, co-location providers, enterprise users, and technology providers, is that siloed efforts produce competing design requirements, which slows innovation and extends deployment timelines.
The goal is to identify and specify requirements for AI data centers so that common ground in the physical infrastructure enables fungibility across diverse AI IT infrastructure, especially while aspects of the AI IT elements are rapidly evolving. This will enable colocation data center providers to support a wider range of tenants with fewer customizations.

"The OCP Community's vision of the open data center ecosystem continues to enable solving the challenges of building at-scale AI clusters and the infrastructure that houses them. We are continuing to utilize OCP's open, collaborative, and unique value proposition and its large community and ecosystem to develop open specifications and standards that address the bottlenecks that threaten to constrain the future of AI growth. We firmly believe OCP's role in fostering development of open, standardized, sustainable, and scalable infrastructure to be increasingly vital to the industry and its supply chain, enabling it to deliver on AI's transformative potential cost-effectively and with faster TTM, while managing its environmental impact," said George Tchaparian, CEO at the Open Compute Project Foundation.

The Open Data Center for AI SI will build on several work efforts already underway within the OCP Community: the new Coolant Distribution Unit (CDU) Project, covering integration of facilities' technology cooling systems and facility water systems into IT rack liquid cooling, and a facilities-level Power Distribution Project, covering the transition to a direct-current distribution architecture that supports high-powered IT racks.
Other notable recent contributions include Mt Diablo (Diablo 400), a power-rack sidecar for powering AI clusters, co-authored by Google, Meta and Microsoft; the Deschutes Coolant Distribution Unit (CDU), authored by Google; Clemente, for high-performance AI compute trays, authored by Meta; and Hyperscale CPU RAS and Debug Requirements, for standardized debug capabilities for CPUs in hyperscale environments, co-authored by AMD, Google and Microsoft.

The Diablo specification by Google, Meta and Microsoft describes a disaggregated power rack, or sidecar rack, pushing power delivery from today's 48 volts direct current (VDC) within the rack to either +/-400 VDC or 800 VDC. The specification defines power solutions for high-density AI racks, enabling IT racks from 100 kilowatts up to 1 megawatt. Beyond simply increasing power delivery capacity, selecting 400 VDC as the nominal voltage leverages the supply chain established by electric vehicles, bringing greater economies of scale, proven quality, and more efficient manufacturing through standardized electrical and mechanical interfaces.

The Deschutes CDU is targeted to support ~2 MW heat loads, with hydraulic capacity targets of 500 GPM at 80-90 psi, which would be among the highest CDU thermal capacities available in the industry. It promises to enhance thermal management and operational efficiency. The specification enables any CDU supplier in the industry to develop, manufacture and improve upon the design. The CDU is assembled from components sourced from multiple vendors widely known in the industry, allowing vendors to build, and data center owners to purchase, a CDU based on this specification. Beyond supply chain considerations, installation and maintenance procedures are shared to enable fast deployment of reliable equipment.
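The case for higher distribution voltages, and the scale of the Deschutes thermal target, both follow from basic electrical and thermal arithmetic; a quick sketch (the coolant temperature-rise calculation is our illustration, not part of either specification):

```python
# Current required to deliver 1 MW at each distribution voltage (I = P / V).
# Higher voltage means far lower current, hence thinner busbars and lower losses.
power_w = 1_000_000
for volts in (48, 400, 800):
    print(f"{volts} VDC: {power_w / volts:,.0f} A")

# Approximate water temperature rise for a 2 MW heat load at 500 GPM
# (Q = m_dot * c * dT, with water's specific heat capacity).
GPM_TO_KG_S = 0.0631          # 1 US gallon/min of water ~ 0.0631 kg/s
C_WATER = 4186                # J/(kg*K)
m_dot = 500 * GPM_TO_KG_S
delta_t = 2_000_000 / (m_dot * C_WATER)
print(f"coolant delta-T: {delta_t:.1f} K")  # roughly 15 K across the load
```

At 48 VDC a 1 MW rack would need over 20,000 A, which is why the Diablo sidecar moves distribution to 400 or 800 VDC.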
The Clemente specification describes a 1RU compute tray that integrates two NVIDIA GB300 Host Processor Modules (HPMs) into a form factor with peripherals supporting Meta's AI/ML training and inference use cases. It also marks the first deployment of a design that uses the OCP ORv3 HPR (an in-progress specification contribution) with sidecar power racks. The platform includes both air-cooled and liquid-cooled components: the CPU, GPU and switch are liquid cooled, with the remaining components air cooled.

These Open Systems for AI efforts continue to solidify OCP as the premier open organization accelerating deployment of AI data centers. These resources and more are collected on OCP's newly opened AI portal on the OCP Marketplace, providing one location for AI cluster designers, builders and facility providers to find the latest available AI infrastructure products and reference material.

"With the AI infrastructure market moving very fast, there is a risk of higher costs due to fragmentation. It is the right time for an organization like the OCP to be facilitating a community to determine commonalities in data center facilities and IT infrastructure that can help accelerate the market for future generations of AI cluster deployments and data center facility builds," said Alan Weckel, Founder and Technology Analyst, 650 Group.

About the Open Compute Project Foundation
The Open Compute Project (OCP) brings at-scale innovations and hyperscale best practices to all, spanning technology domains from the data center to the edge, and the technology stack from silicon, to systems, to site facilities and services. The international OCP Community is made up of organizations and people from hyperscale and tier-2 cloud data center operators, communications providers, colocation providers, diverse enterprises, and technology vendors.
With the tenets of openness, impact, efficiency, scale and sustainability, the OCP engages and educates thousands of engineers every year. Across many projects and initiatives, the OCP Foundation and Community are meeting the market today and shaping the future. Learn more at: www.opencompute.org.

Media Contact
Dirk Van Slyke
Open Compute Project Foundation
Vice President, Chief Marketing Officer
dirkv@opencompute.org
Mobile: +1 303-999-7398 (Central Time Zone/CST/Austin, TX)
TAIPEI, Oct. 14, 2025 /PRNewswire/ -- InPsytech, a subsidiary of Egis Technology (6462.TWO) specializing in high-speed interface intellectual property (IP) development, announced today that it will participate in the Open Compute Project (OCP) Global Summit 2025, to be held in San Jose, California, from October 13–16, 2025. At the event, InPsytech will showcase its latest 3nm UCIe high-speed interface technology demo, featuring support for the newest UCIe 3.0 standard, 3D packaging integration, and ultra-high-speed, low-power performance. This demonstration highlights InPsytech's advanced R&D capabilities in Chiplet interconnect technology and its role in supporting Egis Group's broader strategy in semiconductor design and heterogeneous integration.

Industry-Leading 3nm 64G UCIe Technology — High Speed, Low Power, Full UCIe 3.0 Support
InPsytech has long focused on high-speed and low-power interface IP technologies. Its UCIe (Universal Chiplet Interconnect Express) portfolio supports advanced process nodes from 22nm down to 2nm, optimized for 3nm production. The IP achieves data transfer rates of up to 64 GT/s while supporting 2.5D and 3D packaging architectures to enable high-efficiency chip-to-chip interconnects and heterogeneous integration. The showcased 3nm UCIe 3.0 demo fully complies with the latest UCIe 3.0 specification, demonstrating InPsytech's leadership in next-generation process technology and Chiplet ecosystem development.

Collaborating with Alcor Micro to Advance the Arm Chiplet Ecosystem
InPsytech's UCIe technology has been successfully adopted in Alcor Micro Corp.'s latest Arm-based CPU platform, Mobius100 (CSS V3), which will also be featured at the OCP 2025 Summit. The platform is built on the Arm Neoverse CSS architecture, supporting CPU die-to-die interconnects and flexible integration with GPUs, NPUs, and various AI accelerators — driving advancements in heterogeneous computing and Chiplet design.
Both InPsytech and Alcor Micro are members of the Arm Total Design (ATD) program, jointly fostering collaboration and innovation across the Arm Chiplet ecosystem. "We are proud to demonstrate our leadership in this field alongside Alcor Micro," said David Hsu, COO of InPsytech. "We have strong confidence in the future of UCIe within the Arm Chiplet ecosystem and believe that InPsytech's UCIe technology will accelerate Chiplet adoption, bringing new innovation and breakthroughs to the global semiconductor industry."

Exhibition Details
Event: OCP Global Summit 2025
Date: October 13–16, 2025
Location: San Jose Convention Center, California, USA
Exhibit Zone: Innovation Village Zone
Showcase: 3nm UCIe 3.0 Demo (with 3D Package Support)
Partner: Alcor Micro Corp.
SAN JOSE, Calif., Oct. 13, 2025 /PRNewswire/ -- MiTAC Computing Technology Corporation, a professional server designer and manufacturer and a subsidiary of MiTAC Holdings Corp. (stock code: 3706), will appear at the 2025 OCP Global Summit (October 13–16, San Jose) at booth C14, showcasing its latest AI cluster and data center solutions. Under the theme "From AI Server to Cluster – Open for Growth. Built to Cool.", the exhibit demonstrates open infrastructure that scales from a single AI server to a complete cluster, paired with innovative, energy-saving liquid-cooling designs for greater performance and sustainability.

MiTAC Computing Showcases Future-Ready AI Cluster Solutions at the 2025 OCP Global Summit to Empower Open and Energy-Efficient Data Centers

The showcase spans AI, HPC, cloud, and enterprise applications, presenting MiTAC Computing's end-to-end server integration capability from standalone machines to clusters, in partnership with AMD, Broadcom, CoolIT, Intel, Micron, Murata, NVIDIA, and Solidigm to advance open computing and sustainable development.

From Server to Cluster | Complete data center solutions, from air to liquid cooling and AI to HPC racks
MiTAC Computing is exhibiting complete rack-level solutions that embody its "From AI Server to Cluster" philosophy. The OCP ORv3 rack and EIA-standard rack on display represent the two major directions: next-generation open infrastructure and the traditional enterprise data center.

OCP ORv3 liquid-cooled rack | Liquid cooling and modular power for sustainable, high-performance data centers
The OCP ORv3 43OU liquid-cooled rack on display houses up to 14 C2811Z5 multi-node servers, each node supporting AMD EPYC™ 9005-series processors and DDR5 memory expansion, purpose-built for high-performance computing and large-scale data center deployments. The cluster configuration adds Lake Erie storage modules alongside management and data switches, achieving full coordination of compute, storage, and networking.

On the infrastructure side, the rack adopts the Murata 33kW Power Shelf MWOCES-211-P-D power system, providing efficient, flexible power management that can sustain continuously high computing loads. For cooling, a CoolIT 200kW CHx200+ In-Rack CDU delivers 200kW-class heat removal, effectively extracting heat from high-density servers so the system maintains stable performance and energy efficiency over extended operation.

Through ORv3's modular design and open architecture, MiTAC Computing's liquid-cooled rack demonstrates high performance and reliability while balancing energy efficiency and sustainability, giving data centers a complete upgrade path from standalone server to cluster and from air to liquid cooling, and accelerating the adoption and scale-out of AI, HPC, and cloud applications.

EIA air-cooled rack | Standardized architecture with 800G interconnect for rapid AI cluster deployment
The EIA 45U air-cooled rack on display is configured with four G8825Z5 8U AI servers equipped with AMD Instinct™ MI350X/MI325X GPUs, providing powerful compute for large language model training and generative AI inference. Networking uses the Dell Z9864F-ON switch, built around the Broadcom Tomahawk 5 chip, which supports 800G high-speed interconnect for low-latency, highly reliable data transfer between nodes. The rack also integrates a GC68C-B8056 management server and a TS70A-B8056 storage server, tightly coupling high-performance computing with high-speed data access. By fully integrating servers, networking, management, and storage, MiTAC Computing extends the traditional EIA standard architecture to cluster scale, helping enterprises quickly build compatible, scalable, and highly available AI/HPC cluster environments without changing existing data center infrastructure, truly realizing "From Server to Cluster."

Live Demo | Open-source firmware for transparent, secure, and sustainable data centers
At booth C14, MiTAC Computing will also run a live demo showing how OpenBMC and Open Platform Firmware (OPF) change data center management in real environments. Working with the Open Source Firmware Foundation, ISVs, and the open hardware community, MiTAC Computing demonstrates an innovative path to replacing proprietary stacks with open firmware: consistent server management via Redfish; faster boot with selectable Coreboot/LinuxBoot/UEFI architectures, cutting POST time by up to 50%; Broadcom MegaRAID 9560-16i / MegaRAID 9660-16i RAID cards for stronger data access reliability and security; and auditable security mechanisms supporting SBOM and NIST 800-193 protection.

In addition, MiTAC Computing is working with global industry leaders on the OCP OPF specification and will show an Open Platform Firmware PoC already completed on AMD EPYC™ 9005/9004-series processor platforms, further validating its feasibility on real hardware and its potential for future expansion. The demo gives attendees a direct look at how open, modular, and sustainable infrastructure provides transparency, control, and long-term security for future server-to-cluster scaling.

Following the cluster solutions and open firmware innovations, MiTAC Computing will also present its full server lineup, covering AI computing, HPC and cloud, and enterprise data processing, demonstrating comprehensive upgrade capability from standalone server to cluster.

AI computing platforms | Liquid cooling and GPU acceleration for large-scale AI training and generative applications
G4527G6: Built on the NVIDIA MGX™ 4U architecture with dual Intel® Xeon® 6767P processors, configurable with eight NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and Solidigm D7-P5520 SSDs, combining powerful parallel GPU compute with efficient SSD data access, well suited to computer vision, deep learning model training, and AI application development.
G4826Z5: A debuting liquid-cooled high-density GPU platform supporting up to eight AMD Instinct™ MI355X GPUs and AMD EPYC™ 9005/9004-series processors, expandable to 24 DDR5-6400 DIMM slots for up to 6TB of memory. Full liquid cooling of both CPU and GPU keeps performance stable over long runs even at high density. A Broadcom P2200G network interface card provides fast, low-latency connectivity. Built for large-scale AI training and data-intensive computing, it is the flagship liquid-cooled product of this show.

HPC and cloud best practice | OCP-standard platforms balancing high performance and large-scale deployment
C2811Z5: An OCP-compliant liquid-cooled high-density multi-node server supporting AMD EPYC™ 9005-series processors, with 12 DDR5-6400 DIMM slots per node for up to 3TB of memory. NVMe E1.S storage with Micron 9550 NVMe SSDs maintains stable performance under high-bandwidth access and prolonged computation, making it well suited to HPC scenarios such as scientific simulation, engineering design, and weather analysis. Its liquid-cooling design not only improves cooling efficiency but also helps data centers strike the best balance among performance, energy efficiency, and reliability.
Capri 3: An OCP cloud server platform with modular, flexible expansion, paired with a Broadcom N1400GD network interface card for fast, stable networking. Suited to cloud virtualization, software-defined storage, and large data lake architectures, it flexibly handles diverse cloud workloads and large-scale data center deployments.

Enterprise data processing and storage | Flexible expansion and high reliability for enterprise applications and data-intensive workloads
R1520G6: An enterprise-grade 1U server with Intel® Xeon® 6700P-series processors, combining Micron DDR5 DRAM with Micron 6550 ION NVMe SSDs to deliver outstanding performance and endurance in memory-hungry, sustained high-density read/write scenarios, meeting strict enterprise requirements for reliability and efficiency.
R2520G6: An enterprise-grade 2U server supporting dual Intel® Xeon® 6700P-series processors and up to 24 NVMe U.2 SSDs. Equipped with Solidigm D7-PS1010 SSDs offering PCIe 5.0 bandwidth and sustained performance stability, it is well suited to data storage, big data analytics, and AI data preprocessing, helping enterprises turn massive data into timely insight for decision-making and operational optimization.

Beyond the complete rack solutions and live demo, MiTAC Computing will host two Executive Sessions on October 14, exploring future trends in AI clusters and the architectural evolution of sustainable data centers. MiTAC Computing cordially invites industry peers to visit booth C14 to experience its latest server and cluster solutions and to explore the path of "From Server to Cluster" innovation together.

For product catalogs and more information, see: MiTAC Computing 2025 OCP Global Summit Landing Page
MiTAC Computing website: https://www.mitaccomputing.com/
Intel platform catalog
AMD platform catalog
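The Redfish-based management shown in MiTAC's live demo exposes each server as a RESTful resource tree rooted at /redfish/v1. A minimal sketch of walking that tree to read a machine's power state, using only the standard library (the BMC address and session token are hypothetical; real deployments authenticate against the BMC's own endpoint):

```python
import json
import ssl
import urllib.request

def first_system_uri(systems_collection: dict) -> str:
    """Return the @odata.id of the first ComputerSystem in a Redfish Systems collection."""
    return systems_collection["Members"][0]["@odata.id"]

def redfish_get(base: str, path: str, token: str) -> dict:
    # Redfish sessions pass the token in the X-Auth-Token header.
    req = urllib.request.Request(base + path, headers={"X-Auth-Token": token})
    ctx = ssl._create_unverified_context()  # lab BMCs often use self-signed certs
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

def fetch_power_state(base: str = "https://10.0.0.1", token: str = "hypothetical-token") -> str:
    systems = redfish_get(base, "/redfish/v1/Systems", token)
    system = redfish_get(base, first_system_uri(systems), token)
    return system["PowerState"]  # e.g. "On" or "Off"
```

Because every vendor's OpenBMC build exposes the same schema, the same client code manages heterogeneous racks, which is the point of the "consistent server management via Redfish" claim above.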
SAN JOSE, Calif., Oct. 13, 2025 /PRNewswire/ -- PEGATRON, a globally recognized total server solution provider delivering end-to-end expertise across AI, HPC, cloud, networking, storage, and cluster-scale data center deployments, will present its latest innovations at the OCP Global Summit 2025, highlighting a full portfolio of next-generation platforms designed for AI, HPC, and professional visualization workloads. The showcase features NVIDIA GB300 NVL72, NVIDIA HGX B300, and AMD Instinct™ MI355X platforms, as well as NVIDIA RTX PRO servers.

PEGATRON OCP Global Summit 2025

NVIDIA GB300 NVL72 - Accelerating AI Reasoning with Extreme-Scale Performance
At the forefront of PEGATRON's lineup is the RA4802-72N2, built on the NVIDIA GB300 NVL72 platform with 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs. Comprising 18 compute trays, 9 NVIDIA NVLink switch trays, and 18 NVIDIA BlueField-3 DPUs with NVIDIA ConnectX-8 SuperNICs, the system delivers 130 TB/s of aggregate NVLink Switch bandwidth to accelerate large-scale AI training and inference. PEGATRON has officially begun shipping the RA4802-72N2, delivering industry-leading AI performance with streamlined deployment for large-scale data centers.

NVIDIA HGX B300 Platform - Scaling AI from Server to Rack to Cluster
At the core of PEGATRON's AI reasoning solutions is the NVIDIA HGX B300 platform, accelerated by NVIDIA Blackwell Ultra GPUs, which delivers 7X more AI compute than NVIDIA Hopper, enabling next-generation AI reasoning and the most demanding workloads across data centers. The PEGATRON AS402-2T1-8H2 is a powerful 4U liquid-cooled system with 8 NVIDIA Blackwell Ultra GPUs, featuring 2.1 TB of HBM3e memory, dual Intel® Xeon® 6 processors, and NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s ultra-fast networking.
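Dividing the quoted aggregate NVLink Switch bandwidth across the rack's GPUs gives a rough per-GPU figure; a one-line sketch (even division is our simplifying assumption, not a PEGATRON specification):

```python
# Per-GPU share of the GB300 NVL72 rack's aggregate NVLink Switch bandwidth.
aggregate_tb_s = 130   # quoted aggregate bandwidth for the RA4802-72N2
gpus = 72              # Blackwell Ultra GPUs in the rack
print(f"~{aggregate_tb_s / gpus:.1f} TB/s per GPU")
```

That works out to roughly 1.8 TB/s per GPU, an order of magnitude above the 800 Gb/s (0.1 TB/s) per-port scale-out networking mentioned for the HGX B300 systems.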
For organizations requiring air-cooled configurations, the AS801-2T1-8H2 offers an 8U solution with the same GPU and CPU configuration, giving enterprises deployment flexibility while maintaining exceptional performance. Scaling further, the RA4400-64H1 is a 44U liquid-cooled rack solution that integrates 8 individual 4U systems, delivering a total of 64 NVIDIA Blackwell Ultra GPUs and 16 Intel® Xeon® 6 processors for unprecedented compute density.

AMD Instinct™ MI355X Platform - Breakthrough AI Supercomputing with Ultra-High-Density 128 GPUs per Rack
PEGATRON expands its AMD Instinct™ portfolio with the AS501-4A1-16I1, a high-density liquid-cooled system featuring 4 AMD EPYC™ 9005 processors and 16 AMD Instinct™ MI355X GPUs in a 5OU chassis, with 288 GB of HBM3E memory and 8 TB/s of bandwidth per GPU. Scaling up, the RA5100-128I1, an ultra-high-density liquid-cooled rack solution with 128 GPUs and 32 CPUs, provides a powerful foundation for AI training, generative AI, HPC, and scientific computing.

NVIDIA RTX PRO Servers – The Ultimate Universal Data Center Computing Platform for Enterprise and Industrial AI
Completing PEGATRON's portfolio are two new server platforms powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. The AS205-2T1 is a compact 2U, 2-socket system featuring 4 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs (600W TDP) and dual Intel® Xeon® 6 processors, designed for top-tier performance in space-constrained environments. For higher density and scalability, the AS400-2A1-CX8 is a powerful 4U, 2-socket system with 8 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs (600W TDP), dual AMD EPYC™ 9005 processors, and NVIDIA ConnectX-8 SuperNICs with a PCIe Gen 6 switch for maximum throughput. Together, these systems integrate NVIDIA Blackwell GPUs to deliver cutting-edge AI performance for photorealistic rendering, generative AI, digital twin simulations, and complex industrial workflows.
"AI is transforming every industry, and the demand for advanced infrastructure is accelerating rapidly," said May Wang, General Manager of PEGATRON Server Business. "PEGATRON is ready for the AI-driven future with platforms engineered for next-generation performance and proven in real-world deployments, enabling enterprises to scale seamlessly from server efficiency to cluster-scale supercomputing."

PEGATRON continues to push the boundaries of AI and HPC infrastructure, delivering platforms that combine performance, efficiency, and scalability. We invite industry leaders, partners, and innovators to visit us at the OCP Global Summit 2025, Booth A33, and experience how PEGATRON is powering the future of intelligent data centers.

For more information, please visit the PEGATRON SVR website and follow us on LinkedIn and YouTube.
https://svr.pegatroncorp.com
https://www.linkedin.com/showcase/pegatron-svr/?originalSubdomain=tw
https://www.youtube.com/@pegatroncorp.6158

About PEGATRON
PEGATRON Corporation (hereafter "PEGATRON"), with abundant product development experience and vertically integrated manufacturing, is committed to providing clients with innovative design, systematic production, and manufacturing services to comprehensively and efficiently satisfy customers' needs. Drawing on accumulated experience in server design, manufacturing, and deployment, PEGATRON focuses on developing state-of-the-art servers, including liquid-cooled and air-cooled server solutions based on x86 and Arm architectures, racks, and AI clusters, that meet the requirements of present and future cloud service providers' data centers as well as enterprise-grade data centers.
PEGATRON Corporation Website: https://www.pegatroncorp.com/