
Search Results

Search results for "CPU" news: 802 articles in total, showing items 49–72.
ASUS Unveils the Latest ASUS AI POD Featuring NVIDIA GB300 NVL72

Major orders secured as ASUS expands leadership in AI infrastructure solutions

SAN JOSE, Calif., March 19, 2025 /PRNewswire/ -- ASUS today joined GTC 2025 (Booth #1523) as a diamond sponsor to showcase the latest ASUS AI POD with the NVIDIA® GB300 NVL72 platform. The company is also proud to announce that it has already garnered substantial order placements, marking a significant milestone in the technology industry.

ASUS XA NB3I-E12, featuring HGX B300 NVL16, delivers breakthrough performance to meet the evolving needs of every data center

At the forefront of AI innovation, ASUS also presents its latest AI servers in the Blackwell and HGX™ family line-up. These include the ASUS XA NB3I-E12 powered by NVIDIA B300 NVL16, the ASUS ESC NB8-E11 with NVIDIA DGX B200 8-GPU, the ASUS ESC N8-E11V with NVIDIA HGX H200, and the ASUS ESC8000A-E13P/ESC8000-E12P, which will support the NVIDIA RTX PRO 6000 Blackwell Server Edition with MGX architecture. ASUS is positioned to provide comprehensive infrastructure solutions in combination with the NVIDIA AI Enterprise and NVIDIA Omniverse platforms, empowering clients to accelerate their time to market.

ASUS AI POD with NVIDIA GB300 NVL72

By integrating the immense power of the NVIDIA GB300 NVL72 server platform, ASUS AI POD offers exceptional processing capabilities, empowering enterprises to tackle massive AI challenges with ease. Built with NVIDIA Blackwell Ultra, GB300 NVL72 leads the new era of AI with optimized compute, increased memory, and high-performance networking, delivering breakthrough performance. It is equipped with 72 NVIDIA Blackwell Ultra GPUs and 36 Grace CPUs in a rack-scale design, delivering increased AI FLOPS and providing up to 40TB of high-speed memory per rack. It also includes networking platform integration with NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet, SXM7 and SOCAMM modules designed for serviceability, a 100% liquid-cooled design, and support for trillion-parameter LLM inference and training.
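The rack-scale figures quoted above (72 Blackwell Ultra GPUs, 36 Grace CPUs, up to 40TB of memory per rack) can be sanity-checked with quick arithmetic; the per-GPU split below is purely illustrative, since the 40TB figure pools memory across GPUs and CPUs.

```python
# Illustrative arithmetic for the NVIDIA GB300 NVL72 rack figures in the
# press release: 72 GPUs, 36 Grace CPUs, up to 40 TB of memory per rack.
# The per-GPU average is an illustration only, not a hardware spec.

GPUS_PER_RACK = 72
CPUS_PER_RACK = 36
RACK_MEMORY_TB = 40

gpus_per_cpu = GPUS_PER_RACK // CPUS_PER_RACK            # 2 GPUs per Grace CPU
avg_memory_per_gpu_gb = RACK_MEMORY_TB * 1000 / GPUS_PER_RACK

print(f"{gpus_per_cpu} GPUs per Grace CPU")
print(f"~{avg_memory_per_gpu_gb:.0f} GB of rack memory per GPU on average")
```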
ASUS has shown expertise in building NVIDIA GB200 NVL72 infrastructure from the ground up. To achieve peak computing efficiency with a software-defined storage (SDS) architecture, the ASUS RS501A-E12-RS12U is also on show. This powerful SDS server effectively reduces the latency of data training and inferencing, and complements NVIDIA GB200 NVL72. ASUS presents an extensive service scope from hardware to cloud-based applications, covering architecture design, advanced cooling solutions, rack installation, large-scale validation/deployment, and AI platforms, harnessing its extensive expertise to empower clients to achieve AI infrastructure excellence.

Kaustubh Sanghani, vice president of GPU products at NVIDIA, commented: "NVIDIA is working with ASUS to drive the next wave of innovation in data centers. Leading ASUS servers combined with the Blackwell Ultra platform will accelerate training and inference, enabling enterprises to unlock new possibilities in areas such as AI reasoning and agentic AI."

GPU servers for heavy generative AI workloads

ASUS will also showcase a series of NVIDIA-certified servers, supporting applications and workflows built with the NVIDIA AI Enterprise and Omniverse platforms. The 10U ASUS ESC NB8-E11 is equipped with the NVIDIA Blackwell HGX B200 8-GPU for unmatched AI performance. The ASUS XA NB3I-E12 features HGX B300 NVL16, with increased AI FLOPS, 2.3TB of HBM3e memory, and networking platform integration with NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet; Blackwell Ultra delivers breakthrough performance for AI reasoning, agentic AI, and video inference applications to meet the evolving needs of every data center. Finally, the 7U ASUS ESC N8-E11V dual-socket server is powered by eight NVIDIA H200 GPUs, supports both air-cooled and liquid-cooled options, and is engineered to provide effective cooling and innovative components.
Scalable servers to master AI inference optimization

ASUS also presents server and edge AI options for AI inferencing: the ASUS ESC8000 series, embedded with the latest NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. The ASUS ESC8000-E12P is a high-density 4U server for eight dual-slot high-end NVIDIA H200 GPUs and supports the NVIDIA AI Enterprise and Omniverse software suites. It is also fully compatible with the NVIDIA MGX architecture to ensure flexible scalability and fast, large-scale deployment. Additionally, the ASUS ESC8000A-E13P, a 4U NVIDIA MGX server, supports eight dual-slot NVIDIA H200 GPUs and provides seamless integration, optimization, and scalability for modern data centers and dynamic IT environments.

Groundbreaking AI supercomputer, ASUS Ascent GX10

ASUS today also announces its groundbreaking AI supercomputer, the ASUS Ascent GX10, in a compact package. Powered by the state-of-the-art NVIDIA GB10 Grace Blackwell Superchip, it delivers 1,000 AI TOPS of performance, making it ideal for demanding workloads. The Ascent GX10 is equipped with a Blackwell GPU, a 20-core Arm CPU, and 128GB of memory, supporting AI models with up to 200 billion parameters. This revolutionary device places the formidable capabilities of a petaflop-scale AI supercomputer directly onto the desks of developers, AI researchers, and data scientists around the globe.

ASUS IoT showcases its edge AI computers at GTC, featuring the PE2100N with NVIDIA Jetson AGX Orin™, delivering 275 TOPS for generative AI and robotics. The PE8000G supports dual 450W NVIDIA RTX™ GPUs, excelling in real-time perception AI. With rugged designs and wide operating temperature ranges, both are ideal for computer vision, autonomous vehicles, and intelligent video analytics.

AVAILABILITY & PRICING

ASUS AI infrastructure solutions and servers are available worldwide. Please contact your local ASUS representative for further information.

Source: PR Newswire | Views: 125
Supermicro Expands Enterprise AI Portfolio of over 100 GPU-Optimized Systems Supporting the Upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA H200 NVL Platform

With a broad range of form factors, Supermicro's expanded portfolio of PCIe GPU systems can scale to the most demanding data center requirements, from systems with up to 10 double-width GPUs to low-power intelligent edge systems, providing maximum flexibility and optimization for enterprise AI LLM-inference workloads.

SAN JOSE, Calif., March 19, 2025 /PRNewswire/ -- GTC 2025 Conference – Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today announced support for the new NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPUs on a range of workload-optimized GPU servers and workstations. Specifically optimized for the NVIDIA Blackwell generation of PCIe GPUs, the broad range of Supermicro servers will enable more enterprises to leverage accelerated computing for LLM inference and fine-tuning, agentic AI, visualization, graphics & rendering, and virtualization. Many Supermicro GPU-optimized systems are NVIDIA-Certified, guaranteeing compatibility and support for NVIDIA AI Enterprise to simplify the process of developing and deploying production AI.

Supermicro GPU for Enterprise AI

"Supermicro leads the industry with its broad portfolio of application-optimized GPU servers that can be deployed in a wide range of enterprise environments with very short lead times," said Charles Liang, president and CEO of Supermicro. "Our support for the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU adds yet another dimension of performance and flexibility for customers looking to deploy the latest in accelerated computing capabilities from the data center to the intelligent edge. Supermicro's broad range of PCIe GPU-optimized products also supports NVIDIA H200 NVL in 2-way and 4-way NVIDIA NVLink™ configurations to maximize inference performance for today's state-of-the-art AI models, as well as accelerating HPC workloads."

For more information, please visit https://www.supermicro.com/en/accelerators/nvidia/pcie-gpu.
The NVIDIA RTX PRO 6000 Blackwell Server Edition is a universal GPU, optimized for both AI and graphics workloads. The new GPU features significantly enhanced performance compared to the prior-generation NVIDIA L40S, including faster GDDR7 memory and 2x more memory capacity, PCIe 5.0 interface support for faster GPU-CPU communication, and new Multi-Instance GPU (MIG) capabilities that allow a single GPU to be shared across up to 4 fully isolated instances. In addition, Supermicro GPU-optimized systems are designed to also support NVIDIA SuperNICs such as NVIDIA BlueField®-3 and NVIDIA ConnectX®-8 for the best infrastructure scaling and GPU clustering with NVIDIA Quantum InfiniBand and NVIDIA Spectrum Ethernet.

"The NVIDIA RTX PRO 6000 Blackwell Server Edition is the ultimate data center GPU for AI and visual computing, offering unprecedented acceleration for the most demanding workloads," said Bob Pette, Vice President of Enterprise Platforms at NVIDIA. "The NVIDIA RTX PRO 6000 Blackwell Server Edition expands Supermicro's broad lineup of NVIDIA-accelerated systems to speed virtually every workload across AI development and inference."

In addition to the enterprise-grade NVIDIA RTX PRO 6000 Blackwell Server Edition, selected Supermicro workstations will also support the new NVIDIA RTX PRO 6000 Blackwell Workstation Edition and NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition, the most powerful professional-grade GPUs for AI processing and development, 3D rendering, media, and content creation workloads. Supermicro system families supporting the new GPUs include the following:

NVIDIA RTX PRO 6000 Blackwell Server Edition

5U PCIe GPU – Highly flexible, thermally optimized architectures designed to support up to 10 GPUs in a single chassis with air cooling. Systems feature dual-socket CPUs and PCIe 5.0 expansion to facilitate high-speed networking.
Key workloads include AI inference and fine-tuning, 3D rendering, digital twin, scientific simulation, and cloud gaming.

NVIDIA MGX™ – GPU-optimized systems based on the NVIDIA modular reference design, supporting up to 4 GPUs in 2U or 8 GPUs in 4U, for industrial automation, scientific modeling, HPC, and AI inference applications.

3U Edge-optimized PCIe GPU – Compact form factor designed for edge data center deployments, supporting up to 8 double-width or 19 single-width GPUs per system. Key workloads include EDA, scientific modeling, and edge AI inferencing.

SuperBlade® – Density-optimized and energy-efficient multi-node architecture designed for maximum rack density, with up to 120 GPUs per rack.

Rackmount Workstation – Workstation performance and flexibility in a rackmount form factor, offering increased density and security for organizations looking to utilize centralized resources.

NVIDIA RTX PRO 6000 Blackwell Workstation Edition and NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition

Tower Workstation – A range of desktop and under-desk form factors designed for AI, 3D media, and simulation applications, ideal for AI developers, creative studios, educational institutions, field offices, and laboratories.

Supporting other currently available GPUs, including H200/H100 NVL, L40S, L4, and more:

4U GPU-optimized – Up to 10 double-width GPUs with single-root and dual-root configurations available, as well as tower GPU servers supporting up to 4 double-width GPUs.

1U and 2U MGX™ – Compact GPU-optimized systems based on NVIDIA's modular reference design with up to 4 double-width GPUs.

1U and 2U rackmount platforms – Flagship-performance Hyper and Hyper-E, and Cloud Data Center-optimized CloudDC, supporting up to 4 double-width or 8 single-width GPUs.

Multi-processor – 4- and 8-socket architectures designed for maximum memory and I/O density, with up to 2 double-width GPUs in 2U or 12 double-width GPUs in 6U.
Edge – Compact edge box PCs supporting 1 double-width GPU or 2 single-width GPUs.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.

Source: PR Newswire | Views: 268
MinIO Deepens Support for the NVIDIA AI Ecosystem

New MinIO AIStor integrations leverage NVIDIA's emerging infrastructure technologies and capabilities to rapidly deliver unparalleled innovation in AI storage at the multi-exabyte scale.

REDWOOD CITY, Calif., March 17, 2025 /PRNewswire/ -- To further support the demands of modern AI workloads, MinIO, the leader in AI data storage for the exascale data era, today at NVIDIA GTC unveiled three crucial, upcoming advancements to MinIO AIStor that deepen its support for the NVIDIA AI ecosystem. The new integrations will help users maximize the utilization and efficiency of their AI infrastructures while streamlining their management, freeing up personnel for more strategic AI activities. The new MinIO AIStor features include:

Support for NVIDIA GPUDirect Storage (GDS) for object storage: Delivers a significant increase in CPU efficiency on the NVIDIA GPU server by avoiding the traditional data path through the CPU, freeing up compute for additional AI data processing while reducing infrastructure costs via support for Ethernet networking fabrics.

Native integration with the NVIDIA BlueField-3 networking platform: Drives down object storage Total Cost of Ownership (TCO) while ensuring industry-leading performance, optimizing data-driven and AI workloads at petabyte to exabyte scale for modern enterprise environments.

Incorporation of NVIDIA NIM microservices into AIStor promptObject inference: Brings simplified deployment and management of inference infrastructure while also enabling AIStor's new S3 API promptObject, which allows users to "talk" to unstructured objects in the same way one would engage an LLM, to deliver faster inference via model optimizations for NVIDIA hardware.

"MinIO's strong alignment with NVIDIA allows us to rapidly innovate AI storage at multi-exabyte scale, leveraging their latest infrastructure," said AB Periasamy, co-founder and co-CEO, MinIO.
"This approach delivers high-performance object storage on commodity hardware, enabling enterprises to future-proof their AI, maximize GPU utilization, and lower costs."

Maximizing the Utilization and Efficiency of AI Compute Infrastructure (GPUs and CPUs)

NVIDIA GPUDirect Storage (GDS) initially required InfiniBand, which necessitates specialized hardware. NVIDIA has since extended the benefits of GDS to Ethernet networks. This innovation provides flexibility, scalability, and cost efficiency, making it an ideal solution for accelerated AI adoption at scale for enterprises. Renowned for its high performance, MinIO AIStor already fully utilizes the available per-node network bandwidth to feed data-hungry GPUs. MinIO AIStor's GDS for object storage implementation leverages Ethernet fabrics by establishing a direct data path between MinIO AIStor and NVIDIA GPU memory, reducing the burden on NVIDIA GPU server CPUs. This drastically improves overall GPU server efficiency, increasing resources for additional AI-related compute. MinIO AIStor and NVIDIA GDS deliver a more efficient, adaptable solution for scaling AI infrastructure, and create a streamlined, ultra-fast pipeline that turns data lakes into high-speed AI/ML training environments.

"We are excited to see MinIO bring AIStor to the NVIDIA AI ecosystem and to explore how AIStor and GPUDirect Storage together perform under the specific demands of our workloads," said Alex Timofeyev, Director, High Performance Compute Engineering and Operations, Recursion.
"Our work requires high scalability, throughput, and efficiency in handling AI workloads. Based on preliminary testing, we believe that MinIO AIStor will increase the CPU efficiency of our AI compute infrastructure, ultimately enhancing the performance and economics of our data environment."

Additionally, MinIO AIStor becomes the first and only object storage software to run natively on NVIDIA's BlueField-3 Data Processing Unit (DPU), made possible by AIStor's remarkably compact ~100MB footprint. This ultra-efficient, low-cost architecture completely eliminates the need for separate x64 CPUs, transforming what were already commodity storage servers into MinIO- and NIC-powered JBOFs (Just a Bunch of Flash). MinIO AIStor leverages Arm's Scalable Vector Extension (SVE) instruction set to deliver MinIO's industry-leading object storage performance and inline data management features directly from NVIDIA BlueField-3 DPUs. This integration allows MinIO to be Spectrum-X ready, ensuring seamless integration with NVIDIA's next-generation networking stack for AI and high-performance workloads. It also means customers will get seamless integration with GPUDirect Storage for object storage, optimizing GPU server efficiency by minimizing data movement overhead.

Maximizing Inference Performance and Streamlining Infrastructure Management

MinIO AIStor simplifies AI-powered interactions with stored objects through the incorporation of AIStor promptObject into the NVIDIA NIM microservices inference infrastructure. NIM provides pre-built Docker containers, Helm charts, and the GPU Operator, which automates the deployment and management of drivers and the rest of the inference stack on the NVIDIA GPU server. MinIO AIStor, leveraging NVIDIA NIM microservices, accelerates time to value and frees personnel from manual data pipeline and infrastructure building, enabling them to concentrate on strategic AI initiatives.
In addition, NVIDIA NIM model optimizations for NVIDIA hardware deliver accelerated promptObject inference results. These new features and integrations are open to beta customers under private preview. MinIO AIStor support for NVIDIA GDS and native integration with the NVIDIA BlueField-3 networking platform will be released in alignment with NVIDIA's GA calendar. To request a demo, visit min.io. To learn more about each feature and integration, visit:

MinIO AIStor support for NVIDIA GDS
Native integration with NVIDIA BlueField-3 networking platform
MinIO AIStor integration with NVIDIA NIM
Enterprise AI Infrastructure Made Easy with AIStor and NVIDIA GPUs

About MinIO

MinIO is the leader in high-performance object storage for AI. With 2B+ Docker downloads and 50K+ stars on GitHub, MinIO is used by more than half of the Fortune 500 to achieve performance at scale at a fraction of the cost of the public cloud providers. MinIO AIStor is uniquely designed to meet the flexibility and exascale requirements of AI, empowering organizations to fully capitalize on existing AI investments and address emerging infrastructure challenges while delivering continuous business value. Founded in November 2014 by industry visionaries AB Periasamy and Garima Kapoor, MinIO is the world's fastest-growing object store.

Media Contact: Tucker Hallowell, Inkhouse, minio@inkhouse.com

Source: PR Newswire | Views: 100
Supermicro Brings Superior Performance and Efficiency to AI at the Edge (Chinese-language release)

New servers drive smarter, faster, and more efficient AI from the data center to the edge, with the entire Intel® Xeon® 6 processor family supporting over 40% more memory bandwidth and up to 144 CPU cores.

SAN JOSE, Calif. and NUREMBERG, Germany, March 12, 2025 /PRNewswire/ -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is introducing a range of new systems that are fully optimized for edge and embedded workloads. Several of these new compact servers, based on the latest Intel Xeon 6 SoC processor family (formerly codenamed Granite Rapids-D), empower businesses to optimize real-time AI inferencing and enable smarter applications across many key industries.

Systems for AI and embedded workloads

"As the demand for Edge AI solutions grows, businesses need highly reliable, compact systems that can process data at the edge in real time," said Charles Liang, president and CEO of Supermicro. "Supermicro designs and deploys the industry's broadest range of application-optimized systems, from the data center to the far edge. Our latest generation of edge servers delivers advanced AI capabilities for enhanced efficiency and more accurate decision-making close to where the data is generated. With up to a 2.5x increase in core count at the edge, along with improved performance per watt and per core, these new Supermicro compact systems are fully optimized for workloads such as Edge AI, telecom, networking, and content delivery networks (CDN)."

For more information, please visit https://www.supermicro.com/en/products/embedded/servers

Supermicro's new SYS-112D series systems are designed to run high-performance Edge AI solutions and feature the recently launched Intel Xeon 6 SoC processors with P-cores. These servers deliver higher performance, improved performance per watt, and greater memory bandwidth than previous-generation systems. The new servers also include AI acceleration, Intel® QuickAssist Technology with wireless protocol support, Intel vRAN Boost technology, Intel® Data Streaming Accelerator, and more.

Supermicro's SYS-112D-36C-FN3P features an Intel Xeon 6 SoC with 36 P-cores, dual 100 GbE QSFP28 ports, up to 512GB of DDR5 memory, and one PCIe 5.0 FHFL slot for a GPU or other expansion card. Combined with Intel's media acceleration and QuickAssist technologies, this makes the system ideal for Edge AI and media workloads. With a chassis only 399mm (15.7 inches) deep and front I/O access, it can easily be deployed in space-constrained environments or embedded in larger systems. Another server based on the same platform, the SYS-112D-42C-FN8P, offers a more telco-oriented configuration, with eight 25GbE ports, built-in GNSS and time-synchronization technology, and an Intel Xeon 6 SoC with Intel vRAN Boost. The combination of these features makes this model an all-in-one platform for various workloads in the RAN network.

Supermicro is also introducing two new compact systems, the SYS-E201-14AR and SYS-E300-14AR, optimized for IoT and AI inferencing at the far edge. Both systems feature the 15th Gen Intel® Core™ Ultra processors (codenamed Arrow Lake), with up to 24 cores and a built-in NPU (Neural Processing Unit) AI accelerator. Both systems provide two 2.5 GbE network ports, along with HDMI, DisplayPort, and USB connectors, and are optimized for enterprise edge use cases. The SYS-E300 can also be expanded with a single PCIe 5.0 x16 slot, allowing the installation of a PCIe GPU card to boost the system's Edge AI performance in security and surveillance, retail, healthcare, manufacturing, and more.

In the edge data center, Supermicro's edge AI systems can now be fitted with the recently launched Intel Xeon 6700/6500 series processors with P-cores. This processor family is designed for the enterprise data center, striking a strong balance between performance and efficiency and delivering an average 1.4x performance improvement over the previous generation across a wide range of enterprise workloads. Supermicro's 2U Edge AI product family, such as the SYS-212B-FLN2T, combines Intel's new processors with up to six single-width GPU accelerator cards in a short-depth, front-I/O form factor that can be deployed at the enterprise edge as well as in telco and space-constrained environments.

Supermicro at Embedded World

Visit Supermicro at booth #208 in Hall 1 from March 11-13 to learn more about these new systems.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.

Source: PR Newswire | Views: 619
Microchip Launches 32-bit PIC32A Microcontrollers with Integrated High-Performance Analog Peripherals

To meet the growing demand for high-performance, compute-intensive applications across industries, Microchip Technology Inc. has announced the PIC32A family of MCUs. The product further expands the company's robust 32-bit MCU portfolio, delivering cost-effective, high-performance general-purpose solutions for the automotive, industrial, consumer, AI/ML, and medical markets.

The 32-bit PIC32A MCUs feature a 200 MHz CPU and integrate high-speed analog peripherals designed to greatly reduce the need for external components. Features include 12-bit ADCs at up to 40 Msps, 5 ns high-speed comparators, and operational amplifiers with 100 MHz gain bandwidth product (GBWP), suited for intelligent edge sensing. Combined with the high-performance CPU, these features enable multitasking on a single MCU, optimizing system cost and bill of materials (BOM).

In addition, integrated hardware safety and security features include error-correcting code (ECC) on flash and RAM, memory built-in self-test (MBIST), I/O integrity monitoring, clock monitoring, immutable secure boot, and flash access control, designed to provide a secure execution environment for software code in embedded control system applications.

The PIC32A MCUs include a built-in 64-bit floating-point unit (FPU) that efficiently handles data-intensive mathematical operations and supports rapid deployment of model-based designs. These MCUs help developers accelerate execution in compute-intensive applications such as sensor interfacing and data processing.

"The PIC32A family targets intelligent sensing and control applications and expands our existing 32-bit portfolio by balancing cost-effectiveness, performance, and advanced analog peripherals," said Rod Drake, vice president of Microchip's MCU business unit. "High-speed peripherals and other integrated functions reduce the need for specific external components, delivering a high-performance solution while lowering system complexity."

Development tools

The PIC32A MCUs are supported by the MPLAB® XC32 compiler, the MPLAB Harmony embedded software development framework, the dsPIC33A Curiosity Platform Development Board (EV74H48A), and the PIC32AK1216GC41064 General Purpose DIM (EV25Z08A). To support expansion, the Curiosity Platform Development Board provides mikroBUS™ and Xplained Pro interfaces for connecting the Built-In Self Test Xplained Pro (BIST XPRO) extension kit, sensors, and a variety of Click boards™. For a complete list of development tools, see the PIC32A MCU web page.

Availability and pricing

The PIC32A MCU family starts at less than $1 each in volume quantities. For more information or to purchase, contact a Microchip sales representative or an authorized worldwide distributor, or visit Microchip's Purchasing and Client Services website at http://www.microchipdirect.com.
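The headline ADC spec (12-bit resolution at up to 40 Msps) implies a quantization step and a maximum signal bandwidth that follow from simple arithmetic; the 3.3 V reference voltage below is an assumption for illustration, as the release does not state the ADC reference.

```python
# Back-of-the-envelope figures for the PIC32A's 12-bit, 40 Msps ADC.
# VREF = 3.3 V is an assumed reference voltage, not a stated spec.

ADC_BITS = 12
SAMPLE_RATE_HZ = 40e6
VREF = 3.3  # volts (assumed)

codes = 2 ** ADC_BITS                    # 4096 quantization levels
lsb_mv = VREF / codes * 1000             # size of one LSB in millivolts
nyquist_mhz = SAMPLE_RATE_HZ / 2 / 1e6   # max representable signal bandwidth

print(f"{codes} codes, LSB = {lsb_mv:.3f} mV, Nyquist = {nyquist_mhz:.0f} MHz")
```

At these rates a single LSB is under a millivolt, which is why the on-chip op amps (100 MHz GBWP) matter for conditioning sensor signals before conversion.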

Source: APR | Views: 3027
Supermicro Brings Superior Performance and Efficiency to AI at the Edge

New servers drive smarter, faster, and more efficient AI from the data center to the edge, with the entire range of Intel® Xeon® 6 processors supporting over 40% more memory bandwidth and up to 144 CPU cores.

SAN JOSE, Calif. and NUREMBERG, Germany, March 12, 2025 /PRNewswire/ -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is introducing a wide range of new systems that are fully optimized for edge and embedded workloads. Several of these new compact servers, which are based on the latest Intel Xeon 6 SoC processor family (formerly codenamed Granite Rapids-D), empower businesses to optimize real-time AI inferencing and enable smarter applications across many key industries.

Systems for AI and embedded workloads

"As the demand for Edge AI solutions grows, businesses need highly reliable, compact systems that can process data at the edge in real time," said Charles Liang, president and CEO of Supermicro. "At Supermicro, we design and deploy the industry's broadest range of application-optimized systems, from the data center to the far edge. Our latest generation of edge servers delivers advanced AI capabilities for enhanced efficiency and decision-making close to where the data is generated. With up to a 2.5x core count increase at the edge and improved performance per watt and per core, these new Supermicro compact systems are fully optimized for workloads such as Edge AI, telecom, networking, and CDN."

For more information, please visit https://www.supermicro.com/en/products/embedded/servers

Supermicro's new SYS-112D series systems are designed to run high-performance Edge AI solutions and feature the recently launched Intel Xeon 6 SoC with P-cores. These servers deliver increased performance, improved performance per watt, and higher memory bandwidth compared to previous generations of systems.
In addition, the new servers include AI acceleration, Intel® QuickAssist Technology with wireless protocol support, Intel vRAN Boost technology, Intel® Data Streaming Accelerator, and more.

Supermicro's SYS-112D-36C-FN3P features the Intel Xeon 6 SoC with 36 P-cores, dual 100 GbE QSFP28 ports, up to 512GB of DDR5 memory, and one PCIe 5.0 FHFL slot for a GPU or other add-on card. Combined with Intel's onboard Media Acceleration and QuickAssist technologies, this makes the system ideal for Edge AI and media workloads. With a chassis only 399mm (15.7 inches) deep and front I/O access, it can easily be deployed in space-constrained environments or embedded in larger systems. Another server based on the same platform, the SYS-112D-42C-FN8P, provides a more telco-optimized configuration, featuring eight 25GbE ports, built-in GNSS and time-sync technology, and an Intel Xeon 6 SoC model featuring Intel vRAN Boost. The combination of these features makes this model an all-in-one platform for various workloads in the RAN network.

Supermicro is also introducing two new compact systems, the SYS-E201-14AR and SYS-E300-14AR, which are optimized for IoT and AI inferencing at the far edge. Both systems feature the 15th Gen Intel® Core™ Ultra processors (codenamed Arrow Lake), which offer up to 24 cores and an onboard NPU (Neural Processing Unit) AI accelerator. Both systems have two 2.5 GbE network ports and connectors for HDMI, DisplayPort, and USB, and are optimized for enterprise edge use cases. The SYS-E300 can also be expanded with a single PCIe 5.0 x16 slot, allowing for the installation of a PCIe GPU card and enabling the system to expand its performance for Edge AI applications in security & surveillance, retail, healthcare, manufacturing, and more.

In the edge data center, Supermicro's edge AI systems can now be installed with the recently launched Intel Xeon 6700/6500 series processors with P-cores.
This processor group is designed for the enterprise data center, aiming for a strong balance between performance and efficiency and delivering an average 1.4x better performance than the previous generation across a wide range of enterprise workloads. Supermicro's 2U Edge AI product family, such as the SYS-212B-FLN2T, combines Intel's new processor with up to 6 single-width GPU accelerators in a short-depth, front-I/O form factor that can be deployed at the enterprise edge as well as in telco and space-constrained environments.

Supermicro at Embedded World

Visit Supermicro at booth #208 in Hall 1 from March 11-13 to learn more about these new systems.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling, or liquid cooling).
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.  

Source: PR Newswire | Views: 375
Friday, March 28, 2025