Search Results

News search results for "GTC": 120 articles in total; items 49-72 are shown below.
Manycore Tech Makes SpatialLM Open-source to Empower Embodied Intelligence Training

SAN JOSE, Calif., March 19, 2025 /PRNewswire/ -- Manycore Tech Inc., a fast-growing spatial intelligence company, announced that it has made its multimodal spatial comprehension model, SpatialLM, open-source at GTC 2025, significantly lowering barriers for training embodied intelligence.

Photo caption: Pipeline of Manycore Tech's SpatialLM

Manycore Tech is the world's largest spatial design platform, with an average of 86.3 million monthly active visitors. It has served more than 45,500 enterprise customers across over 200 countries and regions.

SpatialLM is a powerful model that teaches robots how to understand the surrounding 3D environment. It provides a basic training framework for practitioners in the embodied intelligence sector, allowing them to fine-tune the model to fit specific scenarios as needed. Developers worldwide can access the open-source SpatialLM on platforms including Hugging Face, GitHub and ModelScope to power their research and development of embodied intelligence.

SpatialLM overcomes the limitations of traditional large language models to enhance machines' ability to understand real-world geometry and spatial relationships, much as humans do. It can automatically generate structured 3D scenes from videos, based on point cloud data extracted from those videos, and it can accurately recognize and understand these scenes and translate them into 3D structural layouts. In the future, SpatialLM will be iterated to handle a wider range of tasks, such as interacting with humans as an intelligent assistant and supporting embodied agents in performing complex tasks in challenging environments.

"We hope to create an embodied intelligence training platform that can support the entire development cycle, from spatial cognition and analysis to machine-environment interactions," Victor Huang, Chairman of Manycore Tech, said in an interview. "By open-sourcing SpatialLM, we aim to facilitate foundational spatial cognition training for embodied intelligent robots. Meanwhile, we hope that SpatialVerse, the spatial intelligence solution released last year, can enable action and interaction training for robots in simulated environments leveraging synthetic data solutions."

SpatialVerse is another highlight for Manycore Tech at this year's GTC. It enables developers to train content-generation models in virtual settings and enhance the cognitive capabilities of intelligent robots, AR/VR systems, and embodied AI. The platform, along with SpatialLM, provides a versatile system of digital simulations for developing embodied intelligence. For instance, SpatialLM transforms real-world data into an abundance of digitally structured scenes. Such scenes can then be generalized into as many as trillions of new scenes using SpatialVerse's synthetic data engine, enriching the training data while maintaining a high-quality dataset, which is key to embodied intelligence.

"I believe we'll soon see explosive growth in embodied intelligence, driven by advancements in computing power, algorithms, engineering and training data. By open-sourcing SpatialLM, we hope to contribute to the development of foundational technologies, help push the boundaries of AI and accelerate the arrival of the singularity," said Mr. Huang.

Manycore Tech has reached cooperation agreements regarding spatial and embodied intelligence training with a number of global embodied intelligence companies, including some leading players in Silicon Valley.
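For developers who want to try the newly open-sourced model, a minimal download sketch using the Hugging Face Hub client is shown below. The repository id and target directory are illustrative assumptions, not taken from the announcement; check Manycore Tech's Hugging Face, GitHub or ModelScope pages for the actual repository name.

```python
# Minimal sketch: fetch the open-sourced SpatialLM weights from Hugging Face.
# The repo_id is a hypothetical placeholder for illustration only; substitute
# the repository name actually published by Manycore Tech.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="manycore-research/SpatialLM",  # hypothetical repo id
    local_dir="./spatiallm",                # arbitrary local target directory
)
print(f"SpatialLM files downloaded to: {local_dir}")
```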
About Manycore Tech Inc.

Founded in 2011 in Hangzhou, Manycore Tech Inc. is a spatial intelligence company powered by artificial intelligence (AI) technologies and purpose-built graphics processing unit (GPU) clusters. Manycore Tech offers a full suite of products, including the spatial design software Kujiale, its international version Coohom, and SpatialVerse, the next-generation spatial intelligence solution for AI development in indoor environments. As of 2024, Manycore Tech has amassed an average of 86.3 million monthly active visitors. It has served over 414,000 individual customers and 45,500 enterprise customers across more than 200 countries and regions.

About SpatialVerse

SpatialVerse is a next-generation spatial intelligence solution for AI development in indoor environments. At its core lies our massive, physically accurate dataset library specifically designed to train sophisticated models through realistic virtual simulations. Users can conduct industrial-scale simulations with multi-sensor compatibility and achieve high-fidelity RTX rendering aligned with NVIDIA Isaac Sim's OpenUSD framework. This technology, which bridges digital simulations and physical reality, accelerates AI development while reducing real-world testing costs.

ASUS Unveils the Latest ASUS AI POD Featuring NVIDIA GB300 NVL72

Major orders secured as ASUS expands leadership in AI infrastructure solutions

SAN JOSE, Calif., March 19, 2025 /PRNewswire/ -- ASUS today joined GTC 2025 (Booth #1523) as a diamond sponsor to showcase the latest ASUS AI POD with the NVIDIA® GB300 NVL72 platform. The company is also proud to announce that it has already garnered substantial order placements, marking a significant milestone in the technology industry.

Photo caption: ASUS XA NB3I-E12, featuring HGX B300 NVL16, delivers breakthrough performance to meet the evolving needs of every data center

At the forefront of AI innovation, ASUS also presents the latest AI servers in the Blackwell and HGX™ family line-up. These include the ASUS XA NB3I-E12 powered by NVIDIA B300 NVL16, the ASUS ESC NB8-E11 with NVIDIA DGX B200 8-GPU, the ASUS ESC N8-E11V with NVIDIA HGX H200, and the ASUS ESC8000A-E13P/ESC8000-E12P, which will support the NVIDIA RTX PRO 6000 Blackwell Server Edition with MGX architecture. ASUS is positioned to provide comprehensive infrastructure solutions in combination with the NVIDIA AI Enterprise and NVIDIA Omniverse platforms, empowering clients to accelerate their time to market.

ASUS AI POD with NVIDIA GB300 NVL72

By integrating the immense power of the NVIDIA GB300 NVL72 server platform, ASUS AI POD offers exceptional processing capabilities, empowering enterprises to tackle massive AI challenges with ease. Built with NVIDIA Blackwell Ultra, GB300 NVL72 leads the new era of AI with optimized compute, increased memory, and high-performance networking, delivering breakthrough performance. It's equipped with 72 NVIDIA Blackwell Ultra GPUs and 36 Grace CPUs in a rack-scale design that delivers increased AI FLOPS and provides up to 40TB of high-speed memory per rack. It also includes networking platform integration with NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet, SXM7 and SOCAMM modules designed for serviceability, a 100% liquid-cooled design, and support for trillion-parameter LLM inference and training with NVIDIA.

ASUS has shown expertise in building NVIDIA GB200 NVL72 infrastructure from the ground up. To achieve peak computing efficiency with a software-defined storage architecture, ASUS is also showing the RS501A-E12-RS12U. This powerful SDS server effectively reduces the latency of data training and inferencing, and complements NVIDIA GB200 NVL72. ASUS offers an extensive service scope, from hardware to cloud-based applications, covering architecture design, advanced cooling solutions, rack installation, large-scale validation and deployment, and AI platforms, harnessing its extensive expertise to help clients achieve AI infrastructure excellence.

Kaustubh Sanghani, vice president of GPU products at NVIDIA, commented: "NVIDIA is working with ASUS to drive the next wave of innovation in data centers. Leading ASUS servers combined with the Blackwell Ultra platform will accelerate training and inference, enabling enterprises to unlock new possibilities in areas such as AI reasoning and agentic AI."

GPU servers for heavy generative AI workloads

ASUS will also showcase a series of NVIDIA-certified servers, supporting applications and workflows built with the NVIDIA AI Enterprise and Omniverse platforms. The ASUS 10U ESC NB8-E11 is equipped with the NVIDIA Blackwell HGX B200 8-GPU for unmatched AI performance.
The ASUS XA NB3I-E12 features HGX B300 NVL16, with increased AI FLOPS, 2.3TB of HBM3e memory, and networking platform integration with NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet; Blackwell Ultra delivers breakthrough performance for AI reasoning, agentic AI and video inference applications to meet the evolving needs of every data center. Finally, the 7U ASUS ESC N8-E11V dual-socket server is powered by eight NVIDIA H200 GPUs, supports both air-cooled and liquid-cooled options, and is engineered to provide effective cooling and innovative components.

Scalable servers to master AI inference optimization

ASUS also presents server and edge AI options for AI inferencing: the ASUS ESC8000 series embedded with the latest NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. The ASUS ESC8000-E12P is a high-density 4U server for eight dual-slot high-end NVIDIA H200 GPUs and supports the software suites of NVIDIA AI Enterprise and Omniverse. It is also fully compatible with NVIDIA MGX architecture to ensure flexible scalability and fast, large-scale deployment. Additionally, the ASUS ESC8000A-E13P, a 4U NVIDIA MGX server, supports eight dual-slot NVIDIA H200 GPUs and provides seamless integration, optimization and scalability for modern data centers and dynamic IT environments.

Groundbreaking AI supercomputer, ASUS Ascent GX10

ASUS today also announces its groundbreaking AI supercomputer, the ASUS Ascent GX10, in a compact package. Powered by the state-of-the-art NVIDIA GB10 Grace Blackwell Superchip, it delivers 1,000 AI TOPS of performance, making it ideal for demanding workloads. The Ascent GX10 is equipped with a Blackwell GPU, a 20-core Arm CPU, and 128GB of memory, supporting AI models with up to 200 billion parameters. This revolutionary device places the formidable capabilities of a petaflop-scale AI supercomputer directly onto the desks of developers, AI researchers and data scientists around the globe.

ASUS IoT showcases its edge AI computers at GTC, featuring the PE2100N with NVIDIA Jetson AGX Orin™, delivering 275 TOPS for generative AI and robotics. The PE8000G supports dual 450W NVIDIA RTX™ GPUs, excelling in real-time perception AI. With rugged designs and wide operating temperature ranges, both are ideal for computer vision, autonomous vehicles and intelligent video analytics.

AVAILABILITY & PRICING

ASUS AI infrastructure solutions and servers are available worldwide. Please contact your local ASUS representative for further information.

Supermicro Adds Portfolio for Next Wave of AI with NVIDIA Blackwell Ultra Solutions, Featuring NVIDIA HGX™ B300 NVL16 and GB300 NVL72

Air- and Liquid-Cooled Optimized Solutions with Enhanced AI FLOPs and HBM3e Capacity, with up to 800 Gb/s Direct-to-GPU Networking Performance

SAN JOSE, Calif., March 19, 2025 /PRNewswire/ -- GTC 2025 Conference -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new systems and rack solutions powered by the NVIDIA Blackwell Ultra platform, featuring the NVIDIA HGX B300 NVL16 and NVIDIA GB300 NVL72 platforms. Supermicro and NVIDIA's new AI solutions strengthen leadership in AI by delivering breakthrough performance for the most compute-intensive AI workloads, including AI reasoning, agentic AI, and video inference applications.

Photo caption: NVIDIA Supermicro AI Solutions B300

"At Supermicro, we are excited to continue our long-standing partnership with NVIDIA to bring the latest AI technology to market with the NVIDIA Blackwell Ultra platforms," said Charles Liang, president and CEO, Supermicro. "Our Data Center Building Block Solutions® approach has streamlined the development of new air- and liquid-cooled systems, optimized to the thermals and internal topology of the NVIDIA HGX B300 NVL16 and GB300 NVL72. Our advanced liquid-cooling solution delivers exceptional thermal efficiency, operating with 40°C warm water in our 8-node rack configuration, or 35°C warm water in a double-density 16-node rack configuration, leveraging our latest CDUs. This innovative solution reduces power consumption by up to 40% while conserving water resources, providing both environmental and operational cost benefits for enterprise data centers."

For more information, please visit https://www.supermicro.com/en/accelerators/nvidia

NVIDIA's Blackwell Ultra platform is built to conquer the most demanding cluster-scale AI applications by overcoming performance bottlenecks caused by limited GPU memory capacity and network bandwidth. NVIDIA Blackwell Ultra delivers an unprecedented 288GB of HBM3e memory per GPU, delivering drastic improvements in AI FLOPS for training and inference of the largest AI models. Networking platform integration with NVIDIA Quantum-X800 InfiniBand and Spectrum-X™ Ethernet doubles the compute fabric bandwidth, up to 800 Gb/s. Supermicro integrates NVIDIA Blackwell Ultra into two types of solutions: Supermicro NVIDIA HGX B300 NVL16 systems, designed for every data center, and the NVIDIA GB300 NVL72, equipped with NVIDIA's next-generation Grace Blackwell architecture.

Supermicro NVIDIA HGX B300 NVL16 system

Supermicro NVIDIA HGX systems are the industry-standard building blocks for AI training clusters, with an 8-GPU NVIDIA NVLink™ domain and a 1:1 GPU-to-NIC ratio for high-performance clusters. Supermicro's new NVIDIA HGX B300 NVL16 system builds upon this proven architecture with thermal design advancements in both liquid-cooled and air-cooled versions. For B300 NVL16, Supermicro introduces a brand-new 8U platform to maximize the output of the NVIDIA HGX B300 NVL16 board. Each GPU is connected in a 1.8TB/s 16-GPU NVLink domain, providing a massive 2.3TB of HBM3e per system. Supermicro NVIDIA HGX B300 NVL16 improves upon performance in the network domain by integrating 8 NVIDIA ConnectX®-8 NICs directly into the baseboard to support 800 Gb/s node-to-node speeds via NVIDIA Quantum-X800 InfiniBand or Spectrum-X™ Ethernet.
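The memory figures quoted in this release can be sanity-checked from the stated 288GB of HBM3e per Blackwell Ultra GPU. The sketch below is a back-of-the-envelope check only; the decimal units (1 TB = 1000 GB) and the per-listed-GPU breakdown for the NVL16 system are assumptions, not figures from the release.

```python
# Back-of-the-envelope check of the HBM3e capacities quoted in this release,
# assuming 288 GB of HBM3e per Blackwell Ultra GPU and decimal units (1 TB = 1000 GB).
HBM3E_PER_BLACKWELL_ULTRA_GB = 288

# GB300 NVL72 rack: 72 Blackwell Ultra GPUs per rack.
rack_hbm_tb = 72 * HBM3E_PER_BLACKWELL_ULTRA_GB / 1000
print(f"GB300 NVL72 rack HBM3e: {rack_hbm_tb:.1f} TB")  # ~20.7 TB, consistent with the
                                                        # "over 20TB" figure quoted in the
                                                        # GB300 NVL72 section below

# HGX B300 NVL16 system: quoted at 2.3 TB across a 16-GPU NVLink domain,
# i.e. roughly 144 GB per listed GPU (the NVL16 and NVL72 products appear to
# count GPUs differently; this split is an inference, not a stated figure).
per_listed_gpu_gb = 2.3 * 1000 / 16
print(f"HGX B300 NVL16: ~{per_listed_gpu_gb:.0f} GB HBM3e per listed GPU")
```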
Supermicro NVIDIA GB300 NVL72

The NVIDIA GB300 NVL72 integrates 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace™ CPUs in a single rack with exascale computing capacity, featuring upgraded HBM3e memory capacity for over 20TB of HBM3e memory interconnected in a 1.8TB/s 72-GPU NVLink domain. The NVIDIA ConnectX®-8 SuperNIC provides 800Gb/s speeds for both GPU-to-NIC and NIC-to-network communication, drastically improving cluster-level performance of the AI compute fabric.

Liquid-Cooled AI Data Center Building Block Solutions

Expertise in liquid cooling, data center deployment, and a building-block approach positions Supermicro to deliver NVIDIA Blackwell Ultra with industry-leading time-to-deployment. Supermicro offers a complete liquid-cooling portfolio, including newly developed direct-to-chip cold plates, a 250kW in-rack CDU, and cooling towers. Supermicro's on-site rack deployment helps enterprises build data centers from the ground up, including the planning, design, power-up, validation, testing, installation and configuration of racks, servers, switches and other networking equipment to meet the organization's specific needs.

8U Supermicro NVIDIA HGX B300 NVL16 system – Designed for every data center with a streamlined, thermally optimized chassis and 2.3TB of HBM3e memory per system.

NVIDIA GB300 NVL72 – Exascale AI supercomputer in a single rack with essentially double the HBM3e memory capacity and networking speeds over its predecessor.

Supermicro at GTC 2025

GTC visitors can find Supermicro in San Jose, CA from March 17-21, 2025. Visit us at booth #1115 to see the X14/H14 B200, B300, and GB300 systems on display along with our rack-scale liquid-cooled solutions.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.

Supermicro Expands Enterprise AI Portfolio of over 100 GPU-Optimized Systems Supporting the Upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA H200 NVL Platform

With a Broad Range of Form Factors, Supermicro's Expanded Portfolio of PCIe GPU Systems Can Scale to the Most Demanding Data Center Requirements, from up to 10 Double-Width GPUs to Low-Power Intelligent Edge Systems, Providing Maximum Flexibility and Optimization for Enterprise AI LLM-Inference Workloads

SAN JOSE, Calif., March 19, 2025 /PRNewswire/ -- GTC 2025 Conference – Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today announced support for the new NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPUs on a range of workload-optimized GPU servers and workstations. Specifically optimized for the NVIDIA Blackwell generation of PCIe GPUs, the broad range of Supermicro servers will enable more enterprises to leverage accelerated computing for LLM inference and fine-tuning, agentic AI, visualization, graphics and rendering, and virtualization. Many Supermicro GPU-optimized systems are NVIDIA-Certified, guaranteeing compatibility and support for NVIDIA AI Enterprise to simplify the process of developing and deploying production AI.

Photo caption: Supermicro GPU for Enterprise AI

"Supermicro leads the industry with its broad portfolio of application-optimized GPU servers that can be deployed in a wide range of enterprise environments with very short lead times," said Charles Liang, president and CEO of Supermicro. "Our support for the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU adds yet another dimension of performance and flexibility for customers looking to deploy the latest in accelerated computing capabilities from the data center to the intelligent edge. Supermicro's broad range of PCIe GPU-optimized products also supports NVIDIA H200 NVL in 2-way and 4-way NVIDIA NVLink™ configurations to maximize inference performance for today's state-of-the-art AI models, as well as accelerating HPC workloads."

For more information, please visit https://www.supermicro.com/en/accelerators/nvidia/pcie-gpu.

The NVIDIA RTX PRO 6000 Blackwell Server Edition is a universal GPU, optimized for both AI and graphics workloads. The new GPU features significantly enhanced performance compared to the prior-generation NVIDIA L40S, including faster GDDR7 memory and 2x more memory capacity, PCIe 5.0 interface support to allow faster GPU-CPU communication, and new Multi-Instance GPU (MIG) capabilities that allow a single GPU to be shared across up to 4 fully isolated instances. In addition, Supermicro GPU-optimized systems are designed to also support NVIDIA SuperNICs such as NVIDIA BlueField®-3 and NVIDIA ConnectX®-8 for the best infrastructure scaling and GPU clustering with NVIDIA Quantum InfiniBand and NVIDIA Spectrum Ethernet.

"The NVIDIA RTX PRO 6000 Blackwell Server Edition is the ultimate data center GPU for AI and visual computing, offering unprecedented acceleration for the most demanding workloads," said Bob Pette, Vice President of Enterprise Platforms at NVIDIA. "The NVIDIA RTX PRO 6000 Blackwell Server Edition expands Supermicro's broad lineup of NVIDIA-accelerated systems to speed virtually every workload across AI development and inference."

In addition to the enterprise-grade NVIDIA RTX PRO 6000 Blackwell Server Edition, selected Supermicro workstations will also support the new NVIDIA RTX PRO 6000 Blackwell Workstation Edition and NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition, the most powerful professional-grade GPUs for AI processing and development, 3D rendering, media, and content creation workloads.
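The MIG capability mentioned above is normally configured through nvidia-smi. The sketch below drives that workflow from Python as a rough illustration only: it assumes root access and a driver/GPU combination that exposes MIG, and the profile ID in the final step is a placeholder to be replaced with one reported by `nvidia-smi mig -lgip`.

```python
# Rough sketch: partition a MIG-capable GPU into isolated instances via nvidia-smi.
# Assumes root privileges and a MIG-capable driver/GPU; the exact instance profiles
# available on the RTX PRO 6000 Blackwell Server Edition are not specified here,
# so list them first and substitute a real profile ID before creating instances.
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Enable MIG mode on GPU 0 (may require a GPU reset to take effect).
print(run(["nvidia-smi", "-i", "0", "-mig", "1"]))

# 2. List the GPU instance profiles this driver exposes for the GPU.
print(run(["nvidia-smi", "mig", "-lgip"]))

# 3. Create GPU instances (and default compute instances) from a chosen profile;
#    "PROFILE_ID" is a placeholder taken from the listing above.
# print(run(["nvidia-smi", "mig", "-cgi", "PROFILE_ID", "-C"]))
```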
Supermicro system families supporting the new GPUs include the following:

NVIDIA RTX PRO 6000 Blackwell Server Edition

5U PCIe GPU – Highly flexible, thermally optimized architectures designed to support up to 10 GPUs in a single chassis with air cooling. Systems feature dual-socket CPUs and PCIe 5.0 expansion to facilitate high-speed networking. Key workloads include AI inference and fine-tuning, 3D rendering, digital twin, scientific simulation, and cloud gaming.

NVIDIA MGX™ – GPU-optimized systems based on the NVIDIA modular reference design, supporting up to 4 GPUs in 2U or 8 GPUs in 4U to support industrial automation, scientific modeling, HPC, and AI inference applications.

3U Edge-optimized PCIe GPU – Compact form factor designed for edge data center deployments and supporting up to 8 double-width or 19 single-width GPUs per system. Key workloads include EDA, scientific modeling, and edge AI inferencing.

SuperBlade® – Density-optimized and energy-efficient multi-node architecture designed for maximum rack density, with up to 120 GPUs per rack.

Rackmount Workstation – Workstation performance and flexibility in a rackmount form factor, offering increased density and security for organizations looking to utilize centralized resources.

NVIDIA RTX PRO 6000 Blackwell Workstation Edition and NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition

Tower Workstation – A range of desktop and under-desk form factors designed for AI, 3D media, and simulation applications, ideal for AI developers, creative studios, educational institutions, field offices, and laboratories.

Supporting other currently available GPUs, including H200/H100 NVL, L40S, L4, and more:

4U GPU-optimized – Up to 10 double-width GPUs with single-root and dual-root configurations available, as well as tower GPU servers supporting up to 4 double-width GPUs.

1U and 2U MGX™ – Compact GPU-optimized systems based on NVIDIA's modular reference design with up to 4 double-width GPUs.

1U and 2U rackmount platforms – Flagship-performance Hyper and Hyper-E, and Cloud Data Center optimized CloudDC, supporting up to 4 double-width or 8 single-width GPUs.

Multi-processor – 4- and 8-socket architectures designed for maximum memory and I/O density, with up to 2 double-width GPUs in 2U or 12 double-width GPUs in 6U.

Edge – Compact edge box PCs supporting 1 double-width GPU or 2 single-width GPUs.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing).
The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling or liquid cooling). Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.  

Cisco to Deliver Secure AI Infrastructure with NVIDIA

Cisco Secure AI Factory with NVIDIA breaks new ground in AI infrastructure and security while accelerating and simplifying enterprise AI adoption

News Summary:

Offering will empower customers to build and secure data centers to develop and run AI workloads.

The Cisco Secure AI Factory with NVIDIA will embed security within all layers, from the application, to the workload, to the infrastructure, using solutions like Cisco AI Defense and Hybrid Mesh Firewall.

SAN JOSE, Calif., March 19, 2025 /PRNewswire/ -- GTC -- Cisco [NASDAQ: CSCO] today unveiled an AI factory architecture with NVIDIA that puts security at its core. This collaboration with NVIDIA builds on the expanded partnership announced last month, and the companies have moved swiftly to provide validated reference architectures today. Together, the companies are developing the Cisco Secure AI Factory with NVIDIA to dramatically simplify how enterprises deploy, manage, and secure AI infrastructure at any scale.

Photo caption: Cisco and NVIDIA

"AI can unlock groundbreaking opportunities for the enterprise," said Chuck Robbins, Chair and CEO, Cisco. "To achieve this, the integration of networking and security is essential. Cisco and NVIDIA's trusted, innovative solutions empower our customers to harness AI's full potential simply and securely."

"AI factories are transforming every industry, and security must be built into every layer to protect data, applications and infrastructure," said Jensen Huang, founder and CEO, NVIDIA. "Together, NVIDIA and Cisco are creating the blueprint for secure AI—giving enterprises the foundation they need to confidently scale AI while safeguarding their most valuable assets."

Developing and delivering AI applications requires high-performing, scalable infrastructure and an AI software toolchain. Securing this infrastructure and AI software requires a new architecture – one that embeds security at all layers of the AI stack and automatically expands and adapts as the underlying infrastructure changes. Cisco and NVIDIA's partnership on the NVIDIA Spectrum-X™ Ethernet networking platform provides the foundation for the Cisco Secure AI Factory with NVIDIA. Cisco is integrating security solutions like Cisco Hypershield, to help protect AI workloads, and Cisco AI Defense, to help protect the development, deployment, and use of AI models and applications. Together, Cisco and NVIDIA will provide customers with the flexibility to design infrastructure for their specific AI needs without sacrificing operational simplicity or security.

Building a Secure AI Factory

AI factories – data centers purpose-built to power AI workloads – are designed to be more modular, scalable and agile, but organizations must also look beyond raw compute power. AI factories must address new and complex security challenges. The recently published Cisco State of AI Security report analyzes dozens of AI-specific threat vectors and over 700 pieces of AI-related legislation to highlight key developments from a rapidly evolving AI security landscape. Organizations that strategically address both their AI infrastructure and security challenges simultaneously will be more agile, scale faster, and derive business value more quickly. Cisco Secure AI Factory with NVIDIA is expected to build on the companies' unique ability to offer flexible AI networking and full-stack technology options that leverage the planned joint architecture.
The partnership will bring together technologies from Cisco, NVIDIA, and their ecosystem partners into a secure AI factory architecture for enterprise customers, including:

Compute: Cisco UCS AI servers based on NVIDIA HGX and NVIDIA MGX for accelerated computing.

Networking: Cisco Nexus Hyperfabric AI and Nexus networking solutions, powered by Silicon One and NVIDIA Spectrum-X Ethernet networking.

Storage: High-performance storage from certified partners Pure Storage, Hitachi Vantara, NetApp, and VAST Data.

Software: NVIDIA AI Enterprise software platform to streamline the development and deployment of production-grade agentic AI workloads.

The Cisco Secure AI Factory with NVIDIA includes security at all layers:

Securing the infrastructure: Cisco Hybrid Mesh Firewall provides unified security management and consistent policy across multiple enforcement points, including network switches, traditional firewalls, and workload agents. This integrated approach ensures pervasive and consistent security, ranging from deep packet inspection to wide infrastructure coverage, detecting, blocking and containing adversaries. Cisco Hypershield (part of Hybrid Mesh Firewall) will, in the future, extend pervasive, zero-trust security enforcement to every AI node by integrating with NVIDIA BlueField-3 DPUs.

Securing the workload: Cisco Hypershield prevents adversary lateral movement and provides proactive vulnerability mitigation without the need for patching, all from a single management interface. By monitoring and controlling process executions, file access, and network activities, Hypershield delivers deep visibility and surgical runtime enforcement within AI workloads. Future enhancements will further strengthen workload protection through integration with NVIDIA BlueField-3's DOCA AppShield for real-time workload threat detection in AI-focused virtual machines and containers.

Securing the AI application: Cisco AI Defense empowers security and AI teams with comprehensive tools to protect AI applications from safety risks (e.g., off-policy, toxic behavior) and security risks (e.g., prompt injection, data privacy) across the development lifecycle. AI Defense integrates into existing CI/CD workflows to provide automated vulnerability testing and a common layer of runtime security across any number of models and applications. Additionally, AI Defense helps companies align with AI security standards, including NIST, MITRE ATLAS, and OWASP LLM Top 10, with a single integration. Future enhancements include integration with NVIDIA AI Enterprise to streamline AI security workflows.

Cisco and NVIDIA each bring a unique understanding of customer AI infrastructure needs, and by combining their insights can offer flexible deployment models alongside proven reference architectures. The Secure AI Factory will provide enterprise customers with scalable, high-performance AI infrastructure that supports customers at any stage of their journey and embeds security throughout.

Cisco Secure AI Factory with NVIDIA will have flexible deployment options, including:

Ready-to-deploy: Utilizing Cisco Nexus Hyperfabric AI along with Cisco's security portfolio and NVIDIA technology, customers can deploy a vertically integrated AI solution that automates and simplifies the secure AI factory lifecycle from design to deployment and ongoing monitoring.
Build-your-own: Featuring customizable modular components from Cisco, NVIDIA, and the companies' storage ecosystem partners, customers can incorporate their current infrastructure and build solutions that are designed precisely for their unique environments.

"In today's fast-moving market, businesses need more than just technology—they need end-to-end solutions that address their most pressing challenges. I see Cisco and NVIDIA combining their strengths to deliver integrated solutions that I believe will drive innovation, simplify deployment, and streamline operations," said Patrick Moorhead, Founder, CEO and Chief Analyst, Moor Insights & Strategy. "AI isn't easy, but the combination of the two could be an 'easy button' for AI infrastructure. By making AI infrastructure easier to adopt and manage, they could empower enterprises to accelerate digital transformation and achieve their strategic goals with more confidence."

Cisco and NVIDIA: The journey to a validated and unified architecture

Moving quickly is crucial to meet today's demand for AI infrastructure, and Cisco and NVIDIA have made progress as part of the collaboration announced in February 2025. Cisco has developed new reference architectures, with deployment options for Cisco Nexus Hyperfabric AI or Cisco Nexus 9000 Series Switches, validated and based on the NVIDIA Enterprise Reference Architecture for HGX H200 and Spectrum-X.

AVAILABILITY

Solutions based on the Cisco Secure AI Factory with NVIDIA architecture are expected to be available for purchase before the end of calendar year 2025. Many of the individual technology components included in the architecture are available today.

ADDITIONAL RESOURCES

Executive Blog Post: Embracing the AI Era: Cisco Secure AI Factory with NVIDIA, by Jeetu Patel, Cisco's Executive Vice President and Chief Product Officer.

ABOUT CISCO

Cisco (NASDAQ: CSCO) is the worldwide technology leader that is revolutionizing the way organizations connect and protect in the AI era. For more than 40 years, Cisco has securely connected the world. With its industry-leading AI-powered solutions and services, Cisco enables its customers, partners and communities to unlock innovation, enhance productivity and strengthen digital resilience. With purpose at its core, Cisco remains committed to creating a more connected and inclusive future for all. Discover more on The Newsroom and follow us on X at @Cisco.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word 'partner' does not imply a partnership relationship between Cisco and any other company. Products and features described in this release that are not currently available remain in varying stages of development and will be offered on a when-and-if-available basis. The delivery timeline of any future products and features is subject to change at the discretion of Cisco and its partners.

Pantheon Lab Introduces Enhanced Digital Human with Emotional Intelligence

Transforming Human-Machine Interaction From Functional to Emotionally Aware Interfaces

HONG KONG, March 19, 2025 /PRNewswire/ -- Pantheon Lab, a leader in digital human and agentic AI technologies and a proud member of the NVIDIA Inception program, today launched its latest Metahuman Interface (MHI), featuring advanced emotional intelligence. The Metahuman Interface builds on the company's expertise in creating lifelike digital humans, now enhanced with the ability to detect, interpret, and respond to human emotions in real time. Pantheon Lab's MHI transforms traditional, static touchpoints, such as ordering kiosks, into dynamic, voice-enabled interfaces that engage users with human-like empathy and understanding.

Attendees at NVIDIA GTC 2025 can experience this technology at Booth 3004 at the San Jose McEnery Convention Center from March 18 to 21, where Pantheon Lab will showcase live demonstrations of its emotionally intelligent digital humans.

Pantheon Lab's mission is to integrate agentic AI – intelligent systems capable of autonomous, goal-driven actions – into everyday life. By combining advanced AI, large language models (LLMs), and emotional intelligence, the new solution enables machines to deliver more personalized and engaging interactions across industries, including retail, healthcare, education, and public services.

"We're committed to advancing human-machine interaction by making it not only functional but also emotionally intelligent and deeply human," said Ivan Lau, Co-Founder & CEO of Pantheon Lab. "The Metahuman Interface represents a significant step forward in our journey to integrate Agentic AI into everyday life, empowering businesses to connect with people on a deeper, more meaningful level. As part of our 'AI for Good' initiative, we focus on creating AI solutions that are ethical, inclusive, and transformative – enhancing industries from customer service and education to healthcare and public services, ensuring technology works for humanity."

Key Features of the Upgraded Metahuman Interface:

Emotional Intelligence: Detects and responds to human emotions in real time, creating empathetic and tailored interactions.

Voice-Driven Interface: Allows users to interact naturally through voice, eliminating the need for touchscreens or buttons.

Human-Like Appearance and Behavior: Looks, sounds, and behaves like a real person, fostering trust and engagement.

Scalability: Adapts across various industries, from retail and hospitality to healthcare and customer service, both online and offline.

Pantheon Lab works with customers such as Toyota, KFC Taiwan, Hong Kong Airport Authority, SBS Transit (Singapore), and National Gallery Singapore. Pantheon Lab invites attendees to visit Booth 3004 at NVIDIA GTC to explore the capabilities of its Metahuman Interface, which is now available for selected industries and use cases. For more information, visit www.PantheonLab.ai

Here is one of Pantheon Lab's digital humans, powered by emotional intelligence and agentic AI, proactively sensing patient concerns and autonomously scheduling appointments, delivering smoother, more empathetic healthcare interactions.

Demo Video 1: Experience the next evolution of AI with Pantheon Lab's Digital Human powered by Emotional Intelligence & Agentic AI. Watch how our AI-powered digital human seamlessly assists users in real-world scenarios, from navigating public transport to scheduling healthcare appointments, all with human-like intuition and empathy. See the future of AI in action.
Demo Video 2: Experience Pantheon Lab's platform effortlessly transitioning between digital humans in real time, enabling continuous, engaging interactions across diverse use cases.

About Pantheon Lab

Pantheon Lab is a leading innovator in digital human and agentic AI technologies, shaping the future of human-machine interaction. By integrating cutting-edge AI with emotional intelligence, Pantheon Lab empowers businesses to deliver hyper-personalized, human-like experiences across all touchpoints. As a proud member of the NVIDIA Inception program, Pantheon Lab leverages NVIDIA's world-class technologies to drive innovation and scalability. Committed to its "AI for Good" initiative, Pantheon Lab blends global innovation with local relevance ("GLOCAL"), ensuring its solutions create meaningful impact and resonate across different markets. For more information, visit www.pantheonlab.ai.
