
Search Results

News search results for "GPU": 746 articles in total, showing items 121 - 144.
ARBOR Technology Presents Innovative Edge AI Computing at Japan IT Week Spring 2025

TOKYO, March 26, 2025 /PRNewswire/ -- ARBOR Technology, a specialist in industrial automation solutions, is excited to announce its participation in Japan IT Week Spring 2025, where it will unveil cutting-edge technologies, including award-winning innovations and AI-driven solutions leveraging the latest platform advancements.

Robust and High-performance Industrial Computing Solutions

NUC Tiny, Ultra Mighty: The IEC-3714, featuring 34 TOPS of AI computing power driven by Intel® Core™ Ultra processors and Intel® Arc™ Graphics, integrates CPU, GPU, and NPU capabilities, offering unparalleled performance and efficiency in a compact design. ARBOR is showcasing its application in smart retail, where camera data is analyzed by AI models for age, gender, and object recognition. It enables further integration with inventory management systems, allowing real-time decisions on promotions and restocking based on weather, season, and other factors.

Edge AI, Instant Decisions: Designed for edge AI, the FPC-5211 excels at handling the demanding computational requirements of modern smart city applications. It not only processes large volumes of data from various sensors but also performs AI inference at the edge, minimizing latency and enabling real-time decision-making. The innovative FPC-5211 received the prestigious Best in Show award from Embedded Computing Design at Embedded World 2025 in Nuremberg, Germany.

Enhanced Accuracy, More Realistic Generation: The AEC-6000 Series, powered by NVIDIA Jetson AGX Orin, delivers exceptional AI computing performance, enabling smooth execution of complex VLM models for real-time, high-efficiency image and language processing. Its edge computing capabilities reduce latency and enhance security, which is crucial for VLM applications requiring immediate responses, such as autonomous driving, smart factories, and drones.
A Stackable and Compact Design: The ARES-1983H series, featuring DIN-rail mounting and the EzIO modular design, offers exceptional flexibility and customization. ARBOR Technology's EzIO enables rapid, tailored I/O solutions for diverse industrial needs. This series has been recognized as Product of the Year by Industrial Production.

"At Japan IT Week 2025, ARBOR will demonstrate how our cutting-edge technologies enable businesses to maintain a competitive edge," stated Ivan Huang, Vice President of APAC at ARBOR Technology. "Visit our booth to explore how ARBOR's innovative products can optimize your projects and propel your business growth."

Japan IT Week 2025
Tokyo Big Sight, Japan
Date: Apr. 23-25 | Booth: Hall East, 18-25

Contact:
Rex Pan, Assistant Marketing Manager
rexpan@arbor.com.tw
https://arbor-technology.com

Source: PR Newswire
realme 14 Series 5G Debuts World's First Snapdragon 6 Gen 4, Arriving on 15 April

KUALA LUMPUR, Malaysia, March 25, 2025 /PRNewswire/ -- realme, a tech brand that better understands young users, is set to launch the highly anticipated realme 14 Series 5G on Tuesday, 15 April 2025 at Centre Court, IOI City Mall. As the revolutionary midrange performance benchmark, the realme 14 5G is the world's first smartphone powered by the Snapdragon 6 Gen 4, delivering unrivaled gaming performance that exceeds expectations, staying true to the realme Number Series' mission of setting the midrange performance standard.

The Mecha-designed realme 14 5G draws its power from the world's first Snapdragon 6 Gen 4, achieving an industry-leading AnTuTu benchmark score of over 780,000, the highest in its segment. The 4nm chipset delivers a 15% boost in CPU performance and a 35% improvement in CPU and GPU energy efficiency. Gamers can look forward to lag-free gameplay at a 120FPS setting in major titles such as Mobile Legends: Bang Bang, Honor of Kings, and Free Fire. At a 90FPS setting, the performance-oriented smartphone can maintain a stable frame rate for more than 10 hours. With the realme 14 5G, gamers can expect ultra-smooth gameplay, ultra-responsive controls, and peak efficiency like never before.

The grand launch of the realme 14 Series 5G will be streamed live on realme's official Facebook, YouTube, and TikTok.

About realme: realme is a global consumer technology company disrupting the smartphone market by making cutting-edge technologies more accessible. It provides a range of smartphones and lifestyle technology devices with premium specs, quality, and trend-setting designs to young consumers. Established by Sky Li in 2018, realme became one of the top five smartphone players in 30 markets globally within just three years. It has entered markets worldwide, including China, Southeast Asia, South Asia, Europe, the Middle East, Latin America, and Africa, and has a global user base of over 200 million. 2024 is realme's rebranding year, marked by its new slogan, "Make it real." Under the new brand spirit, realme will focus more on young users and bring real, clear, and tangible benefits to their lives. For more information, please visit https://www.realme.com/my/.

Source: PR Newswire
Supermicro Expands Its Portfolio for the New AI Wave with NVIDIA Blackwell Ultra Solutions Featuring NVIDIA HGX™ B300 NVL16 and GB300 NVL72

Air-cooled and liquid-cooled optimized solutions deliver higher AI FLOPS and HBM3e memory capacity, with up to 800 Gb/s direct-to-GPU connectivity.

SAN JOSE, Calif., March 25, 2025 /PRNewswire/ -- Supermicro, Inc. (NASDAQ: SMCI), a total IT solution provider for AI, cloud, storage, and 5G/edge, announced new systems and rack solutions featuring the NVIDIA Blackwell Ultra platform, supporting NVIDIA HGX B300 NVL16 and NVIDIA GB300 NVL72. The new AI solutions from Supermicro and NVIDIA strengthen their leadership in AI and deliver breakthrough performance for the most compute-intensive AI workloads, including AI reasoning, agentic AI, and video inference applications.

"Supermicro is excited to continue our long-standing collaboration with NVIDIA and bring the latest AI technology to market with the NVIDIA Blackwell Ultra platform," said Charles Liang, president and CEO of Supermicro. "Our Data Center Building Block Solutions® streamline the development of new-generation air-cooled and liquid-cooled systems, with thermals and internal topology optimized for NVIDIA HGX B300 NVL16 and GB300 NVL72. Our advanced liquid-cooling solutions deliver exceptional thermal efficiency, enabling 8-node rack configurations to run with 40°C warm water and double-density 16-node rack configurations with 35°C warm water, taking full advantage of our latest coolant distribution units (CDUs). This innovative solution reduces power consumption by up to 40% while conserving water, delivering environmental and operating-cost benefits for enterprise data centers."

For more information, please visit: https://www.supermicro.com/en/accelerators/nvidia

The NVIDIA Blackwell Ultra platform addresses performance bottlenecks caused by GPU memory capacity and network bandwidth limits for the most demanding cluster-scale AI applications. With an unprecedented 288GB of HBM3e memory per GPU, NVIDIA Blackwell Ultra substantially boosts AI FLOPS for training and inference of the largest AI models. Integrated with the NVIDIA Quantum-X800 InfiniBand and Spectrum-X™ Ethernet platforms, it doubles compute-fabric bandwidth to up to 800 Gb/s.

Supermicro integrates NVIDIA Blackwell Ultra into two solutions: the Supermicro NVIDIA HGX B300 NVL16 system for a wide range of data centers, and the NVIDIA GB300 NVL72 built on the next-generation NVIDIA Grace Blackwell architecture.

Supermicro NVIDIA HGX B300 NVL16 systems

Supermicro NVIDIA HGX systems are the industry-standard building blocks for AI training clusters, featuring an NVIDIA NVLink™ domain interconnecting 8 GPUs and a 1:1 GPU-to-NIC ratio suited to high-performance computing clusters. The new Supermicro NVIDIA HGX B300 NVL16 system builds on this proven architecture and comes in two thermally optimized models, liquid-cooled and air-cooled.

Supermicro introduces a new 8U platform for the B300 NVL16 that maximizes the output of the NVIDIA HGX B300 NVL16 baseboard. Each GPU is interconnected through a 1.8TB/s 16-GPU NVLink domain, providing 2.3TB of HBM3e capacity per system. The Supermicro NVIDIA HGX B300 NVL16 integrates 8 NVIDIA ConnectX®-8 NICs directly onto the baseboard, dramatically improving network-domain performance and supporting 800 Gb/s node-to-node speeds via NVIDIA Quantum-X800 InfiniBand or Spectrum-X™ Ethernet.

Supermicro NVIDIA GB300 NVL72

The NVIDIA GB300 NVL72 delivers exascale computing in a single rack with 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace™ CPUs, featuring upgraded HBM3e memory of more than 20 TB, interconnected through a 1.8TB/s 72-GPU NVLink domain. NVIDIA ConnectX®-8 SuperNICs provide 800Gb/s speeds for GPU-to-NIC and NIC-to-network communication, dramatically improving cluster-level performance of the AI compute fabric.

Liquid-cooled AI data center building block solutions

Supermicro leverages its expertise in liquid cooling, data center deployment, and building-block technology to deliver NVIDIA Blackwell Ultra with industry-leading deployment speed. Supermicro also offers a complete liquid-cooling portfolio, including newly developed direct-to-chip cold plates, 250kW in-rack coolant distribution units, and cooling towers. Supermicro's on-site rack deployment services help enterprises build data centers from the ground up, covering planning, design, bring-up, validation, testing, and installation of rack configurations, servers, switches, and other network equipment to meet diverse enterprise needs.

8U Supermicro NVIDIA HGX B300 NVL16 system — designed for a wide range of data centers, with an improved thermally optimized chassis and 2.3TB of HBM3e memory per system.

NVIDIA GB300 NVL72 — a single-rack exascale AI supercomputer with nearly double the HBM3e memory capacity and network speed of earlier models.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in application-optimized total IT solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for enterprise, cloud, AI, and 5G telco/edge IT infrastructure. As a total IT solutions manufacturer, it offers servers, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further drives its development and production, enabling next-generation innovation from cloud to edge for customers worldwide. Products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency, optimized to lower total cost of ownership (TCO) and to reduce environmental impact through green computing. The award-winning Server Building Block Solutions® portfolio lets customers choose from a flexible, reusable, and highly diverse set of building blocks supporting a wide range of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free-air, or liquid-cooled) to deliver optimal performance for their workloads and applications.

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.
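As a quick sanity check on the figures above, multiplying the per-GPU Blackwell Ultra memory (288GB HBM3e) by the 72 GPUs in a GB300 NVL72 rack reproduces the "more than 20 TB" total quoted in the release:

```python
# Back-of-the-envelope check of the GB300 NVL72 memory figure quoted above,
# using the 288 GB-per-GPU HBM3e capacity stated for Blackwell Ultra.
GPUS_PER_RACK = 72
HBM3E_PER_GPU_GB = 288

total_gb = GPUS_PER_RACK * HBM3E_PER_GPU_GB
total_tb = total_gb / 1024  # binary TB; vendors sometimes quote decimal

print(f"{total_gb} GB ~ {total_tb:.2f} TB")  # 20736 GB ~ 20.25 TB, i.e. "more than 20 TB"
```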

Source: PR Newswire
[AI War Room 5] AAEON Brings AI into the ICU for Accurate Patient Monitoring

[Taipei] AAEON Technology (stock code: 6579), a leading developer and manufacturer of professional IoT and AI edge computing platforms, has partnered with a US system integrator to apply AI in the intensive care unit (ICU) to monitor patients' conditions and vital signs. AAEON's edge AI products, the MIX-Q670A1 motherboard paired with the GAR-A750E graphics card built on the Intel® Arc™ A750E GPU, are used in this smart-medical application to speed up the detection of situations requiring ICU staff intervention, adding an extra layer of protection for patient safety.

Edge AI applications in smart healthcare are increasingly widespread, from diagnosis with high-resolution image-inference models to autonomous service robots performing routine tasks. Large volumes of data must be transmitted and analyzed, requiring high compute power to support deep learning algorithms. In addition, ICU patients are connected to many instruments, such as electrocardiogram (ECG) monitors, pulse oximeters, and temperature sensors, so secure wireless transmission is essential. The customer chose AAEON's MIX-Q670A1 with the GAR-A750E for its high compute performance and because the Intel® Distribution of OpenVINO™ toolkit can convert and optimize AI models for the application, simplifying the integration of software and hardware. The customer also wanted to transmit the required information wirelessly through medical Internet of Things (IoT) devices; AAEON's edge AI boards, which emphasize high performance and transmission security, are key to reliably moving large volumes of data to the ICU monitoring station.

The AAEON MIX-Q670A1 motherboard is compatible with 12th, 13th, and 14th Gen Intel® Core™ processors and supports up to 64GB of dual-channel DDR5 memory. The customer selected the Intel® Core™ i7-13700 processor, which not only meets the demands of tasks such as data integration, system coordination, and UI rendering, but also efficiently allocates compute resources to background processes through the CPU's performance hybrid architecture. This design maintains a manageable balance and ensures the thermal capacity of the motherboard's dedicated CPU fan is not exceeded. The AI inference and complex data-analysis tasks required by the ICU monitoring station are handled by the GAR-A750E, driven by the Intel® Arc™ A750E GPU. Installed in the MIX-Q670A1's 16-lane PCIe Gen 5 slot, the GAR-A750E accelerates AI inference and, through Intel Deep Link integration, acts as a bridge between CPU and GPU. This configuration ensures the most effective distribution of compute and AI loads while minimizing latency. With 28 Xe cores and 448 Intel® XMX engines, the GAR-A750E can execute large-scale matrix multiplication and other neural-network inference tasks, enabling concurrent inference on incoming patient data and helping detect anomalies.

"For patient-monitoring data acquisition, the US system integrator developed medical IoT devices supporting Wi-Fi and Bluetooth modules. To protect data in transit, the MIX-Q670A1 is equipped with a built-in TPM 2.0 chip and uses a Wi-Fi module supporting the WPA3 (Wi-Fi Protected Access 3) encryption protocol, along with Intel® AES New Instructions (AES-NI), Intel® Hardware Shield, and Intel® Total Memory Encryption (TME). Dual HDMI 2.0 display ports present complex medical data on dashboards in an easy-to-understand format, so nursing staff can operate the system easily without missing any important information," said Yi-Yun Hung, product manager of AAEON's Industrial Computing Division.

For more information on the MIX-Q670A1 and GAR-A750E, visit AAEON's website at www.aaeon.com, or contact Ms. Liu at AAEON's domestic sales office at 02-89191234 ext. 1142. Customers with small-quantity needs can purchase through the eShop.

About AAEON

Founded in 1992, AAEON Technology Group is a leading Taiwanese developer and manufacturer of intelligent IoT solutions. It develops, manufactures, and markets IoT and AI edge computing solutions worldwide, along with embedded boards and systems, industrial LCDs, rugged tablets, industrial PCs, network security appliances, and related accessories, providing OEM/ODM customers and system integrators with complete, professional software and hardware solutions. AAEON also has a dedicated team for customization services, supporting customers from initial design through product development, mass production, and after-sales service. AAEON currently offers a range of AI edge computing products and system-integration solutions for smart cities, smart retail, and smart manufacturing. AAEON is an Intel Titanium-level member and an NVIDIA Elite partner. For more details, please visit AAEON's official website.
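The anomaly detection described above runs on proprietary models. Purely as an illustration of the pattern (not AAEON's or the integrator's actual method), a minimal rolling z-score check over a vital-sign stream, the kind of lightweight screen an edge node can run continuously alongside heavier GPU inference, might look like this:

```python
# Toy sketch: flag anomalies in a streaming vital sign (e.g., heart rate)
# with a rolling z-score over the most recent readings.
from collections import deque
from statistics import mean, stdev

def make_detector(window=10, threshold=3.0):
    history = deque(maxlen=window)
    def check(sample):
        # Only judge once a full window of history exists.
        anomalous = (
            len(history) == history.maxlen
            and stdev(history) > 0
            and abs(sample - mean(history)) / stdev(history) > threshold
        )
        history.append(sample)
        return anomalous
    return check

check_hr = make_detector()
readings = [72, 74, 73, 75, 74, 72, 73, 74, 75, 73, 140]  # final beat is a spike
flags = [check_hr(r) for r in readings]
print(flags[-1])  # the 140 bpm spike is flagged as anomalous
```

A real deployment would replace the z-score with a trained model and feed it sensor data over the secured wireless link described above; the structure (bounded history, per-sample decision) stays the same.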

Source: AAEON
Cognizant to Deploy Neuro AI Platform to Accelerate Enterprise AI Adoption in Collaboration with NVIDIA

Cognizant will offer solutions across key growth areas, including enterprise AI agents, tailored industry large language models and infrastructure with NVIDIA AI.

TEANECK, N.J., March 25, 2025 /PRNewswire/ -- Cognizant (NASDAQ: CTSH) announced advancements built on NVIDIA AI aimed at accelerating the cross-industry adoption of AI technology in five key areas: enterprise AI agents, industry-specific large language models (LLMs), digital twins for smart manufacturing, foundational infrastructure for AI, and the capabilities of Cognizant's Neuro® AI platform to integrate NVIDIA AI technology and orchestrate across the enterprise technology stack.

Cognizant is working with global clients to help them scale AI value efficiently, leveraging extensive industry experience and a comprehensive AI ecosystem comprising infrastructure, data, models, and agent development powered by proprietary platforms and accelerators. NVIDIA AI plays a key role in Cognizant's AI offerings, with active client engagements underway across industries to enable growth and business transformation.

"We continue to see businesses navigating the transition from proofs of concept to larger-scale implementations of enterprise AI," said Annadurai Elango, president, Core Technologies and Insights, Cognizant. "Through our collaboration with NVIDIA, Cognizant will be building and deploying solutions that accelerate this process and scale AI value faster for clients through integration of foundational AI elements, platforms and solutions."

"From models to applications, enterprise AI transformation requires full-stack software and infrastructure with access to domain-specific data," said Jay Puri, executive vice president of Worldwide Field Operations, NVIDIA. "The Cognizant Neuro AI platform is built with NVIDIA AI to deliver specialized LLMs and applications to ready businesses for the era of AI with reasoning agents and digital twins."
At NVIDIA GTC 2025, Cognizant presented its intent to deliver offering updates across the following five areas:

Enterprise AI agentification powered by Cognizant® Neuro AI Multi-Agent Accelerator: Running on NVIDIA NIM™ microservices, this framework will enable clients to rapidly build and scale multi-agent AI systems for adaptive operations, real-time decision-making and personalized customer experiences. With these frameworks, clients can create and orchestrate agents using a low-code framework or use pre-built agent networks for various enterprise functions and industry-specific processes such as sales, marketing, and supply chain management. The frameworks also allow clients to easily integrate third-party agent networks and most LLMs.

Building multi-agent systems for scale: Cognizant works to enhance business operations through multi-agent systems and integration with NVIDIA NIM, NVIDIA Blueprints, and NVIDIA Riva speech AI. The company will be developing a future-proof agent architecture that supports modular and adaptable agent design to meet evolving needs and to ensure the long-term viability and adaptability of AI solutions. This includes pre-built integrations with security guardrails and human oversight. This approach aims to enable enterprises to develop and deploy market-ready applications tailored to their specific needs using the pre-built agent catalog. Examples include industry agents such as multi-agent systems for insurance claims underwriting, appeals and grievances, automated supply chains, and contract management.

Industry LLMs: Cognizant is developing industry-oriented LLMs powered by NVIDIA NeMo and NVIDIA NIM. These solutions are tailored to meet the unique needs of different industries and build on Cognizant's deep industry expertise to drive innovation and improve business outcomes. For example, Cognizant has developed a fine-tuned language model to transform healthcare administrative processes. The system leverages Cognizant's domain expertise and NVIDIA technology to enhance medical code extraction and support higher accuracy, fewer errors, and better compliance with HIPAA and GDPR standards. It is designed to help clients cut costs, decrease latency, improve revenue cycle management and help ensure accurate risk adjustment. In internal Cognizant benchmarking, the model has demonstrated effectiveness in reducing effort by 30-75 percent, boosting coding accuracy by 30-40 percent, and accelerating time to market by 40-45 percent.

Industrial digital twins: Cognizant's smart manufacturing and digital twin offerings, accelerated by NVIDIA Omniverse™, aim to drive digital transformation by combining NVIDIA Omniverse's synthetic data generation, accelerated computing, and physical AI simulation technologies to address challenges in manufacturing operations and supply chain management. These capabilities are designed to help clients enhance plant layout and process simulations with real-time insights and predictive analytics, while also supporting improved operational efficiency and optimized plant capital expenditure. The offering enables integration of diverse data from applications, systems and sensors with synthetic data, allowing clients to simulate various scenarios and find solutions to issues in the plant. Additionally, by building the necessary digital infrastructure, including IT systems and skilled personnel, Cognizant's offerings can be used to create and manage digital twins for large-scale systems, such as factories, smart grids, warehouses, or entire cities, with precision and efficiency.

Infrastructure for AI: Implementing AI effectively requires robust AI infrastructure and data prepared for AI. Cognizant's infrastructure for AI, accelerated by NVIDIA, will provide clients access to NVIDIA AI technology via "GPU as a Service", along with secure and managed infrastructure. This helps ensure that AI models can run in various environments, including the cloud, data centers, or at the edge. Additionally, Cognizant intends to use NVIDIA RAPIDS™ Accelerator for Apache Spark to help clients accelerate data pipelines for AI implementations, facilitating efficient and scalable operations. In one example implementation for a large US healthcare client, Cognizant's infrastructure for AI delivered a 2.7x cost-efficiency improvement and a 1.8x performance gain in their Spark workloads.

"As we enter the era of AI industrialization, enterprises are seeking to accelerate the value velocity of their AI investments, focusing on outsized economic impact, agentic-led workflow transformation, and industry-specific deployments," said Nitish Mittal, Partner, Everest Group. "Cognizant's deepening partnership with NVIDIA signals the right trajectory for forward-thinking enterprises aiming to unlock breakthrough value in the AI era."

About Cognizant
Cognizant (Nasdaq: CTSH) engineers modern businesses. We help our clients modernize technology, reimagine processes and transform experiences so they can stay ahead in our fast-changing world. Together, we're improving everyday life. See how at www.cognizant.com or @cognizant.

For more information, contact:
U.S.: Ben Gorelick, benjamin.gorelick@cognizant.com
Europe / APAC: Christina Schneider, christina.schneider@cognizant.com
India: Rashmi Vasisht, rashmi.vasisht@cognizant.com
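For context on the RAPIDS Accelerator for Apache Spark mentioned in the article: it is enabled through Spark's standard plugin mechanism rather than code changes to the job itself. A minimal launch sketch follows; the jar path, GPU count, and job script name are illustrative placeholders, not details from the article.

```shell
# Sketch: enabling the RAPIDS Accelerator on an existing Spark job.
# com.nvidia.spark.SQLPlugin is the documented plugin entry point;
# rapids-4-spark.jar and your_etl_job.py are placeholder names.
spark-submit \
  --jars rapids-4-spark.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  your_etl_job.py
```

Because acceleration is applied at the plugin layer, unmodified Spark SQL and DataFrame pipelines can fall back to the CPU for unsupported operations, which is what makes the kind of drop-in speedups described above possible.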

Source: PR Newswire