ATLANTA, July 1, 2024 /PRNewswire/ -- AMI®, the global leader in Dynamic Firmware for worldwide computing, today announced that its MegaRAC® SP-X manageability solution has achieved full validation and compliance with the NVIDIA Validation Suite (NVVS) on NVIDIA MGX modular platforms powered by the NVIDIA GH200 Grace Hopper Superchip.

AMI's industry-leading MegaRAC SP-X Server Management Solution offers unparalleled remote management capabilities for server platforms. Its seamless performance and reliability consistently ensure the stability, safety, and security of managed servers.

NVVS is a purpose-built, system-level tool intended for use in production environments to evaluate cluster readiness prior to workload deployment. The validation procedure checks for hardware defects, software and system configuration issues, diagnostic and logging deficiencies, performance degradation, and more; running it helps resolve these issues and facilitates the smooth deployment of cloud-ready AI platforms.

AMI continues to deploy its global resources to support CSPs, OEMs, and ODMs worldwide as they build NVIDIA GH200 Grace Hopper Superchip-based server platforms designed for high-performance computing (HPC) and AI applications. AMI is a member of the NVIDIA Partner Network.

"By adding compliance for the NVIDIA Validation Suite to our MegaRAC SP-X Server Management Solution, we are delivering high levels of confidence and compatibility to CSPs, OEMs, and ODMs as they roll out their latest NVIDIA MGX server platforms with NVIDIA Grace CPUs and NVIDIA Grace Hopper Superchips," said Anurag Bhatia, SVP, Global Manageability Solutions Group at AMI.

Follow AMI on LinkedIn and X/Twitter to receive the latest news and announcements.

AMI® and MegaRAC® are registered trademarks of AMI in the US and/or elsewhere. All other trademarks and registered trademarks are the property of their respective owners.

About AMI
AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world's compute platforms from on-premises to the cloud to the edge. AMI's industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry.
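For readers unfamiliar with NVVS-style cluster-readiness checks, the sketch below shows one way an operator might script a pre-deployment GPU diagnostic. It is a minimal illustration only, not part of the AMI or NVIDIA announcement: it assumes the NVIDIA DCGM command-line tool `dcgmi` (which drives NVVS diagnostics in recent releases) is installed on the node, and the run level, timeout, and pass/fail handling are assumptions chosen for the example.

```python
# Illustrative sketch: a minimal pre-deployment GPU readiness check.
# Assumes NVIDIA DCGM is installed and "dcgmi" is on PATH; run level,
# timeout, and output handling are example choices, not vendor guidance.
import subprocess
import sys


def run_gpu_diagnostic(run_level: int = 1) -> bool:
    """Run a DCGM/NVVS diagnostic pass and report whether it passed."""
    try:
        result = subprocess.run(
            ["dcgmi", "diag", "-r", str(run_level)],
            capture_output=True,
            text=True,
            timeout=1800,  # longer run levels can take many minutes
        )
    except FileNotFoundError:
        print("dcgmi not found; install NVIDIA DCGM first.", file=sys.stderr)
        return False

    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode == 0


if __name__ == "__main__":
    ok = run_gpu_diagnostic(run_level=1)  # 1 = quick tests; higher levels run longer suites
    sys.exit(0 if ok else 1)
```

In practice, a script like this would typically be wired into cluster provisioning or burn-in workflows so that nodes failing the diagnostic are held back from workload scheduling.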
TAIPEI, July 1, 2024 /PRNewswire/ -- Solomon, a leader in advanced vision and robotics solutions, announced a collaboration with NVIDIA at COMPUTEX 2024. The collaboration focuses on integrating Solomon's product offerings with the NVIDIA Isaac robotics platform to enhance Solomon's 3D robotics vision and augmented intelligence solutions.

At COMPUTEX 2024, Solomon's VP of Research & Development, Xuan-Loc Nguyen, introduced the META-aivi AR + AI vision system for SOP validation to visiting NVIDIA guests Madison Huang and Lori Huang.

"We are thrilled to integrate the NVIDIA Isaac platform into our products," said Johnny Chen, CEO of Solomon. "NVIDIA's advanced AI and robotics tools will enhance our product capabilities in 3D machine vision, robotics control, and augmented intelligence, helping drive greater innovation in industrial automation."

A key highlight of this collaboration is Solomon's bin-picking system, enhanced by NVIDIA Isaac Manipulator accelerated libraries, which are based on NVIDIA Isaac ROS. The system delivers path planning and execution that is eight times faster and reduces path-singularity occurrences by 50% compared with conventional algorithms. Combined with AccuPick's advanced image recognition, these advancements enable smaller robot cells without compromising cycle time, which is essential for efficient bin picking in factories and order picking in logistics centers.

The NVIDIA Isaac platform leverages generative AI to offer powerful foundation models for robotics. Solomon will continue to deliver innovative products and applications by incorporating multiple NVIDIA Isaac technologies, with the goal of bringing smarter automation to manufacturing, retail, logistics, and other sectors.

"The era of AI robotics has arrived," said Deepu Talla, Vice President of Robotics and Edge Computing at NVIDIA. "To meet this demand, NVIDIA is building a full-stack, accelerated robotics platform to enable ecosystem leaders such as Solomon to advance the deployment of autonomous machines across the world's largest industries."

About Solomon:
Solomon provides advanced vision solutions, including 3D bin picking, vision-guided robots, AI-based defect inspection, and augmented intelligence blending AI and AR. What sets Solomon apart is its embedded rapid AI model training, which allows users to customize models with minimal time investment. A strong focus on productivity and innovation positions Solomon at the forefront of industrial AI and 3D vision applications, excelling in defect detection, bin picking, and workforce optimization.

CONTACT: Anu Kanwar, Business Development Manager, anu_kanwar@solomon-3d.com
BANGKOK, June 26, 2024 /PRNewswire/ -- GreenNode, a business unit of VNG specializing in AI Cloud services and a preferred NVIDIA Cloud Partner (NCP), has officially launched a large-scale AI data cluster in Bangkok, Thailand. GreenNode aims to become Asia's leading AI Cloud service provider by supercharging regional AI businesses with high-performance computing (HPC) AI resources.

From left to right: Founder Le Hong Minh (VNG Corporation), CEO Nguyen Le Thanh (GreenNode), Senior Director Dennis Ang (NVIDIA), and CEO Lionel Yeo (STT GDC SEA)

The facility is one of Southeast Asia's first AI-ready hyperscale data centers, operated under GreenNode's Cloud Operation Excellence practice. Mr. Dennis Ang, Senior Director, Enterprise Business, ASEAN and ANZ Region, NVIDIA, emphasized that to stay ahead of the current wave of generative AI, companies need two essential elements: AI data centers and AI factories. These are the areas where NVIDIA is collaborating closely with the VNG GreenNode and ST Telemedia Global Data Centres (STT GDC) teams. "Together, we have completed and delivered these two key elements to our customers. Congratulations to VNG GreenNode on their success, and NVIDIA eagerly anticipates further collaboration opportunities in the future."

GreenNode's AI Cloud cluster in STT Bangkok 1 meets global standards, holding LEED Gold certification[1], TIA-942 Rating-3 DCDV certification[2], and Uptime Tier III standards. GreenNode aims to provide a one-stop solution for businesses' AI journeys: it is deploying AI infrastructure with a dedicated 20MW capacity, equipped with the latest InfiniBand network offering up to 3.2Tbps of bandwidth for GreenNode's servers and a multi-tenant hyperscale storage platform, ready to deliver robust AI Cloud services and GPU infrastructure to customers.

During his speech at the ceremony, Mr. Lionel Yeo, CEO, Southeast Asia, ST Telemedia Global Data Centres, said: "Over the next four years, investment in AI will significantly increase, with about 30% coming from the Asia-Pacific region. This area is becoming increasingly dynamic, making collaborations like today's more meaningful. Congratulations to VNG GreenNode for successfully commercializing AI Cloud in just six months. I believe that together, we will contribute to positioning Asia at the forefront of the global technology wave in the coming years."

GreenNode's product portfolio comprises three main groups: bare-metal GPUs with thousands of NVIDIA H100 Tensor Core GPUs, a machine learning (ML) platform, and priority access to NVIDIA AI Factory. GreenNode is a product-focused company that not only delivers infrastructure but also trains advanced AI models and platforms, leveraging its expertise to help startups build their own models. This approach is a unique selling proposition that underscores GreenNode's commitment to operational excellence. GreenNode is also pioneering in Southeast Asia by building and offering a remote parameter management platform, allowing global customers to flexibly access and scale training parameters with ease, saving time and effort for businesses of all sizes.

"This milestone has yielded positive signals in both technological advancement and business performance, as the concept was implemented swiftly in a short period. This is just the first step, and we are committed to long-term investment to become a leading provider of AI Cloud services in Southeast Asia," stated VNG Founder and CEO Le Hong Minh.
Recently, GreenNode has secured deals worth millions of dollars to provide customers worldwide with AI infrastructure and advanced AI solutions. "With thousands of powerful NVIDIA GPUs and STT GDC's international-standard data centers, GreenNode aims to be a one-stop solution provider for global clients. However, there is much we still need to do, including continued investment in R&D to be an AI pioneer in the region," shared Mr. Nguyen Le Thanh, CEO of GreenNode & VNG Digital Business.

At the VNG 2024 Annual Shareholders' Meeting, VNG Founder and CEO Le Hong Minh emphasized three strategic growth drivers for the company in the coming years: AI, "Go Global," and Platform. VNG stands out as one of the few Southeast Asian tech companies rapidly and fully embracing AI, with significant investments in infrastructure, platforms, and applications. VNG aims to be a leading AI service provider in Vietnam and the region.

About GreenNode
GreenNode, a leading NVIDIA Cloud Partner in Asia, specializes in AI infrastructure and AI product innovation. Operating a large-scale GPU cloud in Thailand and Vietnam that conforms to the NVIDIA reference architecture, GreenNode is committed to meeting the global demand for AI and ensuring unparalleled service reliability and technological excellence. The company's rapid expansion in the APAC region is supported by VNG Digital Business, renowned for its robust digital solutions delivered to over 1,000 enterprises.

About VNG
Established in 2004, VNG is a leading digital ecosystem in Vietnam, offering a diverse portfolio of products and services across four main groups: Online Games, Zalo & AI, Fintech, and Digital Business. VNG's mission is to "Build Technologies and Grow People. From Vietnam to the world." VNG's innovations have significantly enhanced users' global digital interactions. The company currently employs over 3,600 staff across ten international cities.

[1] Issued by the U.S. Green Building Council for energy efficiency and low environmental impact.
[2] A data center certification under the TIA-942 standard of the Telecommunications Industry Association, covering operational sustainability and disaster recovery.
Fire News (火報) reporter Chen Sheng-wei (陳聖偉) / Compiled report

Facing constantly changing market demands and technological advances, enterprises must innovate continuously to stay competitive. Dell Technologies, together with NVIDIA and its AI ecosystem partners, has launched the Dell AI Factory solution, which combines extensive AI project experience with advanced infrastructure to help enterprises use AI to accelerate the pace of innovation.

In a rapidly changing business environment, an enterprise's capacity for innovation is a key factor in whether it can win. According to Dell Technologies' Innovation Catalysts study, 56% of respondents consider innovation an important part of their business strategy, and 81% believe AI and generative AI will bring dramatic change to their industries. Although generative AI is an important enabler for turning ideas into reality, enterprises adopting it face many challenges, including technology, AI talent, data sovereignty, data quality and protection, security and compliance, application scenarios, return on investment, and the ongoing operations needed to ensure success. Dell Technologies, an early adopter of AI for improving employee productivity, has now launched the Dell AI Factory solution built with NVIDIA and partners at every layer of the AI ecosystem, helping enterprises turn ideas into innovative, actionable solutions and strengthen their market competitiveness.

The world's broadest portfolio of AI solutions, from desktop to data center to cloud. Image source: Dell Inc.

Li Baifei (李百飛), Vice President of Technology at Dell Technologies, said generative AI is used very broadly inside Dell Technologies, spanning R&D, manufacturing, marketing, sales support, finance, HR, operations, and after-sales service. Applications such as Dell Chat (an internal private-domain chatbot assistant), Translate (a translation helper), and Copilot (Microsoft's enterprise ChatGPT) are especially popular with Dell's internal knowledge workers and have greatly improved employee productivity. "We chose to build generative AI models in-house mainly to properly protect confidential data, which is what most enterprises care about most. We have a very complete AI infrastructure and real deployment experience, plus close partnerships with NVIDIA and partners across the AI ecosystem, so we can help customers accelerate the use of AI to drive business transformation, raise productivity, improve customer experience, and support revenue growth and breakthrough innovation. This is our greatest advantage over competitors."

According to Dell Technologies research, 73% of respondents consider their data and intellectual property highly valuable but worry that generative AI tools could expose them to third-party access. The Dell AI Factory solution helps customers address data security on-premises and in the cloud and turn ideas into innovation faster, making it an effective way to use AI to strengthen enterprise competitiveness.

Dell Technologies is promoting the AI Factory. Image source: Dell Inc.

Complete advisory services help enterprises clarify their AI needs

Dell Technologies, adept at tracking market trends and enterprise needs, launched its AI infrastructure some time ago and has continuously optimized it to meet the needs of diverse AI projects across different scenarios, earning recognition from users across industries. In Forrester's 2024 report on AI infrastructure solutions, Dell Technologies was named a leader, receiving the highest scores in architecture, configuration, training, vision, partner ecosystem, and support services and product criteria.

Li Baifei noted that the Dell AI Factory solution's core concept spans four stages: helping customers define an AI strategy, preparing the data needed for AI models, deploying a generative AI platform and testing models, and operating and continuously optimizing AI models. Because most enterprises lack AI talent, they often have many misconceptions about AI technology and do not know where to start. Dell Technologies has a very complete AI advisory team and ecosystem of partners that can help enterprises clarify their real needs, and can even keep incorporating new data into AI models during the later operational phase, serving as a capable assistant for enterprise growth.

Adopting the Dell AI Factory solution is also straightforward. First, customers can choose which generative AI scenarios they want to put into production, such as digital assistants, copilots, enterprise knowledge bases, RAG augmentation of LLMs, fine-tuning, or full training; Dell's professional AI services team can help plan an appropriate architecture and a customized deployment plan end to end. Second, customers already running AI projects can select suitable servers, workstations, networking, and storage from the Dell AI Factory solution to quickly build AI-validated infrastructure. Finally, in addition to the traditional purchase model, the Dell AI Factory solution offers the popular APEX subscription service, letting enterprises begin their AI transformation without crowding out budgets for other projects.

The Dell data lakehouse appliance solution helps enterprises quickly extract high-quality information from big data for AI applications

As noted above, the Dell AI Factory covers an integrated, validated architecture of compute, high-speed networking, storage, client devices, software, and services, supporting enterprises from AI model building, fine-tuning, retrieval augmentation, and inference through to the deployment of various AI use cases. But an enterprise's own data is the key to differentiated competitive advantage in AI. The globally recognized Dell Data Lakehouse, a modern data lakehouse solution that integrates Starburst, lets enterprises perform high-speed data querying, exploration, processing, and analysis across on-premises, multicloud, and edge environments in a timely, low-effort way while meeting data sovereignty requirements, providing high-quality data services for AI applications.

Li Baifei said that with the arrival of the NVIDIA Grace Blackwell superchip, Dell Technologies has also introduced rack-scale, high-density, liquid-cooled architectures to deliver optimal performance density for complex AI workloads. Dell PowerEdge servers, for example, support next-generation NVIDIA GPUs such as the NVIDIA B200 Tensor Core GPU, which is expected to deliver up to 15 times greater AI inference performance. In addition, Dell PowerEdge servers can be paired with Dell retrieval-augmented generation (RAG) solutions to improve the quality, accuracy, and usability of generative AI model output. Dell PowerScale is the world's first Ethernet storage solution validated for NVIDIA DGX SuperPOD systems, giving enterprises a faster, more efficient, and more cost-effective AI storage architecture.

Amid the wave of generative AI, Dell Technologies will use the one-stop Dell AI Factory solution to help enterprises drive innovation and transformation with AI, laying a solid foundation for long-term growth and competitive advantage.
Hewlett Packard Enterprise (NYSE: HPE) and NVIDIA today announced NVIDIA AI Computing by HPE, a portfolio of co-developed AI solutions and joint go-to-market (GTM) initiatives that help enterprises accelerate adoption of generative AI.

One of the portfolio's core offerings is HPE Private Cloud AI, the industry's first solution to deeply integrate NVIDIA AI computing, networking, and software with HPE's AI storage, compute, and the HPE GreenLake cloud platform. The solution helps enterprises of every size develop and deploy generative AI applications in an energy-efficient, fast, flexible, and sustainable way. HPE Private Cloud AI introduces a new OpsRamp AI copilot to improve IT workload and operational efficiency, and it provides a self-service cloud experience, full lifecycle management, and four configurations suited to different AI workloads and use cases.

All products and services in NVIDIA AI Computing by HPE will be promoted through a joint go-to-market strategy spanning sales teams and channel partners, training, and global system integrator partners including Deloitte, HCLTech, Infosys, TCS, and Wipro, helping enterprises across industries run complex AI workloads.

During the HPE Discover keynote, HPE President and CEO Antonio Neri and NVIDIA founder and CEO Jensen Huang jointly announced NVIDIA AI Computing by HPE. The announcement deepens the two companies' decades-long partnership and reflects the substantial time and resources both have invested in this area.

"Generative AI holds immense potential for enterprise transformation, but the complexity of fragmented AI technology brings many risks and obstacles. It not only makes it difficult for enterprises to adopt the technology at scale, but can also endanger their most valuable asset: their proprietary data," said Neri. "To help enterprises unleash the full potential of generative AI, HPE and NVIDIA co-developed a turnkey private cloud for AI that lets enterprises focus their resources on developing new AI use cases, boosting productivity and opening new revenue streams."

"Generative AI and accelerated computing are driving a fundamental transformation as every industry races to join this industrial revolution," said Jensen Huang. "Never before have NVIDIA and HPE integrated our technologies so deeply. By combining the entire NVIDIA AI computing stack with HPE's private cloud technology, we are equipping enterprise clients and AI professionals with the most advanced computing infrastructure and services to expand the frontier of AI."

HPE and NVIDIA co-developed the Private Cloud AI portfolio
HPE Private Cloud AI delivers a unique cloud experience that accelerates innovation and return on investment while managing enterprise AI risk. The portfolio offers:
● Support for inference, fine-tuning, and RAG AI workloads that use proprietary data.
● Enterprise control to meet data privacy, security, transparency, and governance requirements.
● A cloud experience with ITOps and AIOps capabilities to increase productivity.
● A fast, flexible way to respond to future AI opportunities and growth.

A curated AI and data software stack in HPE Private Cloud AI
The NVIDIA AI Enterprise software platform is the foundation of the AI and data software stack and includes NVIDIA NIM™ inference microservices. NVIDIA AI Enterprise accelerates data science workflows and streamlines the development and deployment of production-grade copilots and other generative AI applications. The included NVIDIA NIM provides easy-to-use microservices that optimize AI model inference, enabling enterprises to move AI models for a wide range of use cases smoothly from prototype to secure production deployment.

HPE AI Essentials complements NVIDIA AI Enterprise and NVIDIA NIM. The HPE AI Essentials software provides ready-to-run AI and data foundation tools with a unified control plane, delivering adaptable solutions, ongoing enterprise support, and trusted AI services such as data and model compliance and extensible capabilities, helping keep AI workflows compliant, explainable, and reproducible throughout their lifecycle.

To deliver the best performance for the AI and data software stack, HPE Private Cloud AI offers a fully integrated AI infrastructure stack that includes NVIDIA Spectrum-X Ethernet networking, HPE GreenLake for File Storage, and HPE ProLiant servers with support for NVIDIA L40S GPUs, NVIDIA H100 NVL Tensor Core GPUs, and the NVIDIA GH200 NVL2 platform.

A cloud experience powered by the HPE GreenLake cloud platform
HPE Private Cloud AI provides a self-service cloud experience powered by the HPE GreenLake cloud platform. Through a single, platform-based control console, HPE GreenLake cloud services provide manageability and observability tools to automate, orchestrate, and manage endpoints, workloads, and data across hybrid environments. It also includes sustainability metrics for workloads and endpoints.

HPE GreenLake cloud with OpsRamp AI infrastructure observability and copilot assistant
OpsRamp's IT operations are integrated with the HPE GreenLake cloud to provide observability and AIOps for all HPE products and services. OpsRamp now provides observability for the end-to-end NVIDIA accelerated computing stack, including NVIDIA NIM and AI software, NVIDIA Tensor Core GPUs, AI clusters, and NVIDIA Quantum InfiniBand and NVIDIA Spectrum Ethernet switches. IT administrators gain insights to identify anomalies and monitor AI infrastructure and workloads across hybrid multicloud environments.

The new OpsRamp operations copilot uses a conversational assistant running on the NVIDIA accelerated computing platform to analyze large datasets and surface insights, increasing the productivity of operations management. OpsRamp will also integrate with CrowdStrike APIs so that customers can view endpoint security posture across their entire infrastructure and applications through a unified service map.

Accelerating time to value with AI: expanded collaboration with global system integrators
To help enterprises accelerate time to value, develop industry-focused AI solutions, and pursue use cases with clear business benefits, Deloitte, HCLTech, Infosys, TCS, and Wipro announced support for the NVIDIA AI Computing by HPE portfolio and HPE Private Cloud AI as part of their strategic AI solutions and services.

HPE adds support for NVIDIA's latest GPUs, CPUs, and Superchips
● The HPE Cray XD670 supports eight NVIDIA H200 NVL Tensor Core GPUs and is an excellent choice for large language model (LLM) builders.
● The HPE ProLiant DL384 Gen12 server with NVIDIA GH200 NVL2 is well suited for LLM users working with larger models or RAG.
● The HPE ProLiant DL380a Gen12 server supports up to eight NVIDIA H200 NVL Tensor Core GPUs, suiting LLM users who need to scale generative AI workloads flexibly.
● HPE will provide support for the NVIDIA GB200 NVL72 / NVL2 and the new NVIDIA Blackwell, NVIDIA Rubin, and NVIDIA Vera architectures when they become available.

High-density file storage certified for NVIDIA DGX BasePOD and NVIDIA OVX systems
HPE GreenLake for File Storage has achieved NVIDIA DGX BasePOD certification and NVIDIA OVX™ storage validation, giving customers a proven enterprise file storage solution for accelerating AI, generative AI, and GPU-intensive workloads at scale. HPE will offer solutions under the upcoming NVIDIA reference architecture storage certification program.

Availability
● HPE Private Cloud AI is expected to be generally available this fall.
● The HPE ProLiant DL380a Gen12 server with NVIDIA H200 NVL Tensor Core GPUs is expected to be generally available this fall.
● The HPE ProLiant DL384 Gen12 server with dual NVIDIA GH200 NVL2 is expected to be generally available this fall.
● The HPE Cray XD670 server with NVIDIA H200 NVL is expected to be generally available this summer.
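To illustrate how an application might consume the NVIDIA NIM inference microservices described above, the hedged sketch below calls a locally deployed NIM endpoint through its OpenAI-compatible chat-completions API. The endpoint URL, port, model name, and request parameters are illustrative assumptions for the example, not details from the HPE announcement.

```python
# Minimal sketch, assuming an NVIDIA NIM LLM microservice is already
# running locally and exposing its OpenAI-compatible REST API.
# Host, port, and model id below are placeholders, not vendor defaults.
import requests

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local deployment
MODEL_NAME = "meta/llama3-8b-instruct"  # example model id; substitute your deployed NIM

payload = {
    "model": MODEL_NAME,
    "messages": [
        {"role": "user", "content": "Summarize our Q2 support tickets in three bullet points."}
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

# Send the request and print the assistant's reply.
response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the interface follows the OpenAI chat-completions schema, existing client code written against that schema can usually be pointed at a NIM deployment by changing only the base URL and model name.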
~ Initial Access to 5,120 of NVIDIA’s Latest and Most Powerful GPUs ~

~ Develop and Deploy Advanced AI Enterprise Solutions for Fintech, Telecom, and Governments, Leveraging DIGIASIA’s Existing Infrastructure and NVIDIA’s Superior Hardware ~

~ Significant Expansion of DIGIASIA’s Comprehensive “Fintech as a Service” Ecosystem, Tapping into an Estimated USD 200-300 Billion Annual Global Opportunity for AI Across Financial Services1 ~

~ Catalyst to Achieve Significant Growth of Top and Bottom Line Beginning in the Fourth Quarter of 2024 ~

NEW YORK, June 24, 2024 (GLOBE NEWSWIRE) -- Digi Tech Limited, the UAE-based subsidiary of DIGIASIA Corp. (NASDAQ: FAAS) (“DIGIASIA” or the “Company”), a leading Fintech as a Service (“FaaS”) ecosystem provider, has secured allocation of an initial tranche of 5,120 NVIDIA (NASDAQ: NVDA) H200 GPUs. Access to NVIDIA’s GPUs will propel DIGIASIA’s development of cutting-edge AI solutions for enterprise customers in the fintech, telecom, and government sectors. The first iteration of these NVIDIA-powered solutions is expected to be deployed by the fourth quarter of 2024. DIGIASIA will base operations for its AI initiatives out of the Dubai International Financial Centre (“DIFC”) in the UAE, leveraging the UAE’s and the DIFC’s global leadership in advancing AI solutions.

Structure of Transaction and AI Fintech Platform

DIGIASIA has been allocated an initial tranche of 5,120 NVIDIA H200 GPUs with the option for an additional 10,240 GPUs. The total market value of the initial tranche exceeds $400 million, and exceeds $1.2 billion with the additional option. Initially, DIGIASIA will deploy these advanced GPUs in Southeast Asia, India, and the Middle East, with plans for global expansion. The integration of NVIDIA’s GPUs will significantly enhance DIGIASIA’s fintech infrastructure, boosting productivity and efficiency. This will enable DIGIASIA’s enterprise clients to implement advanced solutions such as AML, fraud detection, KYC, smart dealer lending, branchless banking, automated customer journeys, and deep encryption of financial data.

Market Opportunity

DIGIASIA’s access to NVIDIA GPUs opens up a substantial market opportunity, potentially tapping into a USD 200-300 billion annual global market in financial services. By leveraging NVIDIA’s cutting-edge GPUs and AI models, DIGIASIA aims to deliver advanced AI fintech solutions across Southeast Asia, India, and the Middle East. DIGIASIA plans to utilize its existing enterprise partners and identify additional strategic partners for AI data center hosting to support these solutions.

Executive Insights

Prashant Gokarn, CEO of DIGIASIA, stated, "NVIDIA GPUs are at the core of any AI-based solution. We are thrilled with this allocation, which allows us to develop the next generation of DIGIASIA’s embedded finance platform with generative AI, enhancing precision and productivity for enterprises. This will allow us to continue to support our existing and new enterprise customers in the AI revolution."

Subir Lohani, CFO and Chief Strategy Officer of DIGIASIA, commented, "Since going public in April, we have consistently executed our strategy to provide innovative solutions to our enterprise clients and expand our geographic reach. We plan to roll out the initial NVIDIA-powered solutions by the fourth quarter of 2024, driving significant growth and attractive returns.
We are excited to grow this initiative from the DIFC Innovation Hub / AI Campus in the UAE, which has become a global hub for the advancement of AI solutions."

Forward-Looking Statements:
This press release may contain forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. The words “believe”, “expect”, “anticipate”, “project”, “targets”, “optimistic”, “confident that”, “continue to”, “predict”, “intend”, “aim”, “will” or similar expressions are intended to identify forward-looking statements. All statements other than statements of historical fact are statements that may be deemed forward-looking statements. These forward-looking statements, including but not limited to statements concerning DIGIASIA and the Company’s operations, financial performance and condition, are based on current expectations, beliefs and assumptions which are subject to change at any time. DIGIASIA cautions that these statements by their nature involve risks and uncertainties, and actual results may differ materially depending on a variety of important factors such as government and stock exchange regulations, competition, and political, economic and social conditions around the world, including those discussed in DIGIASIA’s Form 20-F under the headings “Risk Factors”, “Management’s Discussion and Analysis of Financial Condition and Results of Operations” and “Business Overview” and other reports filed with the Securities and Exchange Commission from time to time. All forward-looking statements are applicable only as of the date they are made, and DIGIASIA specifically disclaims any obligation to maintain or update the forward-looking information, whether of the nature contained in this release or otherwise, in the future.

Investor Contact:
MZ North America
Email: FAAS@mzgroup.us

Company Contact:
Subir Lohani
Chief Strategy Officer and CFO
Email: subir.lohani@digiasia.asia