
Search Results

News search results matching "SUPERCOMPUTER": 22 articles in total, showing articles 1 - 22.
Fujitsu awarded contract to design next-generation flagship supercomputer FugakuNEXT

Accelerating scientific and technological innovation with Made-in-Japan CPU technology

KAWASAKI, Japan, June 18, 2025 /PRNewswire/ -- Fujitsu Limited today announced that it has been awarded a contract by the Japanese research and development institute RIKEN to design a next-generation flagship supercomputer. The contract for the supercomputer, provisionally named "FugakuNEXT," encompasses the overall system, computing nodes, and CPU components; the basic design phase is scheduled to run until February 27, 2026.

Supporting Japan's leadership in science and technology with a next-generation computing platform

The rapid growth of generative AI and other technologies is driving increased demand for diverse and large-scale computing resources for R&D. According to a report by the HPCI Steering Committee established by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), the importance of "AI for Science," i.e., initiatives that combine AI with simulation technology, real-time data, and automated experiments, is increasing, prompting nations to prioritize advanced computing infrastructure. Japan needs a new, flexible platform that will address these evolving demands, enable its leadership in science, technology, and innovation, and facilitate further societal and industrial advancement. The HPCI Program Steering Committee has chosen RIKEN as the primary entity responsible for developing this platform, and RIKEN has chosen Fujitsu to design it.

Building a foundation for future innovation with advanced CPU design

FugakuNEXT will leverage Fujitsu's established supercomputing expertise, incorporating advanced technologies from the FUJITSU-MONAKA general-purpose CPU currently under development, and will cater to evolving customer needs by allowing for seamless integration with GPUs and other accelerators. FUJITSU-MONAKA, built on leading-edge 2-nanometer technology, employs Fujitsu's unique technologies, including a microarchitecture optimized for advanced 3D packaging and ultra-low-voltage circuit operation. It aims to deliver both high performance and power efficiency across diverse next-generation computing applications, from edge computing to data centers, while ensuring safety, security, and ease of use.

The successor CPU to FUJITSU-MONAKA, tentatively named "FUJITSU-MONAKA-X" and intended for use in FugakuNEXT, will not only inherit and accelerate existing Fugaku application assets but also incorporate state-of-the-art AI processing acceleration capabilities to meet growing AI demands. This CPU is intended for broad application across sectors supporting society and industry, extending beyond its role in FugakuNEXT. Through its core Made-in-Japan CPU technology, Fujitsu will continue to deliver innovation and build trust, contributing to a world-class computing infrastructure and advancing Japanese science and technology.

Source: PR Newswire
Fujitsu and Yokohama National University achieve world's first real-time prediction of tornadoes associated with typhoons using supercomputer Fugaku

KAWASAKI, Japan, Feb. 12, 2025 /PRNewswire/ -- Fujitsu Limited and Yokohama National University today announced the achievement of the world's first real-time prediction of multiple typhoon-associated tornadoes using advanced supercomputing technology, significantly improving disaster preparedness. The new technology utilizes optimized large-scale parallel processing coupled with the enhanced Cloud Resolving Storm Simulator (CReSS), a weather simulator developed by Professor Kazuhisa Tsuboki, on Fujitsu's Fugaku supercomputer. This allows single, high-resolution simulations encompassing both large-scale typhoons and smaller-scale tornadoes, resulting in accurate, real-time predictions.

Previously, during simulations of the tornadoes associated with Typhoon No. 10, which hit Japan's Kyushu area in August 2024, it took more than 11 hours to predict whether or not tornadoes would occur, making the predictions impractical to act on. The new technology reduced the prediction time to 80 minutes, allowing the two partners to predict the occurrence of a tornado four hours in advance. This prediction calculation used only 5% of Fugaku's computational resources, indicating the potential for even larger-scale and faster predictions in the future. The two partners will release the enhanced CReSS to the research community within fiscal year 2024, significantly improving the prediction of severe weather events and enhancing disaster mitigation efforts.

Background

Approximately 20% of tornadoes in Japan occur alongside typhoons. In response to increasing tornado damage, Japan began issuing tornado warnings in 2008. However, compared to weather phenomena like precipitation, which can be predicted with high accuracy, tornadoes are difficult to predict due to their small scale and short duration. Tornado warnings currently have a validity period of about one hour, and there is a demand for longer warning periods. Fujitsu and Yokohama National University initiated a joint research project in November 2022, aiming to address societal challenges related to increasingly severe typhoons exacerbated by global warming. This collaboration, conducted under the Fujitsu Small Research Lab's "Fujitsu - Yokohama National University Typhoon Science and Technology Research Center Collaborative Research Laboratory," focuses on understanding typhoon formation mechanisms and on accelerating and improving the accuracy of typhoon prediction simulations.

Source: PR Newswire
MediaTek Collaborates with NVIDIA on the New NVIDIA GB10 Grace Blackwell Superchip Powering the NVIDIA Project DIGITS Personal AI Supercomputer

MediaTek brings its design expertise in Arm-based SoC performance and power efficiency to a groundbreaking device for AI researchers and developers

LAS VEGAS, Jan. 7, 2025 /PRNewswire/ -- MediaTek today announced it has collaborated with NVIDIA on the design of the NVIDIA GB10 Grace Blackwell Superchip for NVIDIA Project DIGITS, a personal AI supercomputer.

MediaTek is the world's No. 1 chip supplier for smartphones, smart TVs, Arm-based Chromebooks, Android tablets, and voice assistant devices (VAD). The company has invested heavily in bringing the best AI, connectivity and multimedia experiences to Arm-based system-on-a-chip (SoC) devices, across different platforms and users, with best-in-class power efficiency. MediaTek has brought all its technology expertise to this collaboration with NVIDIA to deliver a market-leading platform.

"Our collaboration with NVIDIA on the GB10 Superchip aligns with MediaTek's vision of helping make great technology accessible to anyone," said MediaTek Vice Chairman and CEO Rick Tsai. "Along with NVIDIA, we are working to usher in a new era of innovation and make AI ubiquitous."

"The age of AI is here. The combination of MediaTek's industry-leading CPU performance and power efficiency with NVIDIA's accelerated computing technologies will drive the next wave of innovation," said Jensen Huang, founder and CEO of NVIDIA. "Project DIGITS, with the new GB10 Superchip designed with MediaTek, makes our most powerful Grace Blackwell platform more accessible, placing it in the hands of developers, researchers and students to solve the most pressing issues of our time."

Today's collaboration is the latest between the two companies, building on MediaTek's work with NVIDIA to bring drivers and passengers novel experiences inside the car with new MediaTek Dimensity Auto Cockpit chips. MediaTek's Dimensity Auto Cockpit chips integrate NVIDIA's next-gen GPU-accelerated AI computing and NVIDIA RTX graphics. Additionally, MediaTek has integrated NVIDIA TAO, an AI model training and optimization toolkit, with MediaTek's NeuroPilot SDK to deliver advanced edge AI capabilities to IoT applications.

As part of MediaTek's vision to bring AI everywhere, MediaTek is delivering advanced AI capabilities across its portfolio, including its Dimensity portfolio for smartphones and tablets, Genio family for IoT devices, Pentonic series for smart TVs, and Kompanio line for Arm-based Chromebooks, along with the Dimensity Auto platform for vehicles. To learn more about the NVIDIA GB10 Superchip and Project DIGITS personal AI supercomputer, please visit https://www.nvidia.com/en-us/.

About MediaTek Inc.

MediaTek Incorporated (TWSE: 2454) is a global fabless semiconductor company that enables nearly 2 billion connected devices a year. We are a market leader in developing innovative systems-on-chip (SoC) for mobile, home entertainment, connectivity and IoT products. Our dedication to innovation has positioned us as a driving market force in several key technology areas, including highly power-efficient mobile technologies, automotive solutions and a broad range of advanced multimedia products such as smartphones, tablets, digital televisions, 5G, Voice Assistant Devices (VAD) and wearables. MediaTek empowers and inspires people to expand their horizons and achieve their goals through smart technology, more easily and efficiently than ever before. We work with the brands you love to make great technology accessible to everyone, and it drives everything we do.

Visit www.mediatek.com for more information.

Media Enquiries: Kevin Keating, pr@mediatek.com

Source: PR Newswire
ASUS Announces Key Milestone with Nebius and Showcases NVIDIA GB300 NVL72 System at GTC Paris 2025

Accelerating AI with scalable performance and next-gen infrastructure

KEY POINTS
Breakthrough compute: ASUS unveils NVIDIA® GB300 NVL72 AI Factory solutions to accelerate training and inference at scale
Scalable AI partnership: ASUS and Nebius deepen collaboration to deliver next-gen, NVIDIA® Blackwell-accelerated infrastructure
Compact power: New desktops bring petaflop-class performance and support for 200-billion-parameter models to the developer's desk

TAIPEI, June 13, 2025 /PRNewswire/ -- ASUS today joined GTC Paris at VivaTech 2025 as a Gold Sponsor, highlighting its latest portfolio of AI infrastructure solutions and reinforcing its commitment to advancing the AI Factory vision with a full range of NVIDIA® Blackwell Ultra solutions, delivering breakthrough performance from large-scale datacenters to personal desktops.

ASUS is also excited to announce a transformative milestone in its partnership with Nebius. Together, the two companies are enabling a new era of AI innovation built on NVIDIA's advanced platforms. Building on the success of the NVIDIA GB200 NVL72 platform deployment, ASUS and Nebius are now moving forward with strategic collaborations featuring the next-generation NVIDIA GB300 NVL72 platform. This ongoing initiative underscores ASUS's role as a key enabler in AI infrastructure, committed to delivering scalable, high-performance solutions that help enterprises accelerate AI adoption and innovation.

Andrey Korolenko, Chief Product and Infrastructure Officer at Nebius, said: "We have collaborated with ASUS for many years and appreciate its impressive capability to deliver swift and efficient solutions. ASUS not only delivers consistently against our exacting technical requirements, but also demonstrates deep professional expertise in building AI infrastructure. The company's forward-thinking approach and technical excellence have been a key enabler for our projects, and we look forward to working together to deliver the next generations of AI infrastructure."

AI servers: Building NVIDIA AI Factories for enterprise

Leading the charge in AI advancement, ASUS is driving scalable, agentic AI through increased token generation. At GTC Paris, ASUS will unveil its latest AI Factory infrastructure solutions built on NVIDIA RTX PRO Servers as well as the NVIDIA Grace Blackwell Ultra systems. The ASUS AI POD, built with the NVIDIA GB300 NVL72 system, delivers exceptional performance for complex AI inference tasks, making it ideal for advanced AI applications. Meanwhile, the ASUS XA NB3I-E12, featuring the NVIDIA HGX B300 system, pushes the boundaries of AI computing with higher FLOPS and a massive 2.3TB of HBM3e memory, accelerating training and inference for large-scale models.

To further address the growing demands of high-performance AI and HPC environments, ASUS introduced the new ESC8000A-E13P. This 4U NVIDIA MGX server supports up to eight NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, NVIDIA BlueField-3 DPUs, and NVIDIA ConnectX-8 SuperNICs with built-in PCIe® 6.0 switches, offering seamless integration, performance optimization, and scalability for modern data centers and agile IT deployments.

AI Inferencing: Enabling intelligent services at scale

ASUS also unveiled powerful AI inference solutions, introducing a new lineup of workstations and a compact supercomputer engineered to tackle today's most demanding workloads. Leading the range is the ExpertCenter Pro ET900N G3, the first system powered by the NVIDIA GB300 Grace Blackwell Ultra Superchip, with up to 784GB of large coherent memory. Another is the groundbreaking ASUS Ascent GX10, a compact AI supercomputer powered by the NVIDIA GB10 Grace Blackwell Superchip that delivers 1,000 AI TOPS of performance for demanding workloads. Equipped with an NVIDIA Blackwell GPU, a 20-core Arm CPU, and 128GB of memory, it supports AI models of up to 200 billion parameters, placing petaflop-scale inferencing capabilities on developers' desks. Designed from the ground up for AI, both products deliver exceptional performance for large-scale training and inference on a desktop. Combined with the NVIDIA AI software stack, they are purpose-built for teams that demand the best in AI development.

ASUS: Proven expertise in AI infrastructure

With server expertise dating back to 1995, ASUS delivers reliable, end-to-end infrastructure solutions, ranging from individual components to fully integrated systems, backed by world-class R&D and global manufacturing capabilities. Driven by the Ubiquitous AI, Incredible Possibilities vision, ASUS supports clients in accelerating their advancement in the global AI race. Through flexible customization, deep technical expertise, and a proven track record in deployment, ASUS empowers enterprises to scale AI initiatives with confidence and efficiency.

AVAILABILITY & PRICING

ASUS servers are available worldwide. Please visit https://servers.asus.com for more ASUS infrastructure solutions or contact your local ASUS representative for further information.

About ASUS

ASUS is a global technology leader that provides the world's most innovative and intuitive devices, components, and solutions to deliver incredible experiences that enhance the lives of people everywhere. With its team of 5,000 in-house R&D experts, the company is world-renowned for continuously reimagining today's technologies. Consistently ranked as one of Fortune's World's Most Admired Companies, ASUS is also committed to sustaining an incredible future. The goal is to create a net zero enterprise that helps drive the shift towards a circular economy, with a responsible supply chain creating shared value for every one of us.
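As a rough sanity check on the 200-billion-parameter figure quoted above for the Ascent GX10: the release does not state the numeric precision, so the 4-bit quantized-weight assumption in the sketch below is ours, not an ASUS specification.

```python
# Back-of-envelope check (our assumption: 4-bit quantized weights, which the
# release does not state) of how a 200-billion-parameter model relates to the
# Ascent GX10's 128 GB of memory.
params = 200e9

fp16_gb = params * 2.0 / 1e9   # 16-bit weights: 2 bytes per parameter
int4_gb = params * 0.5 / 1e9   # 4-bit weights: 0.5 bytes per parameter

print(f"FP16 weights: ~{fp16_gb:.0f} GB (would not fit in 128 GB)")
print(f"INT4 weights: ~{int4_gb:.0f} GB (fits in 128 GB with headroom for activations)")
```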

Source: PR Newswire
PolyU develops novel multi-modal agent to facilitate long video understanding by AI, accelerating development of generative AI-assisted video analysis

HONG KONG SAR - Media OutReach Newswire - 10 June 2025 - While Artificial Intelligence (AI) technology is evolving rapidly, AI models still struggle with understanding long videos. A research team from The Hong Kong Polytechnic University (PolyU), led by Prof. Changwen Chen, Interim Dean of the PolyU Faculty of Computer and Mathematical Sciences and Chair Professor of Visual Computing, has developed a novel video-language agent, VideoMind, that enables AI models to perform long video reasoning and question-answering tasks by emulating humans' way of thinking. The VideoMind framework incorporates an innovative Chain-of-Low-Rank Adaptation (LoRA) strategy to reduce the demand for computational resources and power, advancing the application of generative AI in video analysis. The findings have been submitted to world-leading AI conferences.

Videos, especially those longer than 15 minutes, carry information that unfolds over time, such as the sequence of events, causality, coherence and scene transitions. To understand the video content, AI models therefore need not only to identify the objects present, but also to take into account how they change throughout the video. As visuals in videos occupy a large number of tokens, video understanding requires vast amounts of computing capacity and memory, making it difficult for AI models to process long videos.

In designing VideoMind, the team made reference to a human-like process of video understanding and introduced a role-based workflow. The four roles included in the framework are: the Planner, to coordinate all other roles for each query; the Grounder, to localise and retrieve relevant moments; the Verifier, to validate the information accuracy of the retrieved moments and select the most reliable one; and the Answerer, to generate the query-aware answer. This progressive approach to video understanding helps address the challenge of temporal-grounded reasoning that most AI models face.

Another core innovation of the VideoMind framework lies in its adoption of a Chain-of-LoRA strategy. LoRA is a finetuning technique that emerged in recent years; it adapts AI models for specific uses without performing full-parameter retraining. The Chain-of-LoRA strategy pioneered by the team involves applying four lightweight LoRA adapters in a unified model, each of which is designed for calling a specific role. With this strategy, the model can dynamically activate role-specific LoRA adapters during inference via self-calling to seamlessly switch among these roles, eliminating the need and cost of deploying multiple models while enhancing the efficiency and flexibility of the single model. VideoMind is open source on GitHub and Huggingface, and details of the experiments conducted to evaluate its effectiveness in temporal-grounded video understanding across 14 diverse benchmarks are also available.
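To make the adapter-switching idea concrete, the sketch below shows how four named LoRA adapters could sit on one frozen backbone and be activated per role with the Hugging Face PEFT library. It is a minimal illustration under our own assumptions: the adapter names, local paths and base checkpoint are hypothetical placeholders, not the PolyU team's released VideoMind code.

```python
# Minimal sketch (our assumptions, not the authors' code) of the Chain-of-LoRA
# idea: one frozen backbone carries four named LoRA adapters, and inference
# switches adapters per role instead of deploying four separate models.
# Adapter paths/names are hypothetical; the backbone here is a text-only
# stand-in for the Qwen2-VL model the article mentions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")

# Attach one lightweight LoRA adapter per role.
model = PeftModel.from_pretrained(base, "adapters/planner", adapter_name="planner")
model.load_adapter("adapters/grounder", adapter_name="grounder")
model.load_adapter("adapters/verifier", adapter_name="verifier")
model.load_adapter("adapters/answerer", adapter_name="answerer")

def run_role(role: str, prompt: str) -> str:
    """Activate the adapter for one role and generate its output."""
    model.set_adapter(role)  # switch roles without reloading the backbone
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# A query then flows Planner -> Grounder -> Verifier -> Answerer, each stage
# reusing the same frozen weights plus only its own low-rank adapter.
```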
Comparing VideoMind with some state-of-the-art AI models, including GPT-4o and Gemini 1.5 Pro, the researchers found that the grounding accuracy of VideoMind outperformed all competitors in challenging tasks involving videos with an average duration of 27 minutes. Notably, the team included two versions of VideoMind in the experiments: one with a smaller, 2 billion (2B) parameter model, and another with a bigger, 7 billion (7B) parameter model. The results showed that, even at the 2B size, VideoMind still yielded performance comparable with many of the other 7B-size models.

Prof. Chen said, "Humans switch among different thinking modes when understanding videos: breaking down tasks, identifying relevant moments, revisiting these to confirm details and synthesising their observations into coherent answers. The process is very efficient, with the human brain using only about 25 watts of power, which is about a million times lower than that of a supercomputer with equivalent computing power. Inspired by this, we designed the role-based workflow that allows AI to understand videos like humans, while leveraging the Chain-of-LoRA strategy to minimise the need for computing power and memory in this process."

AI is at the core of global technological development. The advancement of AI models is, however, constrained by insufficient computing power and excessive power consumption. Built upon a unified, open-source model, Qwen2-VL, and augmented with additional optimisation tools, the VideoMind framework has lowered the technological cost and the threshold for deployment, offering a feasible solution to the bottleneck of reducing power consumption in AI models.

Prof. Chen added, "VideoMind not only overcomes the performance limitations of AI models in video processing, but also serves as a modular, scalable and interpretable multimodal reasoning framework. We envision that it will expand the application of generative AI to various areas, such as intelligent surveillance, sports and entertainment video analysis, video search engines and more."

Source: Media OutReach Limited
Autonomous Inc. Introduces Brainy: The Petaflop AI Workstation, Empowering a New Era of Deep Learning

RIVERSIDE, Calif., May 15, 2025 /PRNewswire/ -- Autonomous Inc., a company dedicated to designing and engineering the future of work and putting it into the hands of innovators, today announced Brainy, a revolutionary workstation designed to accelerate deep learning and machine learning workflows. Brainy delivers unprecedented AI performance directly to the desktop, empowering researchers, developers, and AI startups to push the boundaries of artificial intelligence.

"Brainy is more than just a machine; it's a partner in innovation," stated Brody Slade, Autonomous' Product Manager. "We're putting petaflop-level AI power within reach, eliminating the bottlenecks and costs associated with cloud-based solutions and truly changing the way AI development is done. It empowers you to not just think, but to think with your machine."

Unleashing Unprecedented Power and Scalability

Brainy is engineered to handle the most demanding AI workloads, powered by NVIDIA RTX 4090 GPUs. The system delivers over a petaflop of AI performance and seamlessly scales from 2 to 8 GPUs. This enables users to:
Prototype, fine-tune, and deploy massive AI models with up to 70 billion parameters.
Achieve High-Performance Computing (HPC)-class results from the desktop.
Optimize for both training and inference, supporting full forward and backward passes with autodiff.
Brainy excels at fine-tuning Large Language Models (LLMs), computer vision tasks, and a wide range of deep learning models.

The Brainy Advantage: Speed, Efficiency, and Cost-Effectiveness

Autonomous Inc. developed Brainy to address the limitations of costly cloud-based GPU solutions. While cloud GPUs offer flexibility, the pay-as-you-go model can become prohibitively expensive, especially for sustained, large-scale projects. Brainy offers a compelling alternative with several key advantages:
Petaflop Performance at Your Fingertips: Brainy provides dedicated, on-premise AI power, eliminating the latency and constraints of cloud computing.
Enhanced Data Privacy and Security: By processing data locally, Brainy ensures sensitive information remains within your organization's control, eliminating the risk of data breaches and compliance issues associated with cloud storage.
Cost Savings: By owning the hardware, users can achieve significant cost savings compared to cloud GPU rentals. Autonomous Inc. estimates that Brainy can save users thousands of dollars within the first year, compared to services like RunPod.
Uninterrupted Workflow: Brainy eliminates common cloud-based frustrations, such as queuing times, spot instance shutdowns, and internet lag, ensuring pure, uninterrupted AI power.
Scalability and Flexibility: Users can start with a configuration as small as 2x RTX 4090s and scale up to 8 GPUs as their needs evolve. When projects are ready for broader deployment, they can be scaled to the cloud or a data center with zero reconfiguration.
Accelerating AI Innovation: Brainy empowers AI startups, developers, and researchers with industry-standard AI frameworks like ONNX, PyTorch, and TensorFlow, seamlessly integrated with NVIDIA's CUDA, cuDNN, and TensorRT. This enables rapid development of AI, ML, and scientific computing tasks (a minimal sketch of such an on-device training step appears after this release).

Autonomous Inc. is a proud member of the NVIDIA Inception Program, designed to support innovative startups like Autonomous, specifically in the development of Brainy. This membership grants the company access to a wealth of valuable resources and expertise from NVIDIA, empowering the team at every stage of the journey in bringing Brainy to market, from initial development to market launch and beyond. Through the program, Autonomous gains access to benefits like free credits for NVIDIA's self-paced courses and discounts on instructor-led workshops. These training opportunities enable developers to sharpen their skills in key areas such as generative AI, graphics, and simulation, all of which are crucial for optimizing Brainy's performance. Furthermore, the NVIDIA Inception Program provides Autonomous Inc. with exclusive offers, allowing the company to deliver competitive pricing to its customers for Brainy.

Startups Thrive with Brainy's AI Applications

Brainy supercharges AI data training projects with full data ownership. It accelerates finance forecasting models, healthcare patient data analysis, personalized learning in education, logistics route optimization, and more. Using top-notch hardware and frameworks, this supercomputer provides cost-effective, secure AI solutions across industries directly from the desktop.

Availability

Brainy is available for order, making enterprise-grade AI performance accessible to startups and innovators. For detailed specifications, configurations, and pricing, please visit https://www.autonomous.ai/robots/brainy.

About Autonomous Inc.

Autonomous Inc. designs and engineers the future of work, empowering individuals who refuse to settle and relentlessly pursue innovation. By continually exploring and integrating advanced technologies, the company's goal is to create an ultimate smart office, including 3D-printed ergonomic chairs, configurable smart desks, and solar-powered work pods, as well as enabling businesses to create the future they envision with a smart workforce using robots and AI.
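The release names PyTorch with NVIDIA's CUDA stack and highlights full forward and backward passes with autodiff; the snippet below is a minimal, generic sketch of that workflow on whatever local GPU is present. The model size, batch shape and hyperparameters are illustrative assumptions, not Brainy specifications.

```python
# Minimal, generic sketch (illustrative assumptions, not Brainy specifications)
# of the workflow the release describes: a full forward and backward pass with
# PyTorch autodiff on a local CUDA GPU, falling back to CPU if none is present.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Visible GPUs: {torch.cuda.device_count()}")  # e.g. 2 to 8 on a multi-GPU workstation

model = nn.Sequential(nn.Linear(4096, 4096), nn.GELU(), nn.Linear(4096, 4096)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 4096, device=device)       # dummy input batch
target = torch.randn(32, 4096, device=device)  # dummy regression target

output = model(x)                               # forward pass
loss = nn.functional.mse_loss(output, target)
loss.backward()                                 # backward pass via autodiff
optimizer.step()
optimizer.zero_grad()
print(f"training-step loss: {loss.item():.4f}")
```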

Source: PR Newswire
China Report ASEAN: Lighting Up Lives