BEIJING, Oct. 13, 2023 /PRNewswire/ -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced that, in response to the development needs of the industry, it has developed a personalized video recommendation system based on deep learning, providing new ideas and directions for research on deep learning-based personalized video recommendation.

The underlying technical logic of WiMi's deep learning-based personalized video recommendation system mainly includes the construction of neural network models, feature representation learning, model training and optimization, fusion of contextual information, real-time recommendation and online learning, and the interpretability of recommendation results. Together, these techniques improve the accuracy, degree of personalization, and user experience of the recommendation algorithm and provide users with better video recommendation services:

Neural network models: At the heart of deep learning are neural network models. In personalized video recommendation, different types of neural network models are used to model the association between the user and the video, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Long Short-Term Memory networks (LSTM). Through multiple layers of neural units, these models perform nonlinear transformations and feature extraction to better capture the hidden associations between users and video content.

Feature representation learning: In a personalized video recommendation system, effective feature representations are critical to model performance. While traditional recommendation algorithms rely largely on hand-crafted features, deep learning-based approaches can learn feature representations automatically. By introducing structures such as embedding layers or convolutional layers, user and video features are transformed into low-dimensional dense vectors that better capture their interactions.

Model training and optimization: Deep learning models are usually trained with optimization algorithms such as gradient descent to minimize prediction error. In personalized video recommendation, optimizers such as stochastic gradient descent (SGD) or Adam are used to update model parameters. To improve generalization and prevent overfitting, regularization techniques are applied, and batch or mini-batch training is used to accelerate the training process.

Fusion of contextual information: A user's interests and preferences may be influenced by contextual information such as time, location, and device. To make recommendations more accurate, contextual information is incorporated into the deep learning models, and an attention mechanism dynamically adjusts the weights between user and video features to reflect the current context.

Real-time recommendation and online learning: Personalized video recommendation needs to respond to user requests in real time and make recommendations based on real-time behavioral data. Through online learning, the model is continuously updated and optimized to adapt to changes in user behavior.
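To make the building blocks above concrete, the following is a minimal, hypothetical sketch of how such a recommender could be assembled in PyTorch. It is not WiMi's implementation: the two-tower layout, the hour-of-day context feature, and all names and sizes are illustrative assumptions. The sketch shows embedding-based feature representation learning, simple context fusion, Adam with weight decay as a regularizer, and a mini-batch training step that can equally be applied to small batches of fresh interactions for incremental (online) updates.

```python
# Illustrative two-tower recommender (assumed design, not WiMi's actual system).
import torch
import torch.nn as nn

class TwoTowerRecommender(nn.Module):
    def __init__(self, n_users, n_videos, dim=32, n_hours=24):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)    # learned dense user representation
        self.video_emb = nn.Embedding(n_videos, dim)  # learned dense video representation
        self.hour_emb = nn.Embedding(n_hours, dim)    # simple contextual feature (hour of day)

    def forward(self, user_ids, video_ids, hours):
        u = self.user_emb(user_ids) + self.hour_emb(hours)  # fuse context into the user tower
        v = self.video_emb(video_ids)
        return (u * v).sum(dim=1)                           # affinity logit per (user, video) pair

model = TwoTowerRecommender(n_users=10_000, n_videos=50_000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)  # L2 regularization
loss_fn = nn.BCEWithLogitsLoss()

def train_step(batch):
    """One mini-batch update on (user_ids, video_ids, hours, 0/1 click labels)."""
    user_ids, video_ids, hours, labels = batch
    optimizer.zero_grad()
    loss = loss_fn(model(user_ids, video_ids, hours), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Offline training on historical interactions; the same step applied to small
# batches of the newest events serves as an incremental (online) update.
historical = [(torch.randint(0, 10_000, (256,)),
               torch.randint(0, 50_000, (256,)),
               torch.randint(0, 24, (256,)),
               torch.randint(0, 2, (256,)).float()) for _ in range(10)]
for batch in historical:
    train_step(batch)
```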
Online learning is achieved through techniques such as incremental training or incremental updating, so that the model can obtain the latest user behavioral data in time and be adjusted and optimized in real time.

Recommendation result interpretability: In personalized video recommendation, the interpretability of recommendation results matters greatly to users. To increase it, techniques such as attention mechanisms and model-explanation methods are used to present the basis and reasoning behind each recommendation to the user. This improves users' understanding and acceptance of the recommendations and enhances their trust and satisfaction.

In practical applications of WiMi's deep learning-based personalized video recommendation system, the core is the recommendation module, which uses deep learning models to model user interests and generate personalized video recommendation results. Other techniques and algorithms, such as content-based recommendation and social network analysis, can be combined with it to further improve the accuracy and diversity of personalized video recommendations. In addition, user feedback can be used to continuously optimize and update the recommendation model to meet users' changing interests and needs.

WiMi's deep learning-based personalized video recommendation technology addresses information overload, serves personalized user needs, improves the user experience, and promotes market development in the online video industry. With the continuous progress of artificial intelligence and deep learning, personalized video recommendation can also be combined with other emerging technologies to open up more application directions. For example, combined with reinforcement learning, the recommendation system can further optimize its recommendation strategy through interactive learning with users; combined with virtual reality and augmented reality, it can provide a more immersive video viewing experience.

Personalized video recommendation can also be combined with social media and user participation to provide a richer user experience. By analyzing users' social network information and interactive behaviors, the recommendation system can recommend videos related to their interests and promote communication and sharing among users. This model of social interaction and user participation can increase user stickiness and loyalty and drive users to generate more content and spread word of mouth.

About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.
Safe Harbor Statements
This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services. Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.
BEIJING, Oct. 11, 2023 /PRNewswire/ -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced that it has developed the holographic reconstruction network (HRNet), an important technological breakthrough in the field of hologram reconstruction. Holography has always played an important role in scientific research, medical imaging, industrial inspection and other fields. However, traditional hologram reconstruction methods face many challenges, such as the need for a priori knowledge, manual operation and complex post-processing steps. To address these problems, WiMi's HRNet, based on deep learning and holographic image processing, performs end-to-end hologram reconstruction without a priori knowledge or complex post-processing steps. The technology breaks through the limitations of traditional holographic reconstruction methods, achieving noise-free image reconstruction and phase imaging, which brings great potential to image processing, computer vision and other related fields.

Holography is a technique that records the complete wavefront information of an object, including amplitude and phase. Conventional holographic reconstruction methods usually require a priori knowledge, such as object distance, angle of incidence, and wavelength, and require additional filtering operations to remove unwanted image information. In addition, phase imaging and the processing of multi-section objects place higher demands on conventional methods. WiMi's HRNet overcomes these challenges with an end-to-end deep learning strategy, bringing an innovative solution to holographic reconstruction. Some of the key aspects of the technology are described below:

End-to-end learning: HRNet learns and reconstructs directly from the original holograms. The original hologram serves as input to the network, with no prior knowledge or additional preprocessing steps required.

Deep residual networks: The network architecture employs deep residual learning, adding identity mappings between network layers to simplify training and speed up computation. This approach helps to mitigate vanishing and exploding gradients in deep neural networks.

Noise-free reconstruction: HRNet outputs noise-free reconstruction results, eliminating the problems caused by noise and distortion in traditional methods and improving the quality and accuracy of reconstructed images.

Phase imaging: HRNet can handle not only the reconstruction of amplitude objects but also phase imaging. Conventional phase imaging requires compensation for phase aberration and additional unwrapping steps to recover the true object thickness; HRNet reconstructs phase information directly from holograms by learning the processing steps of phase imaging.

Multi-section object processing: HRNet can also handle the reconstruction of multi-section objects, extending the degrees of freedom of its applications. It is capable of generating full-focus images and depth maps, meeting the need for multi-dimensional data in many applications.
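The description above maps naturally onto a small convolutional network with residual (identity) skip connections. The sketch below is purely illustrative and assumes nothing about HRNet's actual layer counts, channel widths, or training data; it simply shows an end-to-end network that takes a raw single-channel hologram and emits two output channels interpreted as amplitude and phase.

```python
# Illustrative residual reconstruction network (assumed architecture, not HRNet's actual design).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # identity mapping eases training of deep stacks

class HologramReconstructor(nn.Module):
    def __init__(self, width=64, depth=8):
        super().__init__()
        self.head = nn.Conv2d(1, width, kernel_size=3, padding=1)       # raw hologram in
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.tail = nn.Conv2d(width, 2, kernel_size=3, padding=1)       # amplitude + phase out

    def forward(self, hologram):
        return self.tail(self.blocks(torch.relu(self.head(hologram))))

net = HologramReconstructor()
hologram = torch.randn(1, 1, 256, 256)          # stand-in for a recorded hologram
amplitude, phase = net(hologram).unbind(dim=1)  # two reconstruction channels
```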
WiMi's HRNet utilizes deep, end-to-end learning to achieve noise-free image reconstruction, learning an internal representation of holographic reconstruction that handles both phase imaging and multi-section objects. This data-driven approach eliminates the reliance on a priori knowledge and additional processing steps, providing a new and effective framework for digital holographic reconstruction.

The core of WiMi's HRNet is to use the power of deep learning to reconstruct holograms without any a priori knowledge or tedious pre-processing steps. The original hologram serves as the input to the network, which automatically learns the necessary processing steps of holographic reconstruction and, through backpropagation, establishes a pixel-level mapping from the original hologram to the reconstruction. This data-driven approach makes the reconstruction process more efficient and accurate.

In HRNet, WiMi's research team designed the network architecture with a deep residual learning approach, adding identity mappings between network layers to simplify training and speed up computation. This moderately deep architecture provides sufficient fitting capacity while avoiding excessive computational load, striking a balance between performance and training cost.

HRNet outputs noise-free reconstruction results, which improves the quality and accuracy of the reconstructed images. This is important for many applications, especially fields such as medical imaging, industrial inspection, and scientific research where high-quality images are required. Noise and distortion are often among the main causes of degraded reconstruction quality in traditional methods; HRNet eliminates these problems and provides noise-free results through its deep learning approach.

In addition to reconstructing amplitude objects, WiMi's HRNet can handle phase imaging and multi-section objects, further extending its range of application. While traditional phase imaging methods require compensation for phase aberration and an unwrapping step, HRNet reconstructs phase information directly from holograms by learning the processing steps of phase imaging, providing a simpler and more efficient solution. For multi-section objects, HRNet can generate full-focus images and depth maps to meet the need for multi-dimensional data in many applications. This matters for 3D image reconstruction in the medical field, depth perception in automated driving, and surface topography analysis in industrial inspection, among others, bringing greater flexibility and accuracy to these applications.

In addition, WiMi hopes to promote the integration of holographic technology with other fields through the development of HRNet. For example, in autonomous driving, HRNet can provide more accurate data for depth perception and environment understanding, improving driving safety and intelligence. In AR and VR, HRNet can provide more realistic and lifelike image reconstruction for immersive experiences, enhancing user experience and interactivity.
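Under the same assumptions as the sketch above, end-to-end training could look like the following: pairs of raw holograms and reference reconstructions drive a pixel-wise loss, so the network learns the mapping from data alone, with no hand-supplied distance, wavelength, or filtering parameters. The loss choice and tensor shapes are assumptions made for illustration, not details of WiMi's training procedure.

```python
# Hedged training sketch; `net` is any hologram-to-(amplitude, phase) network,
# e.g. the illustrative HologramReconstructor above.
import torch
import torch.nn.functional as F

def training_step(net, optimizer, holograms, reference):
    """holograms: (B, 1, H, W) raw recordings; reference: (B, 2, H, W) amplitude+phase targets."""
    optimizer.zero_grad()
    prediction = net(holograms)
    loss = F.mse_loss(prediction, reference)  # pixel-level supervision, no physics priors
    loss.backward()
    optimizer.step()
    return loss.item()
```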
WiMi will continue its research and development on HRNet to further enhance its performance and functionality. The Company will keep improving the network architecture and training algorithms so that HRNet can handle more complex scenes and objects, and will explore integration with other cutting-edge technologies, such as artificial intelligence, machine learning and big data analysis, to further enhance the capabilities and applications of hologram reconstruction.

As a cutting-edge technology, holography is changing our perception of images and vision. WiMi has been committed to the development of holographic technology, and with the continuous development and application of deep learning technologies such as HRNet, holographic technology will show greater potential and influence in various fields. The noise-free reconstruction and phase imaging capabilities of holograms will bring more accurate, high-quality data and information to medicine, industry, science and other fields. This will promote innovation and development across industries, advance technological and social progress, and bring more value and opportunities to society.

About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements
This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services.
Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.

Contacts
WIMI Hologram Cloud Inc.
Email: pr@wimiar.com
TEL: 010-53384913

ICR, LLC
Robin Yang
Tel: +1 (646) 975-9495
Email: wimi@icrinc.com
BEIJING, Oct. 9, 2023 /PRNewswire/ -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced that WiMi has researched an AIGC intelligent interactive interface generation system based on big data, a system that utilizes large-scale datasets and artificial intelligence algorithms to automatically generate intelligent interactive interfaces.

Big data plays a crucial role in this system. Big data does not refer only to a large amount of data; more importantly, it contains multiple types of data and must be processed and analyzed by complex algorithms. First, the system collects a large amount of user data, including but not limited to usage behavior, search history, and preferences. The system then feeds these data to AI models for training, from which the models learn the user's preferred elements and design styles, and the system automatically generates the corresponding code and design according to the needs and ideas provided by the user, in order to quickly build a high-quality intelligent interactive interface. Finally, based on user feedback and behavioral data, the system continuously improves its algorithms, enhances the quality of the generated content, and updates the algorithm model to better meet user needs.

This is a complex system whose core modules include a data collection and processing module, an AI model training module, a code generation and design module, an optimization and update module, and an interface display and testing module. These modules work together to deliver the function of the whole system, helping designers and developers quickly build high-quality intelligent interactive interfaces and improve work efficiency and productivity.

Data collection and processing: Collects user data from various data sources, such as websites, applications and social media, and processes and filters this data. Through data collection and processing, the system can better understand the user's preferences and needs and provide a basis for the subsequent generation of intelligent interactive interfaces.

AI model training: Machine learning algorithms are used to analyze and model the previously collected and processed user data to train an AI model. During training, the system learns the user's preferred elements, design styles and interaction methods. Once trained, the AI model can automatically generate appropriate code and design based on the requirements and ideas provided by the user.

Code generation and design: This module is one of the core modules of the whole system. When the user provides data and ideas, the system automatically generates the corresponding code and design using the AI model, taking into account the user's actual needs, device compatibility, response speed and many other factors to ensure that the generated code and design meet the user's needs.

Model optimization and update: By collecting user feedback and behavioral data, the system continuously improves its algorithms and enhances the quality of generated content to achieve a better user experience. The system also updates the AI model to adapt to changing user needs and the evolving technical environment.
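As a rough illustration of how these modules could be wired together (the feature names, labels, and generated config schema below are invented for this example and are not WiMi's system), a tiny end-to-end pipeline might collect aggregated user signals, train a small model on them, and emit a generated interface description:

```python
# Purely illustrative pipeline: data collection -> model training -> interface generation.
import json
from sklearn.tree import DecisionTreeClassifier

# 1. Data collection and processing: each row is one user's aggregated signals
#    [avg_session_minutes, dark_mode_ratio, clicks_on_cards, clicks_on_lists]
X = [[12, 0.9, 40, 5],
     [30, 0.1, 3, 55],
     [8,  0.8, 25, 10]]
y = ["card_dark", "list_light", "card_dark"]   # preferred layout/theme observed per user

# 2. AI model training: learn a mapping from behavior to a design style
model = DecisionTreeClassifier().fit(X, y)

# 3. Code/design generation: turn the predicted style into an interface config
def generate_interface(user_features):
    style = model.predict([user_features])[0]
    layout, theme = style.split("_")
    return json.dumps({
        "theme": theme,
        "layout": layout,
        "components": ["header", "search_bar", "recommendation_feed"],
    }, indent=2)

# 4. Feedback on the displayed interface would feed back into steps 1-2.
print(generate_interface([15, 0.85, 33, 8]))
```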
Interface display and testing: This module is responsible for displaying the generated intelligent interactive interface to the user and for testing and evaluating it. In this process, the system considers aspects such as user experience, interaction effect and performance to ensure that the generated interface meets the user's expectations and requirements, and optimizes and adjusts the interface to resolve potential problems and defects.

The big-data-based AIGC intelligent interaction interface generation technology researched by WiMi is a technological innovation with great potential. It uses artificial intelligence and big data analysis to rapidly construct high-quality intelligent interaction interfaces and to form smarter modes of human-computer interaction. It has broad application prospects and can be applied to various types of products and services, including websites, applications, games and other fields. For example, it can be used in virtual reality and augmented reality to enhance user immersion and experience by automatically generating realistic scenes and models. It can also be applied in Internet of Things (IoT) fields such as the smart home, enabling remote control and management of home devices through generated intelligent control panels. In the future, with the continuous development and improvement of the technology, big-data-based AIGC intelligent interactive interface generation is expected to become an important tool in digital transformation.

About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements
This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases and other written materials, and in oral statements made by its officers, directors or employees to third parties.
Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services. Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.
BEIJING, Oct. 6, 2023 /PRNewswire/ -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced that it has developed digital twin modelling technology based on multiple data sources to build more comprehensive, accurate, and reliable digital twin models. The technology integrates data from different sources into a unified model. In digital twin modelling, multiple data source integration helps obtain more comprehensive and accurate data, thus improving the precision and reliability of the digital twin model.

The key modules of the integrated digital twin modelling system based on multiple data sources include data acquisition and pre-processing, data integration and consolidation, model development and training, model deployment and real-time updating, and visualization and analysis. These modules are interdependent, interact with each other, and collectively constitute the key aspects of the integrated digital twin modelling technology.

First, the system collects data from multiple data sources and pre-processes and cleans them to ensure data quality and consistency, including data cleansing, data conversion, data merging and other operations. The data from different sources are then integrated into a unified data model; this may require operations such as data mapping, data transformation, and data integration to ensure that data from different sources can be effectively correlated and analyzed. Model development for digital twin modelling is then carried out, and the integrated data are used for model training and optimization by selecting appropriate modelling algorithms, defining the structure and parameters of the model, and using the training data to train and validate it. Next, the trained model is deployed to a real-time environment, where it receives and processes data from the different sources in real time; this may involve model deployment, real-time data transmission, and real-time model updating to ensure that the digital twin reflects real-world changes as they happen. Finally, the visualization and analysis module is responsible for visualizing and analyzing the results of the digital twin model so that users can understand and use the model's output, providing visualization tools and analytical algorithms to support users' understanding and decision-making.

With access to more data sources and more complex integration requirements, future digital twin modelling techniques may need to deal with multi-modal data, including images, sound, and video. Multiple data source integration must be able to process and analyze this multi-modal data to model and predict real-world behaviour more fully. Future digital twin modelling technologies are also likely to be more automated and intelligent: by combining machine learning, artificial intelligence, and automation technologies, the data integration and modelling process can be automated to improve the accuracy and efficiency of the models.
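A small, hedged sketch of the acquisition and integration steps described above (the sensor names, sampling times, and matching tolerance are toy assumptions, not WiMi's data): two independently clocked data streams are cleaned, time-aligned, and merged into one unified table that a digital twin model could then be trained on.

```python
# Illustrative multi-source integration with pandas (assumed schema and data).
import pandas as pd

# Data acquisition: readings from two independent sources with different clocks
vibration = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-10-06 08:00:00", "2023-10-06 08:00:10", "2023-10-06 08:00:20"]),
    "vibration_mm_s": [2.1, 2.4, None],          # one missing reading
})
temperature = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-10-06 08:00:02", "2023-10-06 08:00:11", "2023-10-06 08:00:21"]),
    "temp_c": [61.0, 62.5, 63.1],
})

# Pre-processing / cleaning: drop incomplete rows, keep both streams time-sorted
vibration = vibration.dropna().sort_values("timestamp")
temperature = temperature.sort_values("timestamp")

# Integration: align the nearest temperature reading to each vibration sample,
# yielding one unified record per time step for the digital twin model
unified = pd.merge_asof(vibration, temperature, on="timestamp",
                        direction="nearest", tolerance=pd.Timedelta("5s"))
print(unified)
```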
There will also be more focus on real-time data processing and real-time updating of models to reflect changes in the real world more accurately, as well as on cross-domain applications and integration between different domains to achieve a more comprehensive and holistic digital twin model; these are the future trends of digital twin modelling technology based on multiple data sources. The rapid development of big data, cloud computing, the Internet of Things and other technologies has significantly improved data acquisition, storage and processing capabilities, which provides the technical basis and support for realizing digital twin modelling with multiple data sources.

The multiple-data-source digital twin modelling technology researched by WiMi has broad application prospects in many fields, such as the industrial Internet, smart cities, and virtual reality. With the continuous progress of data acquisition and processing technology, as well as the increasing demand for intelligent and sustainable development, this technology will be further developed and refined.

About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements
This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services.
Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.
The international video streaming platform will be rolling out a series of highly anticipated Chinese and local dramas as well as CHUANG ASIA Thailand in the Indonesian and Thai markets in 2024.

JAKARTA, Indonesia and BANGKOK, Oct. 2, 2023 /PRNewswire/ -- WeTV, the international streaming service launched by Tencent Video, introduced an exciting line-up of new shows in Thailand and Indonesia at its flagship "WeTV Always More 2024" event. In 2024, WeTV will bring new and engaging shows to captivate its audiences in Thailand and Indonesia. Alongside the announcement, the company also introduced Chinese megastar Zhao Lusi as WeTV Global Brand Ambassador.

[Photos: WeTV CHUANG ASIA Thailand; WeTV Indonesia Always More 2024; Zhao Lusi, WeTV Global Brand Ambassador]

WeTV has been providing content from China, Korea, and Japan to local markets for several years. WeTV is committed to producing additional local content in Indonesian and Thai to cater to fans' growing appetite for diverse, high-quality shows, as well as to support local talent and increase their presence in the global entertainment market.

Kaichen LI, Head of WeTV, stated, "WeTV is fully committed to delivering the finest entertainment experience to our audiences. Our unyielding dedication drives us to work closely with our partners, as we strive to offer premium content that resonates with our viewers. We remain excited about the opportunities ahead and look forward to continuing our journey of bringing high-quality entertainment to our audiences."

For Thailand, WeTV announced the launch of CHUANG ASIA's Thai edition with Jackson Wang as Lead Mentor. Asia's pioneering idol survival show, originated by Tencent Video, will search for the first idol girl group in Thailand to debut internationally under RYCE Entertainment, co-founded by Jackson Wang and Daryl K. WeTV also announced a collaboration with the show's esteemed co-investors, namely one31, GMMTV, Have Fun Media, RYCE Entertainment, and 411 Entertainment. This partnership aims to elevate the show's production quality and facilitate the development of world-class capabilities for the upcoming debut of talented individuals. CHUANG ASIA will start airing in February 2024, available globally on WeTV and broadcast on the one31 channel in Thailand and on selected TV channels in Southeast Asia.

In addition to the lighthearted romantic comedy series Intern in My Heart created by BRAVO! Studios, a subsidiary of GMM Studios International, which will be available on WeTV before the end of 2023, WeTV Thailand also unveiled part of its 2024 content line-up, including the teen comedy Knock Knock, Boys!, featuring Best-Vittawin, Seng-Wichai, Jaonine-Jiraphat and Nokia-Chinnawat. It also introduced the romantic comedy Monster Next Door, based on a famous internet novel and helmed by the acclaimed Thai director Lit-Phadung Samajarn, and announced Fake Po, its first ORIGINAL series, produced by Mandeework and based on an internet novel that has been read by millions of fans.

Meanwhile, in Indonesia, WeTV announced six exclusive original productions from WeTV Indonesia: The Two Faces of Arjuna (Dua Wajah Arjuna), Hand in Marriage (Kawin Tangan), Don't Blame Me for Cheating (Jangan Salahkan Aku Selingkuh), Should Get Married (Harus Kawin), The Death of Love (Cinta Mati), and Playing with Fire (Main Api). In short, WeTV Indonesia is gearing up to present a diverse entertainment experience for viewers.
WeTV also announced twelve highly anticipated Chinese titles, including As Beautiful As You, Love has Fireworks, The Last Immortal, and The Legend of Shen Li. As a leading streaming service for Asian content, WeTV continues to strengthen its position as the go-to platform for Chinese dramas and movies with its vast library of high-quality content. WeTV is available on desktop and can also be downloaded as a mobile app on Android (Google Play) or iOS (App Store) devices.

About WeTV
WeTV is an Asian streaming service offering premier video-on-demand (VOD) and over-the-top (OTT) local content. The streaming service provides content from around the region, including selected Chinese, Indonesian, Korean, Malaysian, Philippine and Thai series and movies. Operating as a freemium service, viewers can access some content without a paid subscription and premium content for a small fee. Basic features also include free subtitles. WeTV is available in the browser at wetv.vip, or via the WeTV application, which can be downloaded from the App Store for iOS users and the Google Play Store for Android users.
BEIJING, Oct. 2, 2023 /PRNewswire/ -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced that it has applied multi-level simulation to digital twin modeling and is actively exploring multi-level simulation digital twin modeling technology.

Multi-level simulation digital twin modeling technology abstracts and models the features and behaviors of physical entities at different levels, and the modeling at each level can include different details and precision to meet the needs of different application scenarios, forming a hierarchical digital model. The different levels of the digital twin model are interconnected and interact with each other to achieve comprehensive modeling and simulation of the physical system, so that the digital twin model better reflects the complexity and dynamics of entities and provides more accurate and comprehensive information for decision makers.

In the multi-level simulation digital twin modeling technology studied by WiMi, the key modules include data acquisition and processing, model building and calibration, and simulation and optimization, which work together to build a multi-level digital twin system. The data acquisition and processing module is mainly responsible for collecting sensor data from physical entities and for processing and analyzing the data to extract useful information. The model building and calibration module is mainly responsible for building digital models based on the characteristics and behaviors of physical entities and for calibrating and optimizing them through data interaction with the entities, while the simulation module is responsible for simulating the operation of the entities and making predictions and optimizations based on the simulation results. The main modules are described below.

Data collection and integration: This module is responsible for collecting data from the actual system and integrating it with the digital twin model, including sensor data acquisition, data pre-processing, data cleaning and data alignment. Through data acquisition and integration, the digital twin model can be synchronized and updated with the actual system.

Multi-level model coupling: This module connects digital twin models at different levels and lets them interact, which can be realized through data transfer, parameter transfer, state transfer and so on. Through multi-level model coupling, information and feedback can be passed between models at different levels to realize overall simulation and analysis of the system (a minimal illustrative sketch of such coupling follows the module descriptions below).

Simulation engine: The simulation engine is the core component that performs digital twin modeling and simulation. It carries out the model's simulation calculations, state updates, event processing and other tasks, and can select appropriate simulation algorithms and numerical computation methods according to the characteristics and requirements of the model to achieve efficient and accurate simulation results.

Visualization and interactive interfaces: This module presents the results of the digital twin model to the user in visual form, through diagrams, images, animations, virtual reality and so on, and provides the ability for the user to interact with the model. The interface allows the user to explore different scenarios, parameters and decisions in order to make more accurate predictions and optimizations.
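The multi-level model coupling mentioned above can be pictured with the following toy sketch. The two "levels" here, a coarse plant-level model and a fine component-level model, and their dynamics are invented purely for illustration; the point is the exchange of parameters and state between levels at every simulation step, not any actual WiMi model.

```python
# Toy illustration of coupling a coarse and a fine simulation level (assumed dynamics).

class ComponentModel:
    """Fine-grained level: tracks one pump's temperature in detail."""
    def __init__(self):
        self.temperature = 60.0

    def step(self, plant_load):
        # more load -> faster heating; simple first-order toy dynamics
        self.temperature += 0.5 * plant_load - 0.1 * (self.temperature - 25.0)
        return self.temperature

class PlantModel:
    """Coarse level: tracks overall load, throttled if a component overheats."""
    def __init__(self):
        self.load = 1.0

    def step(self, component_temperature):
        self.load = 0.5 if component_temperature > 80.0 else 1.0
        return self.load

plant, pump = PlantModel(), ComponentModel()
for tick in range(10):
    temp = pump.step(plant.load)   # parameter transfer: coarse -> fine
    load = plant.step(temp)        # state transfer: fine -> coarse
    print(f"t={tick}s load={load} pump_temp={temp:.1f}C")
```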
Analysis and decision support: This module analyzes the results of the digital twin model and provides decision support. Statistical analysis, trend analysis, and optimization algorithms applied to the model results provide insight into the state, performance, and change trends of the system. Based on these analysis results, decisions can be made, system operations can be optimized, and future behavior can be predicted.

Through multi-level simulation digital twin modeling technology, users can fully understand and analyze all levels and aspects of the physical system, enabling decision makers to conduct a comprehensive and accurate analysis and assessment of the entity's operation and to form better decisions and optimization strategies. At the same time, digital twin technology provides new means and methods for monitoring, maintaining, and improving entities, raising their operational efficiency and reliability. Multi-level simulation digital twin modeling technology can also fuse information from different data sources, including experimental data, sensor data and simulation data; such multi-source data integration can compensate for incomplete or inaccurate data and improve the reliability of the modeling.

Multi-level simulation digital twin modeling technology has a wide range of applications and prospects. It will play an important role in intelligent manufacturing, urban planning, healthcare, transportation and other fields, providing scientific support and decision-making references for the development of various industries.

About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements
This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements.
The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services. Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.