<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Man yin Mandy Wong</title>
    <description>The latest articles on DEV Community by Man yin Mandy Wong (@mandywong720).</description>
    <link>https://dev.to/mandywong720</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F923318%2F37597a8f-6e61-468f-b387-7bb75797d2cb.jpg</url>
      <title>DEV Community: Man yin Mandy Wong</title>
      <link>https://dev.to/mandywong720</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mandywong720"/>
    <language>en</language>
    <item>
      <title>Overview of Tencent Cloud COS Security Solution</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Thu, 17 Nov 2022 02:58:58 +0000</pubDate>
      <link>https://dev.to/mandywong720/overview-of-tencent-cloud-cos-security-solution-5d9n</link>
      <guid>https://dev.to/mandywong720/overview-of-tencent-cloud-cos-security-solution-5d9n</guid>
      <description>&lt;p&gt;Undoubtedly, all enterprises and individuals regard data security as a major consideration when choosing a cloud storage service.&lt;/p&gt;

&lt;p&gt;This article describes how to use the pre-event protection, mid-event monitoring, and post-event tracing methods provided by Tencent Cloud COS to ensure the security of your data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-event protection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Permission isolation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After migrating to the cloud, keep account security and well-scoped resource authorization in mind when building a comprehensive protection system. To manage cloud resources properly, avoid the following authorization risks:&lt;/p&gt;

&lt;p&gt;• Use of Tencent Cloud root accounts to perform routine operations.&lt;/p&gt;

&lt;p&gt;• Excessive permissions granted to sub-accounts.&lt;/p&gt;

&lt;p&gt;• No account permission management system and process.&lt;/p&gt;

&lt;p&gt;• Failure to regularly audit and manage user permissions and login information.&lt;/p&gt;

&lt;p&gt;• No access control over high-permission sub-accounts and risky operations.&lt;/p&gt;

&lt;p&gt;Tencent Cloud Access Management (CAM) provides safeguards at both the account level and the permission level to keep permissions clear, secure, and controllable.&lt;/p&gt;
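&lt;p&gt;To make the first two risks concrete, here is a minimal sketch of a least-privilege CAM policy granting a sub-account read-only access to a single bucket. The action names follow CAM's general "name/cos:*" syntax, and the bucket, APPID, and region below are placeholders, not values from this article:&lt;/p&gt;

```python
import json

# Illustrative only: a least-privilege CAM policy for a sub-account.
# Bucket name, APPID (1250000000), and region (ap-guangzhou) are placeholders.
policy = {
    "version": "2.0",
    "statement": [
        {
            "effect": "allow",
            # read-only object actions; add more only as the workload requires
            "action": ["name/cos:GetObject", "name/cos:HeadObject"],
            "resource": [
                "qcs::cos:ap-guangzhou:uid/1250000000:examplebucket-1250000000/*"
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

&lt;p&gt;Attaching narrowly scoped policies like this to sub-accounts avoids routine use of the root account and over-granted permissions.&lt;/p&gt;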

&lt;p&gt;&lt;strong&gt;2. Object lock&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For core sensitive data like financial transactions and medical images, the object lock feature can be used to prevent uploaded files from being deleted or altered.&lt;/p&gt;

&lt;p&gt;After this feature is configured, all data in the bucket will become read-only and cannot be overwritten or deleted during the configured validity period. This operation will take effect for all CAM users including root accounts and anonymous users.&lt;/p&gt;

&lt;p&gt;This feature is currently in beta. To try it out, submit a ticket to apply.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Data disaster recovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;COS provides diversified data management features such as data encryption, versioning, cross-region replication, and lifecycle.&lt;/p&gt;

&lt;p&gt;• Data encryption can guarantee the data read/write security for sensitive files.&lt;/p&gt;

&lt;p&gt;• Versioning and cross-region replication can be used to implement remote disaster recovery, guarantee data durability, and ensure that data can be recovered from the backup when deleted mistakenly or maliciously.&lt;/p&gt;

&lt;p&gt;• Lifecycle rules can be used to transition and delete data to reduce storage costs.&lt;/p&gt;

&lt;p&gt;Versioning can also protect files from being overwritten or deleted. After it is enabled, all writes to a file will create different versions of the file, and a delete marker will be added when the file is deleted. You can access data from any version and roll back data by specifying the version number, which eliminates the risks of accidental data deletion and overwriting.&lt;/p&gt;
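&lt;p&gt;The mechanics can be sketched with a toy model (this illustrates the semantics only; it is not the COS API):&lt;/p&gt;

```python
# A toy model of versioning: every write creates a new version, a delete
# only appends a delete marker, and any older version remains readable
# by its version ID.
class VersionedBucket:
    def __init__(self):
        self.versions = {}  # key -> list of (version_id, data or marker)

    def put(self, key, data):
        history = self.versions.setdefault(key, [])
        version_id = len(history)  # monotonically increasing toy version ID
        history.append((version_id, data))
        return version_id

    def delete(self, key):
        self.put(key, "DELETE_MARKER")  # deletion only appends a marker

    def get(self, key, version_id=None):
        history = self.versions[key]
        _, data = history[-1] if version_id is None else history[version_id]
        if data == "DELETE_MARKER":
            raise KeyError(key)  # latest version is a delete marker
        return data

bucket = VersionedBucket()
v0 = bucket.put("report.csv", "v1 contents")
bucket.put("report.csv", "v2 contents")
bucket.delete("report.csv")
# The "deleted" object is still recoverable via an explicit version ID:
print(bucket.get("report.csv", version_id=v0))  # v1 contents
```

&lt;p&gt;A plain GET now fails because the latest version is a delete marker, yet nothing is lost: rolling back is just reading an earlier version.&lt;/p&gt;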

&lt;p&gt;Cross-region replication helps you replicate all incremental files to IDCs in other regions over a dedicated tunnel to implement remote disaster recovery. Data deleted from the primary bucket can be recovered from the backup bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mid-event monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;COS offers an event notification feature based on Serverless Cloud Function (SCF).&lt;/p&gt;

&lt;p&gt;For risky operations such as "DeleteObject", you can configure SCF functions to receive notifications by email or SMS as soon as such operations are performed. This helps you promptly detect and respond to risks.&lt;/p&gt;
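&lt;p&gt;A minimal SCF handler along these lines might look as follows. The event layout and event names mirror the general shape of COS triggers but should be treated as assumptions and checked against the current documentation; the alert call is a placeholder:&lt;/p&gt;

```python
# Sketch of an SCF handler that flags risky object deletions.
# The "Records" layout and event-name strings are assumptions modeled on
# COS trigger events; verify them against the current SCF documentation.
RISKY_EVENTS = ("cos:ObjectRemove:Delete", "cos:ObjectRemove:DeleteMarkerCreated")

def send_alert(message):
    print("ALERT:", message)  # placeholder: wire this to your SMS/email channel

def main_handler(event, context=None):
    alerts = []
    for record in event.get("Records", []):
        name = record.get("event", {}).get("eventName", "")
        key = record.get("cos", {}).get("cosObject", {}).get("key", "?")
        if name in RISKY_EVENTS:
            alerts.append(f"{name} on {key}")
            send_alert(f"{name} on {key}")
    return alerts

sample = {"Records": [{"event": {"eventName": "cos:ObjectRemove:Delete"},
                       "cos": {"cosObject": {"key": "backups/db.dump"}}}]}
print(main_handler(sample))
```

&lt;p&gt;Binding such a function to the bucket's delete events turns a silent risky operation into an immediate notification.&lt;/p&gt;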

&lt;p&gt;&lt;strong&gt;Post-event tracing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;COS allows you to easily monitor and audit logs in various ways.&lt;/p&gt;

&lt;p&gt;Bucket access operations such as file deletion (DeleteObject), file overwriting (PutObjectCopy), and file permission modification (PutObjectACL) can be traced through the bucket access log feature, and risky operations such as deletion can also be traced and verified.&lt;/p&gt;

&lt;p&gt;Bucket configuration and management operations such as bucket deletion (DeleteBucket), bucket ACL modification (PutBucketACL), and bucket policy modification (PutBucketPolicy) can be traced through CloudAudit logs, and permission configurations and modifications can also be traced and verified.&lt;/p&gt;

&lt;p&gt;Read more at: &lt;a href="https://www.tencentcloud.com/dynamic/blogs/sample-article/100389" rel="noopener noreferrer"&gt;https://www.tencentcloud.com/dynamic/blogs/sample-article/100389&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>Tencent Cloud Native HDFS - Cornerstone of Cloud Big Data Storage-Computing Separation</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Wed, 16 Nov 2022 02:57:03 +0000</pubDate>
      <link>https://dev.to/mandywong720/tencent-cloud-native-hdfs-cornerstone-of-cloud-big-data-storage-computing-separation-3l40</link>
      <guid>https://dev.to/mandywong720/tencent-cloud-native-hdfs-cornerstone-of-cloud-big-data-storage-computing-separation-3l40</guid>
      <description>&lt;p&gt;Object storage is a widely used cloud-based unstructured data storage solution. An increasing amount of unstructured data is aggregated in the data lakes of object storage services, creating a demand for big data analysis. However, for storage systems oriented to such analysis, HDFS APIs are the de facto standard, and HDFS is the storage cornerstone of big data ecosystems.&lt;/p&gt;

&lt;p&gt;Native object storage APIs are not compatible with HDFS and therefore cannot be used directly. To support big data scenarios with storage-computing separation, object storage usually provides a simulation layer that translates HDFS semantics into object storage semantics. Typical implementations include S3N and COSN. However, since such implementations don't genuinely implement file system APIs, the flat namespace of object storage cannot provide hierarchical directories. They are extremely inefficient for operations such as RENAME, which actually copies every associated object matching the prefix, and they suffer high latency in scenarios with frequent metadata operations like LIST and HEAD. In addition, some object storage systems lack strong consistency semantics and thus cannot guarantee read-after-write consistency, causing errors in the upper-layer big data computing framework.&lt;/p&gt;
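&lt;p&gt;A toy sketch of why prefix-based RENAME is so costly on a flat object store (illustration only):&lt;/p&gt;

```python
# "Renaming" a directory on a flat object store means copying and deleting
# every object that shares the prefix, one by one, instead of updating a
# single metadata entry.
def object_store_rename(store, old_prefix, new_prefix):
    ops = 0
    for key in list(store):  # snapshot: we mutate the dict while renaming
        if key.startswith(old_prefix):
            store[new_prefix + key[len(old_prefix):]] = store.pop(key)
            ops += 2  # one copy plus one delete per object
    return ops

store = {f"logs/day={i}/part.parquet": b"..." for i in range(1000)}
print(object_store_rename(store, "logs/", "archive/logs/"))  # 2000 operations
```

&lt;p&gt;A real file system rename is a single metadata update, which is exactly why a dedicated metadata layer makes RENAME cheap and atomic.&lt;/p&gt;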

&lt;p&gt;Moreover, in terms of data flow, common file appending operations are also not supported by the simulation layers of S3N and COSN. To support big data storage-computing separation scenarios, the cloud storage system should be redesigned to serve as an efficient and reliable storage cornerstone for cloud big data computing, meeting the requirements for metadata operations while implementing unlimited storage.&lt;/p&gt;

&lt;p&gt;In view of this, Tencent Cloud has launched Cloud Native HDFS (CHDFS), a general-purpose distributed file system solution built on COS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. CHDFS overview&lt;/strong&gt;&lt;br&gt;
CHDFS supports HDFS semantics by building a scalable metadata layer on COS with elastic cloud resources. This highly optimized metadata layer allows efficient access to massive amounts of metadata: CHDFS can handle far more metadata than native HDFS while delivering almost the same performance. It also comes with a Java client optimized for read/write data flows, which makes the most of COS while enabling efficient metadata operations. In short, CHDFS implements file system semantics on top of COS, using COS as its data disk and building a distributed, massive-scale metadata layer above it. Hosting data in COS also lets CHDFS inherit the strengths of COS: low cost, high reliability, high throughput and availability, and petabyte-scale storage capacity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. CHDFS benefits&lt;/strong&gt;&lt;br&gt;
CHDFS adopts a distributed architecture and incorporates many optimizations for metadata read/write. It supports tens of billions of files, overcoming the capacity limit of HDFS NameNode and ensuring strong consistency semantics. Compared with COS and HDFS, CHDFS has the following benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It supports millisecond-level atomic RENAME operations for both directories and files.&lt;/li&gt;
&lt;li&gt;It features strong metadata consistency, so that data becomes visible immediately after being written.&lt;/li&gt;
&lt;li&gt;It supports tens of billions of files, far more than HDFS, and has almost the same latency as HDFS.&lt;/li&gt;
&lt;li&gt;It is a single file system that supports over 100,000 QPS for metadata, meeting the requirements for high concurrency in large-scale computing scenarios.&lt;/li&gt;
&lt;li&gt;It is highly available and can complete HA switch in seconds.&lt;/li&gt;
&lt;li&gt;Thanks to the parallel loading of metadata, it can be cold-started much faster than HDFS.&lt;/li&gt;
&lt;li&gt;It supports cross-region/AZ replication of metadata, further increasing the reliability.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;CHDFS offers multiple metadata engines for you to choose from based on your business needs in different scenarios, helping you strike a balance between cost, capacity, and performance. Its API is fully compatible with HDFS, so you can easily migrate data between the two systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. COS as the data foundation of CHDFS&lt;/strong&gt;&lt;br&gt;
As a basic cloud storage service, COS serves as a solid data foundation for CHDFS. CHDFS file data is split into parts and stored in COS, which brings the following strengths:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Hundreds of petabytes of data can be stored, and the capacity can be expanded automatically.&lt;/li&gt;
&lt;li&gt;Tbps-level bandwidth is supported, taking full advantage of the high throughput of COS in big data computing.&lt;/li&gt;
&lt;li&gt;Data can be stored across AZs, delivering eleven nines of data reliability.&lt;/li&gt;
&lt;li&gt;Data is encoded using erasure coding by default, further reducing the storage costs.&lt;/li&gt;
&lt;li&gt;File data can be replicated across regions.&lt;/li&gt;
&lt;li&gt;INTELLIGENT TIERING is supported, which automatically transitions data based on the data access frequency to further lower the storage costs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Furthermore, CHDFS provides an HDFS-compatible, high-performance Java SDK, comprehensively optimized for big data scenarios with an efficient read/write caching mechanism built on the COS data flow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Abundant features&lt;/strong&gt;&lt;br&gt;
In addition to the strong file read/write capabilities mentioned above, CHDFS also has abundant features to meet your diversified requirements in big data scenarios. In terms of cost optimization, its storage lifecycle management feature automatically transitions files to cheaper storage media after simple configurations, further reducing cloud storage costs. When you need to access cold data, you can use its simple yet powerful command line tool to retrieve files to the hot tier.&lt;/p&gt;

&lt;p&gt;To help you better understand file metadata details, the powerful file inventory feature of CHDFS allows you to export the inventory of files in the specified format based on the specified filter fields and ship it to your file system. Then, you can read it to analyze business file attributes in multiple dimensions such as average file size. You can even use it as a means of file verification during data import from your local HDFS into CHDFS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Ecosystem integration&lt;/strong&gt;&lt;br&gt;
CHDFS offers a protocol fully compatible with HDFS, which can seamlessly support popular big data computing frameworks, including Hive, Spark, Presto, and Flink. Currently, CHDFS is closely integrated with Tencent Cloud EMR. After purchasing CHDFS, you can directly use it in EMR with no need to install any environment. This makes it easier for you to get started with CHDFS.&lt;/p&gt;

&lt;p&gt;Read more at: &lt;a href="https://www.tencentcloud.com/dynamic/blogs/sample-article/100387"&gt;https://www.tencentcloud.com/dynamic/blogs/sample-article/100387&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cos</category>
      <category>database</category>
      <category>chdfs</category>
      <category>datastorage</category>
    </item>
    <item>
      <title>Tencent Cloud COS + CI for a Comprehensive Image Solution</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Thu, 10 Nov 2022 03:16:07 +0000</pubDate>
      <link>https://dev.to/mandywong720/tencent-cloud-cos-ci-for-a-comprehensive-image-solution-2672</link>
      <guid>https://dev.to/mandywong720/tencent-cloud-cos-ci-for-a-comprehensive-image-solution-2672</guid>
      <description>&lt;p&gt;In daily development, you inevitably need to store images such as user profile photos and chat images. A common practice is to directly store such images on the server. However, it is a better idea to store website images and other static resources in a cloud storage service, retain backup files locally, respond to read requests in the cloud, and add a layer of CDN. By doing so, you can separate storage from computing, which makes it easier for you to manage your website and make your website faster to load. To implement separate storage, the storage service must be stable and reliable, and Tencent Cloud COS is your ideal choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• What is COS?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cloud Object Storage (COS) is a distributed storage service with no directory hierarchy or data format restrictions. It allows you to store massive amounts of data and view data objects in the cloud at any time over HTTP or HTTPS. In this way, it delivers a data storage solution featuring high scalability, reliability, and security at low costs to both enterprise and individual users. It offers four object storage classes based on the access frequency. This article takes STANDARD, the default storage class, as an example to describe the high data durability, availability, and performance implemented by COS.&lt;/p&gt;

&lt;p&gt;You may want to process the images stored in the cloud, for example, adding watermarks, cropping images, or moderating content to detect pornographic, politically sensitive, or terrorism-related information. Traditionally, you had to call the APIs of separate services for this; now you can directly leverage the CI service integrated into COS to process images with speed and ease. CI comes with a rich set of features, such as image processing, moderation, and recognition, enabling you to process media data in COS by simply calling the corresponding CI APIs. This pushes the boundaries of COS and makes it more than just a storage system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• What is CI?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cloud Infinite (CI) is a professional integrated image solution provided by Tencent Cloud. It has a wide variety of features, including image upload, download, storage, processing, and recognition, and opens up Qzone's decade of technical expertise in image services.&lt;/p&gt;

&lt;p&gt;CI offers customized image recognition services as well as flexible image processing services such as cropping, compression, watermarking, and transcoding. Its secure, stable, and efficient cloud data processing capabilities fully meet your diverse needs for media processing in various business scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Use cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To call CI in COS for basic image processing such as cropping, compression, and watermarking, you don't even need to write any code; you can simply splice URL parameters, as described below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Basic image processing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To call basic image processing features, you don't need to use SDKs; instead, you can simply splice URL parameters as instructed at &lt;a href="https://cloud.tencent.com/document/product/460/6924"&gt;https://cloud.tencent.com/document/product/460/6924&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Rotation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CI uses the "imageMogr2" API to rotate an image by a specified angle or automatically. You can visit the sample address in the document to rotate an image by 90 degrees clockwise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Gaussian blurring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CI uses the "imageMogr2" API to blur an image.&lt;/p&gt;
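&lt;p&gt;Both rotation and blurring reduce to URL splicing. A sketch (the bucket domain is a placeholder; the parameter spellings follow the imageMogr2 pattern and are worth double-checking against the CI docs):&lt;/p&gt;

```python
# Splicing imageMogr2 parameters onto an object URL; no SDK required.
# The bucket domain is a placeholder for illustration.
base = "https://examplebucket-1250000000.cos.ap-guangzhou.myqcloud.com/sample.jpeg"

rotated = base + "?imageMogr2/rotate/90"   # rotate 90 degrees clockwise
blurred = base + "?imageMogr2/blur/8x5"    # Gaussian blur: radius x sigma

print(rotated)
print(blurred)
```

&lt;p&gt;Requesting either URL returns the processed image on the fly; the stored object is unchanged.&lt;/p&gt;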

&lt;p&gt;&lt;strong&gt;4. Image/Text watermarking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Watermarking is a frequently used feature. CI provides corresponding parameters for you to call it easily. It uses the "watermark" API to implement real-time image and text watermarking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Pipeline operator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The "|" pipeline operator of CI enables you to perform multiple processing tasks on an image in sequence, such as scaling and watermarking.&lt;/p&gt;

&lt;p&gt;You can append a style separator "?" to the end of an image URL and then add processing operations separated by the pipeline operator "|". Then, these operations will be performed in sequence. Currently, up to three operations can be added. The sample here uses pipeline operators. As the input image is large and the logo is small, image scaling is performed first before the logo is added as a watermark.&lt;/p&gt;
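&lt;p&gt;A sketch of such a piped URL, built in Python (the domain and watermark text are placeholders; CI's watermark API expects the text in URL-safe Base64):&lt;/p&gt;

```python
import base64

# Chain operations with the "|" pipeline operator: scale the image down
# first, then stamp a text watermark on the smaller result.
def urlsafe_b64(text):
    return base64.urlsafe_b64encode(text.encode("utf-8")).decode("ascii")

base = "https://examplebucket-1250000000.cos.ap-guangzhou.myqcloud.com/sample.jpeg"
ops = "|".join([
    "imageMogr2/thumbnail/!50p",                       # scale to 50% first
    "watermark/2/text/" + urlsafe_b64("Tencent Cloud"),  # then add text watermark
])
print(base + "?" + ops)
```

&lt;p&gt;The operations run left to right, which is why scaling comes before watermarking when the logo is small relative to the input image.&lt;/p&gt;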

&lt;p&gt;If you'd rather not splice long strings of URL parameters by hand, the image style feature has you covered: in the bucket list, select the target bucket, and then add an image style on the "Image Processing" tab.&lt;/p&gt;

&lt;p&gt;Together, COS and CI make a comprehensive image solution that satisfies the different image processing needs of applications and websites. It not only provides convenient image services quickly, but also dynamically adjusts resources through Tencent Cloud's elastic scheduling to absorb sudden business spikes.&lt;/p&gt;

&lt;p&gt;Come try out these powerful features and discover more.&lt;/p&gt;

&lt;p&gt;Read more at: &lt;a href="https://www.tencentcloud.com/dynamic/blogs/sample-article/100385"&gt;https://www.tencentcloud.com/dynamic/blogs/sample-article/100385&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cos</category>
      <category>watermark</category>
      <category>imagesolution</category>
      <category>cloudinfinite</category>
    </item>
    <item>
      <title>COS-Based ClickHouse Data Tiering Solution</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Tue, 08 Nov 2022 03:46:06 +0000</pubDate>
      <link>https://dev.to/tencentcloud/cos-based-clickhouse-data-tiering-solution-4gae</link>
      <guid>https://dev.to/tencentcloud/cos-based-clickhouse-data-tiering-solution-4gae</guid>
      <description>&lt;p&gt;ClickHouse is a columnar database management system (DBMS) for online analytical processing (OLAP) and supports interactive analysis of petabytes of data. As a distributed DBMS, it differs from other mainstream big data components in that it doesn't adopt the Hadoop Distributed File System (HDFS). Instead, it stores data in local disks of the server and uses data replicas to guarantee high data availability. Then, it leverages distributed tables to implement distributed data storage and query.&lt;/p&gt;

&lt;p&gt;Shard: It refers to a server that stores different parts of the data. In order to read all the data, you must access all the shards. Storing the data of distributed tables in multiple shards implements horizontal scaling of computing and storage.&lt;/p&gt;

&lt;p&gt;Replica: Each shard contains multiple data replicas, so you can access any replica to read data. The replica mechanism ensures data availability in case a single storage node fails. Only MergeTree table engines support the multi-replica architecture. ClickHouse implements the data replica feature in table engines rather than database engines; therefore, replicas are table-level rather than server-level. When data is inserted into ReplicatedMergeTree engine tables, primary-secondary sync is performed to generate multiple data replicas. ZooKeeper is used to conduct distributed coordination during the sync.&lt;/p&gt;

&lt;p&gt;Distributed table: Distributed tables created with distributed engines distribute query tasks among multiple servers for processing but don't store data. When such a table is created, ClickHouse will first create a local table in each shard, which will be visible only on the corresponding node; then, it will map the local tables to the distributed table. In this way, when you access the distributed table, ClickHouse will automatically forward your request to the corresponding local table based on the cluster's architecture information.&lt;/p&gt;

&lt;p&gt;In summary, one ClickHouse cluster consists of multiple shards, each of which contains multiple data replicas. A replica corresponds to a server node in the cluster and uses its local disk to store data. With distributed tables, shards, and replicas, ClickHouse achieves horizontal scalability and high data availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Tiered data storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Starting from v19.15, ClickHouse supports multi-volume storage, which stores ClickHouse tables in volumes containing multiple devices. This feature makes it possible to define different types of disks in a volume for tiered storage of cold and hot data, striking a balance between performance and cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Disk types supported by ClickHouse&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ClickHouse mainly supports two disk types: DiskLocal (server-local disks) and DiskS3 (S3-compatible object storage such as COS).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Data movement policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ClickHouse can store data in different storage media by configuring disks of different types and storage policies in the configuration file. It also supports movement policies to automatically move data between storage media.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Current problems with data storage in ClickHouse&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many users choose ClickHouse for its superior query performance. To make the most of it, they generally store ClickHouse data on high-performance Tencent Cloud Enhanced SSD cloud disks; however, Enhanced SSD is expensive. After weighing performance against cost, they may purge legacy data from ClickHouse. Although most queries involve only recent data, the business side still needs to access legacy data occasionally. Balancing cost against this occasional access to legacy data is a persistent headache for ClickHouse system admins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. COS strengths&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cloud Object Storage (COS) is a distributed storage service launched by Tencent Cloud. It has no directory hierarchy or data format restrictions, can accommodate an unlimited amount of data, and supports access over HTTP/HTTPS protocols.&lt;/p&gt;

&lt;p&gt;COS organizes data in pay-as-you-go buckets with an unlimited capacity, which can be used and scaled on demand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. COS-based ClickHouse data tiering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Prepare the following environments before configuring data tiering:&lt;/p&gt;

&lt;p&gt;• Local storage: Format an Enhanced SSD cloud disk and mount it to the "/data" path for storing hot data.&lt;/p&gt;

&lt;p&gt;• COS bucket: Create a COS bucket for storing cold data and get the "SecretId" and "SecretKey" of the account that can access the bucket.&lt;/p&gt;

&lt;p&gt;6.1 Configure the ClickHouse disk and policy&lt;/p&gt;

&lt;p&gt;First, you need to configure the "/etc/clickhouse-server/config.d/storage.xml" file. In the &lt;disks&gt; section, define the local disk path, the COS bucket URL, and the "SecretId" and "SecretKey" of the access account. In the &lt;policies&gt; section, define the "ttl" storage policy, which contains the "ttlhot" and "ttlcold" volumes backed by the local disk and the COS bucket respectively.&lt;/p&gt;
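&lt;p&gt;A sketch of the shape such a storage.xml could take (disk names, endpoint, and credentials are placeholders; verify the element names against the ClickHouse version in use):&lt;/p&gt;

```xml
&lt;yandex&gt;
  &lt;storage_configuration&gt;
    &lt;disks&gt;
      &lt;!-- hot tier: Enhanced SSD cloud disk mounted at /data --&gt;
      &lt;disk_hot&gt;
        &lt;path&gt;/data/clickhouse/&lt;/path&gt;
      &lt;/disk_hot&gt;
      &lt;!-- cold tier: COS bucket via the S3-compatible disk type --&gt;
      &lt;disk_cos&gt;
        &lt;type&gt;s3&lt;/type&gt;
        &lt;endpoint&gt;https://examplebucket-1250000000.cos.ap-guangzhou.myqcloud.com/data/&lt;/endpoint&gt;
        &lt;access_key_id&gt;SecretId&lt;/access_key_id&gt;
        &lt;secret_access_key&gt;SecretKey&lt;/secret_access_key&gt;
      &lt;/disk_cos&gt;
    &lt;/disks&gt;
    &lt;policies&gt;
      &lt;ttl&gt;
        &lt;volumes&gt;
          &lt;ttlhot&gt;
            &lt;disk&gt;disk_hot&lt;/disk&gt;
          &lt;/ttlhot&gt;
          &lt;ttlcold&gt;
            &lt;disk&gt;disk_cos&lt;/disk&gt;
          &lt;/ttlcold&gt;
        &lt;/volumes&gt;
      &lt;/ttl&gt;
    &lt;/policies&gt;
  &lt;/storage_configuration&gt;
&lt;/yandex&gt;
```

&lt;p&gt;Listing the hot volume first makes it the default destination for newly written parts.&lt;/p&gt;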

&lt;p&gt;6.2 Import data to ClickHouse&lt;/p&gt;

&lt;p&gt;After completing the storage configuration, set up a table with the TTL policy configured and import data to it to verify the tiering policy.&lt;/p&gt;

&lt;p&gt;Here, a COS bucket inventory is selected as the data source for import. First, create a table named "cos_inventory_ttl" in ClickHouse based on the content of each column in the inventory. Then, configure the TTL policy. According to the "LastModifiedDate" value, store hot data in the "ttlhot" volume and cold data at least three months old in "ttlcold".&lt;/p&gt;
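&lt;p&gt;The table and TTL policy described above could be sketched as follows (the column list is abridged and illustrative; the volume name matches the "ttlcold" volume described above):&lt;/p&gt;

```sql
-- Sketch: inventory table whose parts sink to the cold volume after 3 months
CREATE TABLE cos_inventory_ttl
(
    appid UInt64,
    bucket String,
    key String,
    size UInt64,
    LastModifiedDate DateTime
)
ENGINE = MergeTree()
ORDER BY LastModifiedDate
TTL LastModifiedDate + INTERVAL 3 MONTH TO VOLUME 'ttlcold'
SETTINGS storage_policy = 'ttl';
```

&lt;p&gt;New parts land on the hot volume by default, and background moves push parts older than three months to COS.&lt;/p&gt;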

&lt;p&gt;6.3 Verify data&lt;/p&gt;

&lt;p&gt;After import, view the total number of data rows. Then, you can query the volumes storing different data. You can further conduct a query test to count the total size of files generated in the past three months in the "cos-user/" directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In ClickHouse, configuring different storage media and storage policies enables automatic tiered storage of data. Thanks to the unlimited capacity and cost-effectiveness of COS, ClickHouse clusters can store data long-term at low costs while still providing superior query performance.&lt;/p&gt;

&lt;p&gt;Read more at: &lt;a href="https://www.tencentcloud.com/dynamic/blogs/sample-article/100384"&gt;https://www.tencentcloud.com/dynamic/blogs/sample-article/100384&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cos</category>
      <category>clickhouse</category>
      <category>database</category>
      <category>olap</category>
    </item>
    <item>
      <title>COS Cost Optimization Solution</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Thu, 03 Nov 2022 03:22:19 +0000</pubDate>
      <link>https://dev.to/mandywong720/cos-cost-optimization-solution-2h09</link>
      <guid>https://dev.to/mandywong720/cos-cost-optimization-solution-2h09</guid>
      <description>&lt;p&gt;As more and more enterprises move their businesses to the cloud, they are increasingly aware of cloud costs. Business development brings massive storage needs in the cloud. How to optimize costs to ease the burden on the business?&lt;/p&gt;

&lt;p&gt;Before going any further, let's first clarify how Tencent Cloud Object Storage (COS) is billed. Specifically, storage, traffic, request, data retrieval, and management fees are charged when you use COS, with the first two billable items accounting for the majority of costs. The following describes the COS cost optimization solution from five aspects:&lt;/p&gt;

&lt;p&gt;1. Selecting a proper storage class&lt;br&gt;
2. Regularly analyzing the data access mode through inventory and access log features&lt;br&gt;
3. Transitioning data through lifecycle and batch operations&lt;br&gt;
4. Reducing storage size through file compression&lt;br&gt;
5. Reviewing costs&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Selecting a proper storage class&lt;/strong&gt;&lt;br&gt;
Selecting a storage class suitable for your business greatly optimizes storage costs. COS provides multiple storage classes to meet different requirements for performance, data durability, and business availability. The STANDARD storage class is costly but promises the lowest read latency, while STANDARD_IA, ARCHIVE, and DEEP ARCHIVE are more economical but incur data retrieval fees during download.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Regularly analyzing the data access mode through inventory and access log features&lt;/strong&gt;&lt;br&gt;
Analyzing the data access mode helps make informed decisions about storage class selection. COS offers inventory and access log features for recording file metadata and access requests respectively, which are then stored in your buckets. COS also provides COS Select capabilities to search for file content. If you have generated too many inventory files or log records, you can also purchase an EMR cluster to set up the Presto component for data analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Transitioning data through lifecycle and batch operations&lt;/strong&gt;&lt;br&gt;
The data access mode changes dynamically as the business develops. Most data records are accessed less frequently as they grow old. Therefore, it is advisable to adjust the data storage class based on the access frequency for cost optimization. COS has lifecycle capabilities to help you change the storage class regularly. In particular, you can leverage inventory and access log features to analyze the data access mode and then create lifecycle transition rules accordingly.&lt;/p&gt;
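&lt;p&gt;As an illustrative sketch, a lifecycle configuration in the shape accepted by the COS Python SDK's put_bucket_lifecycle might look like this (the prefix and day counts are made-up examples):&lt;/p&gt;

```python
# Sketch of a lifecycle configuration: logs sink to STANDARD_IA after 30
# days, to ARCHIVE after 90, and expire after a year. The prefix and day
# counts are illustrative placeholders.
lifecycle = {
    "Rule": [
        {
            "ID": "tier-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transition": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "ARCHIVE"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}
# With a configured CosS3Client this would be applied along the lines of:
# client.put_bucket_lifecycle(Bucket="examplebucket-1250000000",
#                             LifecycleConfiguration=lifecycle)
print(lifecycle["Rule"][0]["ID"])
```

&lt;p&gt;Deriving the prefixes and day thresholds from inventory and access log analysis keeps such rules aligned with the real access pattern.&lt;/p&gt;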

&lt;p&gt;For certain businesses, it is enough to transition batch files to a colder storage class at one time without specified rules (such as prefixes or tags). This is where COSBatch comes into play. You can use its batch copy feature to change the data storage class or add object tags to set lifecycle rules for object deletion. Below are detailed directions:&lt;/p&gt;

&lt;p&gt;1. Export the list of files to be processed and convert it into the CSV format.&lt;br&gt;
2. Create a COSBatch task and import the file list.&lt;br&gt;
3. Execute the task and wait for it to complete.&lt;/p&gt;
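&lt;p&gt;Step 1 can be sketched in Python. The two-column bucket,key manifest below is an assumption for illustration, so check the exact manifest format COSBatch expects:&lt;/p&gt;

```python
import csv, io

# Turn a list of object keys into a simple CSV manifest (bucket,key per
# row) for a batch-operation job. Column layout is illustrative.
def write_manifest(bucket, keys):
    buf = io.StringIO()
    writer = csv.writer(buf)
    for key in keys:
        writer.writerow([bucket, key])
    return buf.getvalue()

manifest = write_manifest("examplebucket-1250000000",
                          ["logs/2021/a.gz", "logs/2021/b.gz"])
print(manifest)
```

&lt;p&gt;The manifest is then uploaded and referenced when creating the batch task, which applies the copy or tagging operation to every listed object.&lt;/p&gt;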

&lt;p&gt;&lt;strong&gt;• Reducing storage size through file compression&lt;/strong&gt;&lt;br&gt;
COS allows you to compress image data to reduce the storage size and costs. Currently, the following compression formats are supported:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Guetzli compression&lt;/strong&gt;: It is visually lossless: by exploiting the human eye's insensitivity to certain color gamuts and details, it discards those details to reduce the image size by 35–50% with no perceptible change in quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. TPG compression&lt;/strong&gt;: It is designed by Tencent to compress JPG, PNG, GIF, and WebP images with a compression ratio of over 35%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. HEIF compression&lt;/strong&gt;: It compresses JPG, PNG, GIF, and WebP images on iOS with a compression ratio of over 45%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Reviewing costs&lt;/strong&gt;&lt;br&gt;
Cost optimization needs to be incorporated into the entire business process. Besides planning costs during cloudification, you also need to review costs from time to time afterwards. Planning a proper cloud storage architecture helps reduce storage costs. In addition, you can download your bills at the Tencent Cloud Billing Center to view and analyze your cloud storage usage details before targeted optimization.&lt;/p&gt;

&lt;p&gt;COS has always focused on storage performance, security, and cost-effectiveness. It will continue to refine its storage products and scenario-specific solutions while offering some of the most economical storage services in the industry.&lt;/p&gt;

&lt;p&gt;Read more at: &lt;a href="https://www.tencentcloud.com/dynamic/blogs/sample-article/100380"&gt;https://www.tencentcloud.com/dynamic/blogs/sample-article/100380&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cos</category>
      <category>reducecost</category>
      <category>storage</category>
      <category>compression</category>
    </item>
    <item>
<title>Tencent Cloud COS, Key to Data Disaster Recovery</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Tue, 01 Nov 2022 03:13:22 +0000</pubDate>
      <link>https://dev.to/tencentcloud/tencent-cloud-cos-key-to-data-disaster-recover-2pch</link>
      <guid>https://dev.to/tencentcloud/tencent-cloud-cos-key-to-data-disaster-recover-2pch</guid>
      <description>&lt;p&gt;This article describes how Tencent Cloud Object Storage (COS) addresses data layer disaster recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Cross-AZ Disaster Recovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your application is already deployed in Tencent Cloud, you can use COS's multi-AZ capabilities to improve the data layer availability. Multi-AZ refers to the multi-AZ storage architecture offered by COS, which can provide IDC-level disaster recovery capabilities for your data.&lt;/p&gt;

&lt;p&gt;In this architecture, data will be split into multiple chunks, and corresponding coding chunks will be calculated based on the erasure code algorithm. The original data chunks and coding chunks will be mixed up and evenly distributed to IDCs in different AZs in a region for storage and intra-region disaster recovery.&lt;/p&gt;
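&lt;p&gt;The chunk-and-parity idea can be illustrated with the simplest erasure code, a single XOR parity chunk. This toy sketch (COS's production erasure coding is more sophisticated and tolerates more simultaneous failures) shows how a lost chunk is rebuilt from the survivors:&lt;/p&gt;

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k=3):
    """Split data into k data chunks plus one XOR parity chunk (k+1 total)."""
    if len(data) % k:
        data += b"\0" * (k - len(data) % k)   # pad to a multiple of k
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(xor_bytes, chunks)
    return chunks, parity

def recover(chunks, parity, lost):
    """Rebuild the chunk at index `lost` by XOR-ing the survivors and parity."""
    survivors = [c for i, c in enumerate(chunks) if i != lost]
    return reduce(xor_bytes, survivors + [parity])
```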

&lt;p&gt;The multi-AZ feature provides 99.9999999999% (12 nines) designed data reliability and 99.995% designed service availability. When you upload data objects to COS, you can store them in a multi-AZ region simply by specifying the storage class.&lt;/p&gt;

&lt;p&gt;After the multi-AZ feature is enabled, your data will be distributed among IDCs in multiple AZs in a region. When an IDC fails due to extreme situations such as natural disasters or power outages, other IDCs can still guarantee normal data reads and writes, thereby ensuring persistent storage, business continuity, and high availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Cross-Region Disaster Recovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to the multi-AZ feature of COS, you can also save data copies in different regions to further improve the data layer availability.&lt;/p&gt;

&lt;p&gt;COS's cross-region bucket replication feature asynchronously replicates data across regions. It is a bucket-level configuration item, where rules can be configured to replicate incremental objects from one bucket to another bucket automatically and asynchronously.&lt;/p&gt;

&lt;p&gt;With cross-region bucket replication, COS accurately replicates object content, together with object metadata and version IDs, from the source bucket to the destination bucket, ensuring the copies are identical. Object operations such as adding or deleting objects are also synced to the destination bucket.&lt;/p&gt;

&lt;p&gt;With cross-region bucket replication, when the IDC in one region is damaged due to force majeure, the IDC in another region can still provide data copies for your use, implementing cross-region disaster recovery.&lt;/p&gt;

&lt;p&gt;In addition to high availability, cross-region bucket replication can also meet industry-specific requirements for data compliance. If you have end users accessing objects from different regions, you can maintain object copies in buckets closest to them geographically, so as to minimize the access latency and deliver a better user experience.&lt;/p&gt;
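&lt;p&gt;A cross-region replication rule of the kind described above can be sketched as a bucket replication configuration. The fragment below is illustrative, modeled on COS's S3-compatible replication API; the role string, bucket names, and region are placeholders, so check the COS replication documentation for the exact fields:&lt;/p&gt;

```xml
&lt;ReplicationConfiguration&gt;
  &lt;Role&gt;qcs::cam::uin/100000000001:uin/100000000001&lt;/Role&gt;
  &lt;Rule&gt;
    &lt;Status&gt;Enabled&lt;/Status&gt;
    &lt;ID&gt;replicate-to-overseas&lt;/ID&gt;
    &lt;Prefix&gt;&lt;/Prefix&gt;
    &lt;Destination&gt;
      &lt;Bucket&gt;qcs::cos:ap-singapore::destinationbucket-1250000000&lt;/Bucket&gt;
    &lt;/Destination&gt;
  &lt;/Rule&gt;
&lt;/ReplicationConfiguration&gt;
```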

&lt;p&gt;&lt;strong&gt;3. Versioning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If data is deleted accidentally, it will be lost permanently even if cross-AZ or cross-region disaster recovery is implemented.&lt;/p&gt;

&lt;p&gt;To avoid data loss due to accidental deletion or application failure, COS has launched the versioning feature. It allows you to store multiple versions of an object in the same bucket. For example, you can store multiple objects with the same object key "picture.jpg" but different version IDs like "100000", "100101", and "120002" in a bucket. Then, you can query, delete, or restore objects in the bucket by version ID. This enables you to recover from data loss caused by accidental deletion or application failure. For example, when you delete an object with versioning enabled:&lt;/p&gt;

&lt;p&gt;• If you need to delete the object (not permanently), COS will insert a delete marker for the deleted object. The marker will serve as the current object version and can be used for version restoration.&lt;/p&gt;

&lt;p&gt;• If you need to replace the object, COS will insert a new version ID for the newly uploaded object. You can still restore the replaced object with the version ID.&lt;/p&gt;

&lt;p&gt;There are three versioning states for a bucket:&lt;/p&gt;

&lt;p&gt;• Versioning not enabled: Bucket versioning is not enabled by default.&lt;/p&gt;

&lt;p&gt;• Versioning enabled: When bucket versioning is enabled, it will be applied to all the objects in the bucket. After versioning is enabled for the first time, new objects uploaded to the bucket will be assigned a unique version ID.&lt;/p&gt;

&lt;p&gt;• Versioning suspended: After versioning is suspended (it cannot be disabled once enabled), new objects uploaded to the bucket will no longer be subject to versioning.&lt;/p&gt;

&lt;p&gt;You can upload, query, and delete objects no matter which versioning state the bucket is in.&lt;/p&gt;
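&lt;p&gt;The behavior described above can be sketched as a toy in-memory model, with auto-assigned version IDs and delete markers (illustrative only, not COS's actual implementation):&lt;/p&gt;

```python
import itertools

class VersionedBucket:
    """Toy model of COS versioning: every write gets a new version ID and a
    delete pushes a delete marker instead of erasing data."""

    def __init__(self):
        self._versions = {}            # key -> list of (version_id, data or None)
        self._ids = itertools.count(100000)

    def put(self, key, data):
        vid = str(next(self._ids))
        self._versions.setdefault(key, []).append((vid, data))
        return vid

    def delete(self, key):
        """Non-permanent delete: a delete marker becomes the current version."""
        vid = str(next(self._ids))
        self._versions.setdefault(key, []).append((vid, None))
        return vid

    def get(self, key, version_id=None):
        history = self._versions.get(key, [])
        if version_id is None:
            if not history or history[-1][1] is None:
                raise KeyError(key)    # current version is a delete marker
            return history[-1][1]
        for vid, data in history:
            if vid == version_id:
                return data
        raise KeyError(version_id)
```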

&lt;p&gt;&lt;strong&gt;4. Anti-Overwrite for Upload&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Besides force majeure, data exceptions may also occur due to operations that don't seem risky. By default, COS overwrites an existing file when another file with the same name is uploaded. To avoid unexpected overwrites, you would need to maintain a complete name-checking system in your business logic. Alternatively, you can enable versioning, though this complicates object management and consumes extra storage. More often than not, you only need to forbid overwrites of certain files, which makes versioning functional overkill.&lt;/p&gt;

&lt;p&gt;To this end, COS provides an anti-overwrite mechanism at both the bucket and object levels. With bucket-level anti-overwrite enabled, the bucket forbids uploads of any file with the same name as an existing file: when such a file is uploaded, COS denies the upload request so that the existing file is not overwritten. If you only want to prevent certain files from being overwritten, you can add a special header to the upload request; COS then checks whether any file in the bucket has the same name as the file being uploaded, and if so, the upload fails. After anti-overwrite is enabled, you can still rename or delete files.&lt;/p&gt;
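&lt;p&gt;The two levels of anti-overwrite can be sketched as follows. This is a local simulation; the header name &lt;code&gt;x-cos-forbid-overwrite&lt;/code&gt; is an assumption used for illustration, so check the COS documentation for the actual header:&lt;/p&gt;

```python
class Bucket:
    """Toy model of anti-overwrite: a bucket-level switch plus a per-request
    header. The header name "x-cos-forbid-overwrite" is an assumption."""

    def __init__(self, forbid_overwrite=False):
        self.forbid_overwrite = forbid_overwrite
        self.objects = {}

    def put(self, key, data, headers=None):
        headers = headers or {}
        forbid = (self.forbid_overwrite
                  or headers.get("x-cos-forbid-overwrite") == "true")
        if forbid and key in self.objects:
            # Mirrors COS denying the request instead of overwriting.
            raise FileExistsError(f"409 Conflict: {key} already exists")
        self.objects[key] = data
```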

&lt;p&gt;&lt;strong&gt;5. Object Lock&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In some compliance scenarios, anti-overwrite is far from enough though. For example, in the finance field, compliance regulations require file retention for a certain period of time and prohibit file overwrite, deletion, and modification. In this case, you can use object lock to meet the requirements. After it is enabled, within the retention period:&lt;/p&gt;

&lt;p&gt;1.Objects cannot be deleted or modified;&lt;/p&gt;

&lt;p&gt;2.The storage class of objects cannot be modified;&lt;/p&gt;

&lt;p&gt;3.The HTTP headers and user metadata of objects cannot be modified, including "Content-Type", "Content-Encoding", "Content-Language", "Content-Disposition", "Cache-Control", "Expires", and "x-cos-meta-".&lt;/p&gt;

&lt;p&gt;Object lock is therefore a good fit for such compliance requirements.&lt;/p&gt;

&lt;p&gt;Compared with a local secondary IDC, cloud-based disaster recovery offers higher reliability, availability, and security, and gets rid of redundant hardware, computing, networking, and software investments. It greatly reduces the TCO while guaranteeing the RPO and RTO.&lt;/p&gt;

&lt;p&gt;Read more at: &lt;a href="https://www.tencentcloud.com/dynamic/blogs/sample-article/100379"&gt;https://www.tencentcloud.com/dynamic/blogs/sample-article/100379&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cos</category>
      <category>objectstorage</category>
      <category>datarecovery</category>
      <category>databackup</category>
    </item>
    <item>
      <title>ARM-Based Server Review</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Thu, 27 Oct 2022 04:06:25 +0000</pubDate>
      <link>https://dev.to/tencentcloud/arm-based-server-review-188k</link>
      <guid>https://dev.to/tencentcloud/arm-based-server-review-188k</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Background&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Take a look at the ARM-based server SR1 recently launched by Tencent Cloud. Is it worth it? How does it stack up against other models? Let's check it out.&lt;/p&gt;

&lt;p&gt;We benchmarked two typical instances, the ARM-based SR1 and the x86-based S5, to show how to measure CPU performance (mainly computing power) so that you quickly know what to look for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. ARM-based server environment and evaluation preparations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tencent Cloud SR1 is Tencent Cloud's first ARM-based server, powered by the latest Ampere Altra, an ARM Neoverse N1 CPU with a clock rate of up to 2.8 GHz and 64 KiB of L1 cache. The Neoverse N1 CPU has the following architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D_-0HxCM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p63el6hf8dlblu0ypthq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D_-0HxCM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p63el6hf8dlblu0ypthq.png" alt="Image description" width="865" height="943"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other test object is the mainstream x86-based Standard S5, which adopts Intel Xeon Platinum processors on the latest Cooper Lake microarchitecture and runs at 2.5 GHz. It's quite popular for general use cases. Both test instances have 4 cores and 8 GiB of memory.&lt;/p&gt;

&lt;p&gt;From the cost perspective, &lt;strong&gt;SR1 is approximately 20% cheaper than S5&lt;/strong&gt;, as indicated on the official website. Although its price is not as competitive as Lighthouse's, it is well worth it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FJiN5sCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qpp26ssrdo6nr9f71oc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FJiN5sCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qpp26ssrdo6nr9f71oc.png" alt="Image description" width="683" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;1.1 ARM-based server activation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;S5 and SR1 price comparison&lt;/p&gt;

&lt;p&gt;SR1 is comparable to S5 in overall performance and more economical, promising significant cost savings for both individuals and enterprises.&lt;/p&gt;

&lt;p&gt;Tips: Screen splitting&lt;/p&gt;

&lt;p&gt;Use the tmux tool to split the screen (press &lt;code&gt;Ctrl+B&lt;/code&gt;, then a split key), log in to the two servers at the same time, and press &lt;code&gt;Ctrl+B&lt;/code&gt; followed by the &lt;code&gt;:setw synchronize-panes&lt;/code&gt; command to enter commands on both terminals at the same time, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ih5QOZDN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3exvdy8un1qmntynh12b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ih5QOZDN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3exvdy8un1qmntynh12b.png" alt="Image description" width="828" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.1 System preparations and CPU viewing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Enter the commands in the different tmux panes.&lt;/p&gt;

&lt;p&gt;With the preparations done, let's start the evaluation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. 7-Zip compression evaluation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;7-Zip ships with the LZMA compression algorithm and a built-in benchmark that quickly evaluates a server's CPU computing performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JvCwCmsV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybto8rsotreca5n27sqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JvCwCmsV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybto8rsotreca5n27sqw.png" alt="Image description" width="809" height="73"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the following command to evaluate the performance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gCLNiMxQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1vbx8cs2ciucvsaf186q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gCLNiMxQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1vbx8cs2ciucvsaf186q.png" alt="Image description" width="833" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.6 LZMA compression evaluation (ARM-based SR1/x86-based S5)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;7-Zip evaluation&lt;/p&gt;

&lt;p&gt;The 7-Zip benchmark command displays the compression and decompression performance of a server, measured in millions of instructions per second (MIPS). The higher the value, the stronger the performance. You can also use metrics such as the compression rate and execution time for cross-verification. The 7-Zip benchmark rarely uses 64-bit instructions, let alone advanced instruction sets; it is more about the performance of CPU "fundamentals". LZMA compression performance relies on the memory access latency, data cache (D-Cache) capacity, TLB performance, and out-of-order execution efficiency of a CPU, while decompression performance reveals more about branch prediction and the instruction latency of the multi-stage pipeline design.&lt;/p&gt;
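&lt;p&gt;If 7-Zip is not at hand, a similar LZMA workload can be approximated with Python's standard &lt;code&gt;lzma&lt;/code&gt; module. This rough sketch times compression and decompression rather than reporting MIPS, so it is only a comparative indicator:&lt;/p&gt;

```python
import lzma, os, time

def lzma_benchmark(size_mib=4):
    """Rough LZMA compress/decompress timing, in the spirit of `7z b`.

    Input is compressible but not trivial: a random block repeated 4 times.
    """
    block = os.urandom(size_mib * 1024 * 1024 // 4) * 4
    t0 = time.perf_counter()
    packed = lzma.compress(block, preset=1)
    t1 = time.perf_counter()
    unpacked = lzma.decompress(packed)
    t2 = time.perf_counter()
    assert unpacked == block                     # sanity: lossless round trip
    return {"compress_s": t1 - t0,
            "decompress_s": t2 - t1,
            "ratio": len(packed) / len(block)}

print(lzma_benchmark())
```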

&lt;p&gt;Evaluation results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6g7hIx5K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p53aqtg0xl1b1418sfu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6g7hIx5K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p53aqtg0xl1b1418sfu8.png" alt="Image description" width="647" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.2 LZMA compression evaluation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;7-Zip evaluation of S5 and SR1&lt;/p&gt;

&lt;p&gt;As you can see, &lt;strong&gt;ARM-based SR1 delivers 60% higher performance than x86-based S5 in LZMA compression and decompression scenarios.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. LUKS block device encryption and decryption evaluation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LUKS is a specification for block device encryption supported by the Linux kernel. Simply put, it encrypts disks.&lt;/p&gt;

&lt;p&gt;Similar to file compression and decompression, block device encryption and decryption are typical applications that consume a lot of computing resources. Unlike generic computing scenarios, encryption and decryption computing instructions are usually implemented with special hardware to serve as CPU extension sets. The x86 system adopts the AES-NI extension, and ARM differentiates extensions for varied encryption and decryption scenarios.&lt;/p&gt;

&lt;p&gt;There is no need to install any other software. Just use the cryptsetup tool that comes with Linux to evaluate the CPU performance through encryption and decryption algorithms:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2StH2Zz5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u47ow4do23377q6qojdg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2StH2Zz5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u47ow4do23377q6qojdg.png" alt="Image description" width="865" height="41"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, the command evaluates tasks of ciphers and key derivation functions (KDFs).&lt;/p&gt;
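&lt;p&gt;The spirit of the KDF benchmark can be reproduced with the standard library. The sketch below times PBKDF2; cryptsetup itself benchmarks its own KDFs (such as argon2id), so the absolute numbers are not comparable, but the shape of the measurement is the same:&lt;/p&gt;

```python
import hashlib, time

def pbkdf2_iterations_per_second(hash_name="sha256", sample=200_000):
    """Estimate PBKDF2 iterations per second, a KDF deliberately made slow,
    like the KDFs that `cryptsetup benchmark` measures."""
    t0 = time.perf_counter()
    hashlib.pbkdf2_hmac(hash_name, b"passphrase", b"salt", sample)
    return sample / (time.perf_counter() - t0)

print(round(pbkdf2_iterations_per_second()))
```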

&lt;p&gt;Run the following command to evaluate the performance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---9hHP7YN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8q3qvea6qbavx961y2k2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---9hHP7YN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8q3qvea6qbavx961y2k2.png" alt="Image description" width="833" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.3 LUKS encryption evaluation (ARM-based SR1/x86-based S5)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;LUKS evaluation process&lt;/p&gt;

&lt;p&gt;Evaluation results (KDFs):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZDmOgY85--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38so7xyw39paz2l4wiif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZDmOgY85--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38so7xyw39paz2l4wiif.png" alt="Image description" width="639" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.3 LUKS encryption evaluation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;LUKS evaluation of S5 and SR1 in terms of KDFs&lt;/p&gt;

&lt;p&gt;Evaluation results (ciphers):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4l3ET2PT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rs8q4hhdeqvyxrppurp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4l3ET2PT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rs8q4hhdeqvyxrppurp2.png" alt="Image description" width="644" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.3 LUKS encryption evaluation (ARM-based SR1/x86-based S5)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;LUKS evaluation of S5 and SR1 in terms of encryption algorithms&lt;/p&gt;

&lt;p&gt;As you can see, &lt;strong&gt;the ARM-based server outperforms its x86-based counterpart for common SHA instructions (SHA-256 and SHA-512) and AES-CBC encryption, while the x86-based server (with its AES-NI extension instructions) does a better job at decryption and at XTS encryption, which offers the highest security.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. OpenSSL network encryption and decryption evaluation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Block device encryption protects data at rest, while network encryption protects data in transit. As OpenSSL is one of the most popular network encryption libraries, an OpenSSL performance evaluation is in order.&lt;/p&gt;

&lt;p&gt;OpenSSL's speed subcommand can evaluate all the encryption algorithms it supports, which takes a long time; generally, you specify algorithms via parameters. Commonly used algorithms include the Hash-based Message Authentication Code (HMAC) for encrypted message integrity and identity verification, the SHA-256 secure hash for message digests and digital signatures, and AES-256, the standard encryption algorithm widely adopted by cloud service providers.&lt;/p&gt;
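&lt;p&gt;The hash and HMAC part of this workload can be approximated with Python's standard library (AES-256 needs a third-party crypto library, so it is omitted here). A rough throughput sketch:&lt;/p&gt;

```python
import hashlib, hmac, time

def throughput_mib_s(fn, payload, rounds=64):
    """Measure MiB/s for a digest function over a fixed payload."""
    t0 = time.perf_counter()
    for _ in range(rounds):
        fn(payload)
    elapsed = time.perf_counter() - t0
    return rounds * len(payload) / elapsed / (1024 * 1024)

payload = b"\xab" * (1024 * 1024)   # 1 MiB
print("sha256  :", round(throughput_mib_s(
    lambda p: hashlib.sha256(p).digest(), payload)))
print("hmac-md5:", round(throughput_mib_s(
    lambda p: hmac.new(b"key", p, hashlib.md5).digest(), payload)))
```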

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gwldp-yM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rsocg2mfaasp5s8nvgvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gwldp-yM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rsocg2mfaasp5s8nvgvb.png" alt="Image description" width="865" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the following command to evaluate the performance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UczlFnqK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4fpqv9aswyu8ozpqldm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UczlFnqK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4fpqv9aswyu8ozpqldm.png" alt="Image description" width="830" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.4 OpenSSL encryption evaluation (ARM-based SR1/x86-based S5)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;OpenSSL encryption process through speed&lt;/p&gt;

&lt;p&gt;Evaluation results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hBtxNyk0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ofqqwozzvnemnf6f9wch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hBtxNyk0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ofqqwozzvnemnf6f9wch.png" alt="Image description" width="622" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.4 OpenSSL encryption evaluation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;OpenSSL encryption results of S5 and SR1&lt;/p&gt;

&lt;p&gt;As you can see, &lt;strong&gt;the ARM-based server slightly lags behind the x86-based server in MD5 HMAC, but outperforms it in SHA-256 and AES-256, especially SHA-256.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Redis database throughput rate evaluation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let's move to Redis performance evaluation. As one of the most popular memory databases, Redis is often used for key-value storage, data cache, and message queue scenarios with a high throughput rate. Redis also has a built-in evaluation utility called redis-benchmark to measure the number of requests per second.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zet-2FiR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xvyz9amu3sqcv749m64s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zet-2FiR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xvyz9amu3sqcv749m64s.png" alt="Image description" width="865" height="77"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The redis-benchmark program evaluates the throughput rate of a single server during the tests of GET, SET, LPUSH, and other common Redis commands, looking into the CPU and its memory access capabilities (such as memory access bandwidth and performance).&lt;/p&gt;
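&lt;p&gt;What "requests per second" measures can be illustrated with a toy in-process key-value loop. There is no network stack here, so the numbers are far higher than any real redis-benchmark run; the point is only the shape of the metric:&lt;/p&gt;

```python
import time

def kv_ops_per_second(n=200_000):
    """Toy single-process SET/GET throughput, mimicking what redis-benchmark
    reports as requests per second (minus the network and protocol cost)."""
    store = {}
    t0 = time.perf_counter()
    for i in range(n):
        store[f"key:{i % 1000}"] = b"xxx"          # SET
    set_rate = n / (time.perf_counter() - t0)
    t0 = time.perf_counter()
    for i in range(n):
        store.get(f"key:{i % 1000}")               # GET
    get_rate = n / (time.perf_counter() - t0)
    return set_rate, get_rate

print(kv_ops_per_second())
```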

&lt;p&gt;Run the following command to evaluate the performance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MFhoueW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xx3shiaytrbxr9w17npo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MFhoueW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xx3shiaytrbxr9w17npo.png" alt="Image description" width="830" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.6 Throughput evaluation (ARM-based SR1/x86-based S5)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Redis evaluation command execution&lt;/p&gt;

&lt;p&gt;Evaluation results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rl4LZfiM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9r5qw9e2su9mchqdukx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rl4LZfiM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9r5qw9e2su9mchqdukx.png" alt="Image description" width="659" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2.6 Throughput evaluation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Redis throughput rate evaluation of S5 and SR1&lt;/p&gt;

&lt;p&gt;According to the Redis evaluation results, &lt;strong&gt;ARM-based SR1 has 30% to 40% higher performance on average than x86-based S5.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now it's time to get some hands-on experience and see what your own cloud server performance tests reveal.&lt;/p&gt;

&lt;p&gt;ARM-based servers offer more than cost-effectiveness. As ARM-based virtualization technologies become popular in the cloud, ARM-based servers are bound to gain more momentum in IoT, cloud phone/gaming, the Android ecosystem, and many more use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's look forward to more diversified experiences available at our fingertips.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>arm</category>
      <category>virtualtech</category>
      <category>tutorial</category>
      <category>cloud</category>
    </item>
    <item>
      <title>GME Immersive Voice Solution Empowers Games with Boundless Imagination of Metaverse</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Tue, 25 Oct 2022 03:13:37 +0000</pubDate>
      <link>https://dev.to/tencentcloud/gme-immersive-voice-solution-empowers-games-with-boundless-imagination-of-metaverse-1fjl</link>
      <guid>https://dev.to/tencentcloud/gme-immersive-voice-solution-empowers-games-with-boundless-imagination-of-metaverse-1fjl</guid>
      <description>&lt;p&gt;&lt;strong&gt;1.What possibilities can metaverse bring to games?&lt;/strong&gt;&lt;br&gt;
The trending "metaverse" concept was first coined in an American science fiction novel to refer to a cyberspace parallel to reality. Games are the form closest to the metaverse. From mainstream perspectives, metaverse games deliver a real, immersive, interactive, and social experience by allowing players to interact, create, and exchange value freely, and by embracing diverse and inclusive cultures and content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. What challenges in voice technologies need to be tackled to implement metaverse features in games?&lt;/strong&gt;&lt;br&gt;
Metaverse games have high requirements for an interactive experience and need to tackle the following core challenges to implement voice technologies: sense of direction, immersive experience, cross-platform compatibility, and barrier-free multilingual communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;•Sense of Voice Direction&lt;/strong&gt;&lt;br&gt;
In interaction-intensive social gaming, the most important interaction method is game voice. When people are talking in the real world, the voice direction and distance also convey a large amount of information in addition to the volume level and tone. How to enable players to communicate like in the real world and how to convey the directional information in the game voice are top priorities for developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;•Immersive Voice Experience&lt;/strong&gt;&lt;br&gt;
In addition to the voice direction and distance, the voice of people in the real world also integrates with the environment. When people are talking, they can perceive effects such as reverb and diffraction of their voice generated in the environment. How to integrate the voice with the environment to maximize the real immersive experience for players is also a major challenge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;•Cross-Platform Compatibility&lt;/strong&gt;&lt;br&gt;
Players log in to a game from different terminals and devices. How to implement smooth game voice, make the game compatible with tens of thousands of device models available on the market, and enable players on game consoles, mobile devices, and PCs to talk with one another are major challenges for developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;•Barrier-Free Communication&lt;/strong&gt;&lt;br&gt;
Metaverse games allow players from different cultures and languages to have fun in an open metaverse and even switch their accents like Millie in Free Guy. Helping players who speak different languages communicate without barriers places higher requirements on games.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. GME Empowers Games with Boundless Imagination of Metaverse&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;GME 3D Position-Based Voice&lt;/strong&gt;&lt;br&gt;
3D voice conveys direction and position information to make the voice more stereo. In battle royale and FPS games with an ever-changing battle situation, voice-based position identification greatly improves players' communication efficiency during multi-player team battles. In social games such as Werewolf, the sense of voice direction gives players a more truly interactive experience and enhances their memory even in roundtable discussions with strangers.&lt;/p&gt;

&lt;p&gt;By adopting HRTF and distance-based equalization technologies, GME's unique realistic 3D sound effect can completely restore the position details of voice and virtualize the auditory perception of the sound source in any position in a space. This enables players to identify teammates' positions in game battles based on their voice and enjoy an immersive gaming experience.&lt;/p&gt;
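&lt;p&gt;Two of the coarsest positional cues, distance rolloff and left/right panning, can be sketched in a few lines. Real HRTF rendering, as used by GME, applies per-ear filters and is far more accurate; this toy model is only illustrative:&lt;/p&gt;

```python
import math

def stereo_gains(listener, source, ref_dist=1.0):
    """Toy distance attenuation plus constant-power panning on a 2D plane.

    Returns (left_gain, right_gain) for a mono source; real 3D voice uses
    HRTF filtering per ear rather than simple gain scaling.
    """
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    dist = max(math.hypot(dx, dy), ref_dist)
    attenuation = ref_dist / dist              # inverse-distance rolloff
    azimuth = math.atan2(dx, dy)               # 0 = straight ahead
    pan = (azimuth / math.pi + 1) / 2          # 0 = hard left, 1 = hard right
    left = attenuation * math.cos(pan * math.pi / 2)
    right = attenuation * math.sin(pan * math.pi / 2)
    return left, right
```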

&lt;p&gt;The 3D position-based sound effect is also available for different types of games, including MOBA, FPS, ARPG, Werewolf, space Werewolf, and board games.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Immersive Experience: Real-Time Sound Effect Processing by Wwise + GME&lt;/strong&gt;&lt;br&gt;
How to integrate the voice and game environment has always been a challenge for game audio engineers. In traditional mobile game voice solutions, audio engineers usually have to give up carefully crafted background sound effects due to the poor audio quality of players' mic.&lt;/p&gt;

&lt;p&gt;GME has developed a proprietary solution jointly with the industry-leading sound effect engine Wwise that integrates player voice seamlessly into the pipeline design of game sound effects, fundamentally solving problems that occur during volume type switching in traditional voice solutions, such as volume level jumps and audio quality degradation.&lt;/p&gt;

&lt;p&gt;Moreover, based on Wwise's powerful audio processing capabilities and rich sound effect plugins, GME can apply sound effects such as reverb, diffraction, and insulation to captured voice chat streams so that they blend perfectly with game scenes, which not only makes voice gameplay features more diverse but also makes player communication more immersive.&lt;/p&gt;

&lt;p&gt;In addition to perfect integration with the environment sound effect, the Wwise + GME solution also allows you to customize the processing of each voice stream, leaving you more room for designing diverse voice gameplay features. For example, you can design special sound effects based on players' characters and the changes in their status in game scenes, such as using a quaver to express pain after being hit by an enemy.&lt;/p&gt;

&lt;p&gt;As Wwise's only global official voice partner, GME is fully compatible with it and easy to integrate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fun: GME Voice Changing Effects&lt;/strong&gt;&lt;br&gt;
GME also provides a voice-changing feature for voice chat. During voice interaction, players can freely switch among dozens of sound effects, such as from a middle-aged man to a little girl or from a cute girl to a nerd, to add more personality to their characters and make chat more amusing. In the metaverse, players are no longer constrained by their real-world identity and can switch their tone and persona at any time.&lt;/p&gt;
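
&lt;p&gt;A toy sketch of the simplest possible voice-changing building block, pitch shifting by resampling. This is only illustrative: production voice changers such as GME's also preserve duration and formants, which this naive version does not:&lt;/p&gt;

```python
def pitch_shift(samples, semitones):
    """Naively pitch-shift a mono sample buffer by resampling.

    Raising pitch by n semitones reads the input 2**(n/12) times faster,
    so the output is shorter. Duration is not preserved; real voice
    changers add time-stretching and formant correction on top.
    """
    ratio = 2.0 ** (semitones / 12.0)
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between neighbouring samples.
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

# Shift a dummy buffer up an octave: output is roughly half as long.
shifted = pitch_shift([0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5], 12)
```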

&lt;p&gt;&lt;strong&gt;Powerful Cross-Terminal Compatibility of GME&lt;/strong&gt;&lt;br&gt;
As the only Chinese voice development tool that makes the list of third-party development tools and middleware for Nintendo Switch™, PlayStation®️4, and PlayStation®️5, GME provides SDKs for consoles and is compatible with the latest versions of all console platforms. It features deep optimizations for UE, Unity, Cocos, and other major game engines, supports macOS, Windows, iOS, and Android systems, and is adapted to 20,000+ device models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Barrier-Free Multilingual Communication&lt;/strong&gt;&lt;br&gt;
GME helps you easily implement multilingual communication scenarios. It can convert voice messages and voice chat streams to text in up to 125 languages, eliminating the language barriers in communication. It returns high-accuracy recognition results at a low latency to help implement barrier-free communication across regions and cultures in games.&lt;/p&gt;

&lt;p&gt;The metaverse is not only a popular concept in the investment and technology fields, but also a long-term vision of the game industry. GME brings a brand-new interactive voice experience to game developers and aims to continuously explore more possibilities of the metaverse together with all industries.&lt;/p&gt;

&lt;p&gt;Read more at: &lt;a href="https://www.tencentcloud.com/dynamic/blogs/sample-article/100373"&gt;https://www.tencentcloud.com/dynamic/blogs/sample-article/100373&lt;/a&gt;&lt;/p&gt;

</description>
      <category>metaverse</category>
      <category>gme</category>
      <category>voicesolution</category>
      <category>3dvoice</category>
    </item>
    <item>
      <title>Application of Media Processing Technology to 4K/8K FHD Video Processing</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Thu, 20 Oct 2022 02:39:46 +0000</pubDate>
      <link>https://dev.to/tencentcloud/application-of-media-processing-technology-to-4k8k-fhd-video-processing-4blh</link>
      <guid>https://dev.to/tencentcloud/application-of-media-processing-technology-to-4k8k-fhd-video-processing-4blh</guid>
      <description>&lt;p&gt;The support for higher video resolutions and definitions on devices has created higher demand for high definition and brought many challenges for 4K/8K videos with a super high resolution and bitrate. Today, we'll share some ideas about accelerating media digitalization through media processing capabilities.&lt;/p&gt;

&lt;p&gt;In part 1, we will talk about the features of 4K/8K FHD videos and the problems holding back their wide application. Part 2 details the optimizations we've performed on encoders to make them better suited to videos with a super high bitrate and resolution. Part 3 focuses on the architecture of the real-time 8K transcoding system for live streaming scenarios. And in the last part, we cover how to leverage media processing capabilities and image quality remastering technology to increase definition so that more FHD videos are available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MemSwBm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfzt66xh605n5l92ulem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MemSwBm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfzt66xh605n5l92ulem.png" alt="Image description" width="880" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---svHsp8b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6gbwlm1uc7w4qkw2jmog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---svHsp8b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6gbwlm1uc7w4qkw2jmog.png" alt="Image description" width="880" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4K/8K FHD videos feature super high definition, resolution, and bitrate. The latter two pose new challenges to downstream systems. In a live streaming system, video resolution and bitrate are closely related to the processing speed and performance consumption during transcoding. To support real-time 8K transcoding, both the encoding kernel and the system architecture need to be redesigned. Currently, there are many hardware solutions dedicated to real-time 4K/8K encoding, but they suffer from a poor compression rate compared with software encoding. To deliver 4K/8K definition, they require bitrates of dozens or even hundreds of megabits per second, posing a huge challenge to the entire transfer linkage and to playback devices. In addition, AR and VR, which rely heavily on video encoding and transfer, are gaining momentum. As technologies advance, FHD video will be an inevitable trend.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q5WmoKxv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/guxkt07a1s5e5esvb3zd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q5WmoKxv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/guxkt07a1s5e5esvb3zd.png" alt="Image description" width="880" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second part shares some encoding optimizations and the performance delivered by our proprietary encoders. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GedUmrkg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iphq2e30dbduchyajfs4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GedUmrkg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iphq2e30dbduchyajfs4.png" alt="Image description" width="880" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our team has independently developed encoding kernels for H.264, H.265, AV1, and the latest H.266. Proprietary encoders make it possible to design encoding features for real-world business scenarios and perform targeted optimizations. For example, during the Beijing Winter Olympics, the Tencent Cloud live streaming system sustained real-time 4K/8K encoding and compression and supported up to 120 fps for real-time encoding. To ensure real-time performance, many custom optimizations were made inside the encoder. V265, Tencent's proprietary H.265 encoder, outperforms the open-source X265 in both speed and compression rate. At the highest speed level, V265 is significantly faster than X265, delivering quick encoding at a high resolution. V265 also supports 8K/10-bit/HDR encoding. AV1 encoding is much more complicated than H.265 encoding, so for FHD implementations we've made many engineering performance optimizations. Compared with the open-source SVT-AV1, TSC (Top Speed Codec) delivers 55% performance acceleration and 16.8% compression gain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PyQtTLHF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1m3s9jw7v9bhjs8a90eh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PyQtTLHF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1m3s9jw7v9bhjs8a90eh.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To implement quick encoding of FHD videos, we have made a few optimizations. The first is to increase parallelism. The encoding process involves parallelism at the frame and macroblock levels. For real-time encoding at a high resolution, the frame architecture of the video sequence is tuned to increase inter-frame encoding parallelism; at the macroblock level, tile encoding is supported for better row-level encoding parallelism. The second relates to pre-analysis and post-processing. Encoders always run a lookahead pre-analysis before subsequent encoding operations, and this lookahead tends to limit the parallelism of the entire linkage. Therefore, the pre-analysis and post-processing algorithms are simplified to accelerate the process. After these optimizations, the encoder delivers a faster processing speed and a higher degree of parallelism.&lt;/p&gt;
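
&lt;p&gt;The tile-level direction above can be sketched as follows. This is a simplified illustration only: the tile geometry, worker count, and per-tile "encoder" are invented for the sketch, and a real encoder parallelizes actual macroblock rows inside each tile:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_tiles(width, height, tile_w, tile_h):
    """Partition a frame into independently encodable tile rectangles."""
    tiles = []
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            tiles.append((x, y, min(tile_w, width - x), min(tile_h, height - y)))
    return tiles

def encode_tile(tile):
    """Stand-in for per-tile encoding; returns the tile plus a fake bitstream size."""
    x, y, w, h = tile
    return (tile, w * h // 100)  # pretend compressed size

# An 8K frame split into 1920x1080 tiles and "encoded" in parallel.
tiles = split_into_tiles(7680, 4320, 1920, 1080)
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(encode_tile, tiles))
```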

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tcxcAMx8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oxk0nx0st9ocyfo7q97h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tcxcAMx8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oxk0nx0st9ocyfo7q97h.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Part 3 describes system architecture optimization. For live streaming scenarios, encoding kernel optimization alone is not enough to accommodate real-time 8K encoding and compression rate, which means the architecture of the entire system needs to be adjusted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t9EeRk-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8zu21m9c81omzg1ksk0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t9EeRk-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8zu21m9c81omzg1ksk0h.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is common practice to input the 8K AVS3 video source to a hardware encoder and output multiple channels of bitrate streams for delivery, such as 8K H.265, 4K H.265, 1080p H.264, and 720p H.264. This can achieve the goal, but it also has many problems. First, 8K hardware encoders are generally expensive, especially 8K/AV1 ones, for which there are fewer options. Second, hardware encoders have a poor compression rate compared with optimized software encoders, as many acceleration algorithms that do not parallelize well cannot be implemented in hardware encoding pipelines. Third, hardware encoders often have custom architectures and chips, making them unable to respond quickly to different business scenarios, so it's hard for them to meet constantly evolving business requirements. If the same encoding effect can be achieved with software encoding, both the transcoding compression rate and business flexibility can be guaranteed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BWuxs51S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rhvidqb6k3r0gzdg52tx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BWuxs51S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rhvidqb6k3r0gzdg52tx.png" alt="Image description" width="880" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To solve these problems, many adjustments are made to the architecture of the entire live streaming system. In a general live streaming system, streams are pushed to the upload access gateway, processed, transcoded, and then pushed to CDN for delivery and watching. For 8K video encoding, it's difficult for the current live stream processing linkage with only one server and one transcoding node to implement real-time software encoding. Against this backdrop, we've designed the FHD live stream processing platform.&lt;/p&gt;

&lt;p&gt;In FHD live streaming, a transcoding node performs remuxing instead of transcoding; that is, it splits a pulled source stream into TS segments and sends them as files to the video transcoding cluster. The cluster can process TS segments in parallel, implementing parallel encoding across multiple servers. Compared with the original single-linkage encoding on one server, this distributed multi-server method features pure software control and high flexibility, making both capacity expansion and business upgrades convenient. In addition, costs are reduced: the hybrid deployment of the offline transcoding and live streaming clusters allows resources to be reused across a larger scope of business, increasing resource utilization. There are shortcomings, of course. The latency is higher than in a standard transcoding process: to enable parallel transcoding, remuxing is performed before stream processing, and independent TS segments are only generated after a period of wait time, leading to a higher but acceptable latency. When downstream services use HLS for live streaming, there is no obvious change in latency.&lt;/p&gt;
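
&lt;p&gt;The remux-then-parallel-transcode flow can be sketched like this. Segment duration, worker count, and function names are illustrative assumptions; a real platform would invoke an actual encoder and distribute segments across cluster servers rather than local threads:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

SEGMENT_SECONDS = 2  # assumed TS segment length

def remux_to_segments(stream_seconds):
    """Split a pulled live stream into independent TS segment descriptors."""
    count = (stream_seconds + SEGMENT_SECONDS - 1) // SEGMENT_SECONDS
    return [{"index": i, "start": i * SEGMENT_SECONDS} for i in range(count)]

def transcode_segment(segment):
    """Stand-in for encoding one TS segment on a cluster node."""
    return {"index": segment["index"], "encoded": True}

def transcode_stream(stream_seconds, workers=4):
    segments = remux_to_segments(stream_seconds)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        done = list(pool.map(transcode_segment, segments))
    # Reassemble in segment order so the output playlist stays continuous.
    return sorted(done, key=lambda s: s["index"])

playlist = transcode_stream(10)
```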

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SbJ-VXU9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/en2azl6knuags9u89mr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SbJ-VXU9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/en2azl6knuags9u89mr1.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Live 4K/8K FHD videos are converted by the offline processing cluster into parallel, independent offline transcoding tasks. Top Speed Codec (TSC) capabilities can be used within each offline transcoding node; when transcoding is performed there, more than 50% of bandwidth can be saved at the same subjective quality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--41F7gIzm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2c8ssquvfi1xvus4758h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--41F7gIzm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2c8ssquvfi1xvus4758h.png" alt="Image description" width="880" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Compared with hardware encoders, the compression rate is improved by more than 70%. That is, with the aforementioned system solution, streaming live 4K/8K FHD videos requires only 30% of the hardware-encoding bitrate at the same image quality level, and TSC can improve subjective quality by more than 20% at the same bitrate.&lt;/p&gt;

&lt;p&gt;Inside each independent offline transcoding node along the linkage, video sources are decoded upon receipt and categorized by scene so that different encoding policies can be applied. Scene detection is then performed, including noise detection and glitch detection, to analyze the noise and glitches in the video sources for subsequent encoding optimization. Before encoding, the detected noise and glitches are removed. After the image quality remastering of the video sources, perceptual encoding analysis is performed, in which ROI areas in the image are analyzed, such as face areas and areas with complicated or simple textures. In areas with complicated textures, encoding errors are masked by the texture, so the bitrate can be reduced appropriately. In areas with simple textures, to which the human eye is sensitive, blocking artifacts have a significant impact; in this case, the control analysis of perceptual encoding, or JND capabilities, can be used. Based on the ROI and JND results, the encoder kernel can better assign bitrates to macroblocks during encoding.&lt;/p&gt;
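
&lt;p&gt;A minimal sketch of how ROI and texture analysis might translate into per-macroblock rate control. The category names and QP offsets here are invented for illustration; actual perceptual rate control is far more involved:&lt;/p&gt;

```python
# Hypothetical per-macroblock analysis label -> QP offset.
# Lower QP = more bits = higher quality for that block.
QP_OFFSETS = {
    "face": -4,             # ROI: spend extra bits on faces
    "simple_texture": -2,   # flat areas show blocking artifacts easily
    "complex_texture": +3,  # heavy texture masks errors; save bits here
    "background": 0,
}

def assign_qp(base_qp, macroblocks):
    """Apply perceptual QP offsets, clamped to the usual 0..51 H.265 QP range."""
    return [max(0, min(51, base_qp + QP_OFFSETS[mb])) for mb in macroblocks]

frame = ["background", "face", "complex_texture", "simple_texture"]
qps = assign_qp(30, frame)  # -> [30, 26, 33, 28]
```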

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BKgVXiyQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbp6cue9vvzy9rcl0rpz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BKgVXiyQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbp6cue9vvzy9rcl0rpz.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Currently, many playback devices support 4K, but not all video sources are 4K. With Tencent Cloud’s media processing capabilities, video sources can be upgraded to 4K to deliver a truly 4K viewing experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c0Zd8s5A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjcmfuotfb4inmix3v0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c0Zd8s5A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjcmfuotfb4inmix3v0g.png" alt="Image description" width="880" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A 4K FHD video is usually generated in the following steps. First, the video source is analyzed for noise, compression, and other distortion. Then, comprehensive restoration of the degraded data is performed based on the analysis result, including noise removal, texture enhancement, and artifact suppression. It is important to note that if the parts of the image to which the human eye is more sensitive, such as areas containing faces or text, are well processed, the overall viewing experience can be greatly enhanced.&lt;/p&gt;

&lt;p&gt;After detail enhancement, color correction is performed. HDR capabilities are widely used in 4K/8K videos, and SDR-to-HDR conversion can be applied to the many video sources that lack HDR to deliver a high-resolution, truly vivid 4K effect.&lt;/p&gt;
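
&lt;p&gt;The remastering steps above form a fixed-order pipeline. A schematic sketch, where the stage names are ours and each stage stands in for the corresponding model or filter:&lt;/p&gt;

```python
def remaster(frame, report):
    """Run the remastering stages in order, recording which were applied.

    `report` is the result of the initial source analysis; each stage here
    is a placeholder for a real model or filter. The point is the fixed
    ordering: analysis -> repair -> enhancement -> color.
    """
    applied = []
    if report.get("noisy"):
        applied.append("denoise")
    applied.append("texture_enhancement")
    if report.get("has_faces_or_text"):
        applied.append("region_enhancement")   # extra passes on faces/text
    applied.append("color_correction")
    if report.get("sdr"):
        applied.append("sdr_to_hdr")           # tone expansion for SDR sources
    return frame, applied

_, stages = remaster("frame", {"noisy": True, "has_faces_or_text": True, "sdr": True})
```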

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SRK5q0uE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qs8vxaqstg7le0hr7i88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRK5q0uE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qs8vxaqstg7le0hr7i88.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During video super-resolution, the ideal effect cannot be achieved with a single model. Specifically, a general model can be used for the background or the entire image, while another model handles areas with faces and text, and the two models' outputs are combined to deliver the final enhancement. Because facial features are relatively fixed and provide ample prior information for super-resolution, dedicated effort on this area significantly improves the viewing experience.&lt;/p&gt;
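
&lt;p&gt;The two-model combination can be sketched as a mask-weighted blend. This is schematic: the "model outputs" are stand-in numbers, and a real pipeline blends per pixel over full images with feathered masks:&lt;/p&gt;

```python
def blend(general_out, face_out, face_mask):
    """Blend general-model and face-model outputs per pixel.

    face_mask holds weights in [0, 1]: 1 inside detected face/text regions,
    0 elsewhere, typically feathered at the boundary to avoid visible seams.
    """
    return [g * (1.0 - m) + f * m
            for g, f, m in zip(general_out, face_out, face_mask)]

# Four "pixels": the middle two fall inside a detected face region.
general = [0.2, 0.4, 0.6, 0.8]   # general super-resolution model output
face    = [0.0, 0.9, 0.9, 0.0]   # face-specialized model output
mask    = [0.0, 1.0, 0.5, 0.0]   # feathered face mask
result = blend(general, face, mask)  # approximately [0.2, 0.9, 0.75, 0.8]
```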

</description>
      <category>fhd</category>
      <category>encoding</category>
      <category>livestreaming</category>
      <category>superresolutio</category>
    </item>
    <item>
      <title>Next-Gen Media SDK Solution Design (TRTC)</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Tue, 18 Oct 2022 06:09:16 +0000</pubDate>
      <link>https://dev.to/tencentcloud/next-gen-media-sdk-solution-design-trtc-i4e</link>
      <guid>https://dev.to/tencentcloud/next-gen-media-sdk-solution-design-trtc-i4e</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Immersive Convergence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.1 Higher definition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---nJqKQG0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hc3f0b2c1nam9llj6er4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---nJqKQG0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hc3f0b2c1nam9llj6er4.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;According to statistics from Tencent Cloud, the average bitrate of internet streaming media played on PCs, tablets, mobile phones, and other terminals has been increasing since H1 2018. As people demand higher definition, compression efficiency has improved alongside the rising bitrate, driven by the evolution from H.264 and H.265 to the recent H.266, which, with its over 100 technical proposals, delivers a roughly 50% higher compression rate than H.265.&lt;/p&gt;
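
&lt;p&gt;Taking the cited figures at face value, each codec generation roughly halves the bitrate needed for the same quality. A back-of-the-envelope sketch (the 50% per-generation figure is from the text above; the 20 Mbps starting bitrate is an arbitrary example):&lt;/p&gt;

```python
def bitrate_after_upgrades(h264_bitrate_mbps, generations):
    """Halve the bitrate once per codec generation (H.264 -> H.265 -> H.266)."""
    return h264_bitrate_mbps * (0.5 ** generations)

h264 = 20.0                                 # example H.264 bitrate in Mbps
h265 = bitrate_after_upgrades(h264, 1)      # -> 10.0 Mbps at similar quality
h266 = bitrate_after_upgrades(h264, 2)      # -> 5.0 Mbps at similar quality
```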

&lt;p&gt;&lt;strong&gt;1.2 Stronger immersiveness&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rtftHwdG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kuv3gnwfqqmofjhjazin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rtftHwdG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kuv3gnwfqqmofjhjazin.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Advances have been made in the immersive experience of many applications, such as 3D guides, 3D modeling, AR/VR games, and multi-angle sports viewing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.3 Enhanced interaction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lkmsqfPS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fm3xky5pidav7pr70fk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lkmsqfPS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fm3xky5pidav7pr70fk.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Real-time interaction has become stronger. For example, face point cloud data can be collected on a mobile phone and then relayed through the cloud back to audience members' devices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.4 Lower latency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xlz2PArq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pnmq94ra9wo7i04n857d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xlz2PArq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pnmq94ra9wo7i04n857d.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Latency has seen the greatest improvement. A few years ago, latency on webpages was measured in seconds; now it is measured in milliseconds, low enough for users to sing duets together in live rooms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.5 Four elements of the all-true internet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QyAKZFOh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ja2s05z88bf1221bgl11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QyAKZFOh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ja2s05z88bf1221bgl11.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The all-true internet features higher definition, enhanced interaction, stronger immersiveness, and lower latency. But achieving this entails challenges and unavoidable difficulties both in the cloud and on the terminal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Technical Challenges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's take a look at the challenges and how to overcome them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.1 Challenge 1: RT-Cube™ architecture design&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8twtaRdL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2l53as4w6ivtk7g13k97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8twtaRdL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2l53as4w6ivtk7g13k97.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's hard to coordinate internal modules no matter what you are working on, from an operating system to something smaller like an SDK. An SDK has many modules. The image shows a simplified version of the SDK module architecture, but you can still imagine the large number of modules that are actually involved. The bottom-left corner shows audio/video engine modules, the bottom-right corner TIM modules, and the top TUI components. When multiple modules are working together, they tend to scramble for CPU resources and encounter other conflicts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tgeQiN4G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67bjhy5ib3t8c8jeo53x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tgeQiN4G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67bjhy5ib3t8c8jeo53x.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above depicts the architecture design of the audio/video engine in RT-Cube™, which consists of many core modules with their respective submodules. There is a great deal of data communication and control logic between those modules. When the system runs stably, everything works in unison. However, if the CPU frequency drops or memory becomes insufficient, competition between modules can quickly cause the entire system to crash. Therefore, a central control module is adopted to monitor and coordinate the modules in real time and take intervention measures when necessary, preventing an avalanche.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.2 Challenge 2: RT-Cube™ version management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The second challenge relates to versioning. Although we offer many features, not all of them are needed by each customer. When they are packaged into different combinations, we need to manage a larger number of versions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--frtNy2bu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fqnlb5g97ger2hr4dpi1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--frtNy2bu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fqnlb5g97ger2hr4dpi1.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If an SDK offers nine features, there are 510 possible combinations, which translates into 510 * 4 = 2,040 versions in total on four platforms.&lt;/p&gt;
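
&lt;p&gt;A quick check of the combinatorics. This assumes the 510 figure counts every feature subset except the empty set and the full nine-feature bundle, i.e. 2^9 - 2; that reading is ours:&lt;/p&gt;

```python
from itertools import combinations

FEATURES = 9
PLATFORMS = 4

# Count feature subsets explicitly, excluding "no features" (k = 0)
# and "all features" (k = FEATURES).
subsets = sum(
    1
    for k in range(1, FEATURES)
    for _ in combinations(range(FEATURES), k)
)
versions = subsets * PLATFORMS  # one build per subset per platform
```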

&lt;p&gt;Traditional build tools such as Xcode and Android Studio are no longer sufficient. A new platform with its own compilation solution is needed to output SDKs for the different platforms and allow features to be freely combined in different versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.3 Challenge 3: RT-Cube™ quality monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CsVYAjhc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k2ddvbd7ga8k4ozxcy0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CsVYAjhc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k2ddvbd7ga8k4ozxcy0c.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The third challenge is quality monitoring. Imagine that six users are watching a live stream or attending a video conference. Over a period of 20 minutes, one of them experiences 10 seconds of lag while the others experience none. Measured against total watch time, the lag rate is 0.13%, which fails to reflect the poor experience of that 10-second lag. If the rate is instead computed as the percentage of users who experienced lag, the value is 16.7%. Poor-experience data should therefore be the focus of monitoring. To keep it from being obscured in aggregate reports, we keep the infrastructure unchanged and have a data packet that includes lag, size, blur, and acoustic echo reported every day. The algorithm is refined around per-user metrics so that poor experiences surface, and the results are used to determine the number of affected users, the percentage increase or decrease, and the cause. That's how we find ways to improve.&lt;/p&gt;
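
&lt;p&gt;The two ways of computing the lag rate in the example can be reproduced directly (six viewers, a 20-minute session, one viewer with 10 seconds of lag, as in the text):&lt;/p&gt;

```python
def time_based_lag_rate(lag_seconds, users, session_seconds):
    """Lag time as a share of all watch time across all users."""
    return sum(lag_seconds) / (users * session_seconds)

def user_based_lag_rate(lag_seconds, users, threshold=0.0):
    """Share of users who experienced any lag above the threshold."""
    return sum(1 for s in lag_seconds if s > threshold) / users

lags = [10, 0, 0, 0, 0, 0]        # one of six users lags for 10 seconds
session = 20 * 60                 # 20-minute session, in seconds

time_rate = time_based_lag_rate(lags, 6, session)  # ~0.0014, the "0.13%"
user_rate = user_based_lag_rate(lags, 6)           # ~0.167, the "16.7%"
```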

&lt;p&gt;&lt;strong&gt;2.4 Challenge 4: Module communication efficiency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A4nVAXF3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wlrakp7ihb48dnrur0e6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A4nVAXF3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wlrakp7ihb48dnrur0e6.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fourth challenge is the efficiency of communication between modules.&lt;/p&gt;

&lt;p&gt;This problem is common in games. Many enterprises unify their backend systems using SDP standards and microservice languages, but the iOS, Android, and Windows platforms cannot be normalized simply by writing C++. Texture image formats are handled differently on iOS, on Android, and on Windows with D3D. When C++ is used across all of them, the data must be passed through binary buffers, and a great deal of unification work has been done to ensure consistent data performance across the different platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Optimization and Improvement&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Having discussed the challenges and solutions, we move on to the optimizations and improvements made in the six months to a year following the completion of the infrastructure upgrade.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.1 Improvement 1: Audio module optimization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.1.1 Feature&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6FXvB0FF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qoegvwflwz84bvkoyucp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6FXvB0FF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qoegvwflwz84bvkoyucp.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the upgraded architecture, audio/video modules on the new version support many new capabilities, such as full-band audio, 3D audio effects, noise reduction based on deep learning and AI, and source and channel resistance. These capabilities enable many more challenging real-time interaction scenarios, for example, live duets, which are highly sensitive to audio/video communication latency. In live music scenarios, music modes are optimized to restore signals as faithfully as possible and achieve the highest possible resolution. In addition, a number of big data analysis methods are leveraged for targeted monitoring and real-time analysis of sound problems, constantly reducing the failure and complaint rates by improving audio quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.1.2 Use&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wglaILZC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lcmpnxrjwp2eojd0wgn4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wglaILZC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lcmpnxrjwp2eojd0wgn4.png" alt="Image description" width="880" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Audio modes are more diversified to make the product user-friendly. The speech mode is for conference communication, the default mode applies to most scenarios and can be enabled if you are not sure which mode is better, and the music mode is available for music listening. All the parameters can be customized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.2 Improvement 2: Video module optimization - effect&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--STA8lUi5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pgxwxdjz54rrfhjhd49t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--STA8lUi5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pgxwxdjz54rrfhjhd49t.png" alt="Image description" width="880" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The video module is improved on the whole. Specifically, algorithms are improved for BT.601 and BT.709 color spaces, and BT.2020 and other HDR color spaces are supported. This makes images brighter. Targeted optimizations are also made to enhance the SDK definition without compromising the bitrate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.3 Improvement 3: Network module optimization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.3.1 Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dABaJjch--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xhg0mx227ar1l5iwxvs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dABaJjch--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xhg0mx227ar1l5iwxvs.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Last but not least is the network module with our core technology used to implement stream control and overall reconstruction. As shown above, the cloud and terminal are integrated into a system with coordinated modules. Several data-driven optimizations are performed on the central control module.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.3.2 Stream push&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I6nZn7gQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amkr8affu88ztzgowbp0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I6nZn7gQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amkr8affu88ztzgowbp0.png" alt="Image description" width="880" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a more detailed part of the network module for two scenarios: live streaming and communication. For live streaming, the upstream algorithm is mainly used for ensuring definition and smoothness. For RTC communication, such as Tencent Meeting or VooV Meeting, the focus is on real-timeness and smoothness to eliminate high latency and lag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.3.3 Playback&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--InBVL8Ow--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4c4dg24ndxebaj47xdg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--InBVL8Ow--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4c4dg24ndxebaj47xdg5.png" alt="Image description" width="880" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tencent Cloud delivers industry-leading playback performance in live streaming scenarios. It has a competitive CDN and has been constantly expanding into new scenarios such as LEB. Besides standard browsers, LEB can use the SDK to deliver more formats and better effects at a latency of about one second, far better than what browsers achieve in demanding scenarios. In chat scenarios that require even lower latency and stronger interaction, the focus is on making mic-on/off transitions smooth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.4 Improvement 4: TUI component library&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vH34CMRB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s020alrprdqexl1ib7u1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vH34CMRB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s020alrprdqexl1ib7u1.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The TUI component library is also upgraded and completed. Instead of keeping hundreds of APIs of professional PaaS components and putting up with an unsatisfactory final product, you can import the TUI library for each platform in a few minutes and with a few lines of code. You can build a proper UI similar to those shown above within hours, even if you have never tried it before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Summary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P-J8jS8T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afs7xc7hcvs7en71gxtz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P-J8jS8T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afs7xc7hcvs7en71gxtz.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've talked about the systematic design of component integration, where one plus one equals more than two.&lt;/p&gt;

&lt;p&gt;In the cloud, we've successfully integrated three networks: the TRTC network, the IM network, and the CDN network.&lt;/p&gt;

&lt;p&gt;On the terminal, existing features are continuously optimized in terms of stability and performance. For example, the squeeze theorem is applied in more scenarios and big data analysis cases to make the RTC SDK a leader in the industry in every respect. In addition, the LEB SDK and IM SDK with a new kernel will be integrated into the system to contribute to a powerful RT-Cube™ Media SDK architecture.&lt;/p&gt;

&lt;p&gt;Thanks to the TUI component library with ready-to-use UI output, a strong and easy-to-use PaaS system is in place to offer more basic capability components for the all-true internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y8BAfvLO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lzm72gn13yi74odm2qpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y8BAfvLO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lzm72gn13yi74odm2qpp.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The RT-Cube™ Media SDK can be downloaded from the website as shown above. Currently, common versions are available, and custom capabilities will come online as the compilation system becomes more robust. You can freely combine different features to get the desired version.&lt;/p&gt;

</description>
      <category>sdk</category>
      <category>trtc</category>
      <category>rtcube</category>
      <category>cloud</category>
    </item>
    <item>
      <title>GME 3D Voice Technology: High-Precision HRTF + Distance Attenuation Model</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Thu, 13 Oct 2022 02:45:33 +0000</pubDate>
      <link>https://dev.to/tencentcloud/gme-3d-voice-technology-high-precision-hrtf-distance-attenuation-model-oa4</link>
      <guid>https://dev.to/tencentcloud/gme-3d-voice-technology-high-precision-hrtf-distance-attenuation-model-oa4</guid>
      <description>&lt;p&gt;3D voice provides more auditory information for players to help them identify the positions of their teammates/enemies through voice and feel their presence much like in the physical world. This makes the gaming experience more convenient and fun.&lt;/p&gt;

&lt;p&gt;Many game developers may ask: How does 3D voice work? How do I add it to my games? Below is a quick guide to 3D voice technology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. How do we determine sound source positions?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We can determine the position of a sound source mainly because the sound reaches the left and right ears at different times, and the strengths and other metrics are different, too. Specifically, we identify the horizontal position based on the differences in time, sound level, and timbre between binaural signals. The auricle acts as a comb filter to help identify the vertical position of a compound sound source. Sound localization also depends on such factors as sound level, spectrum, and personal experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. How are the voice positions of players simulated? How does Tencent Cloud GME work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A head-related transfer function (HRTF) is needed to do so. It can be regarded as a comprehensive filtering process where sound signals travel from the sound source to both ears. The process includes air filtering, reverb in the ambient environment, scattering and reflection on the human body (such as torso, head, and auricle), etc.&lt;/p&gt;

&lt;p&gt;The implementation of the real-time 3D virtualization feature for voice is not merely about calling the HRTF. It also entails mapping the virtual space in the game to the real-life environment and performing high-frequency operations. The implementation process is summarized as follows. Assume there are N players connecting to the mic in a game. Given the high requirements for real-timeness in gaming, each player's terminal should receive at least (N-1) packets containing voice information and relative position information within a unit time of 20 ms in order to ensure a smooth gaming experience. Based on the relative position information, the high-precision HRTF model in the 3D audio algorithm is used to process the voice information, coupled with the information about the presence of obstacles in the way, ambient sounds in the game (such as the sound of running water and echo in a room), etc. In this way, realistic real-time 3D sound is rendered on the players' devices.&lt;/p&gt;
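&lt;p&gt;The per-client packet load implied by the 20 ms window can be estimated as follows; the room sizes below are arbitrary examples, not figures from the article:&lt;/p&gt;

```python
# Each client must receive at least (N - 1) packets of voice plus relative
# position information within every 20 ms window.
def packets_per_second(n_players, window_ms=20):
    windows_per_second = 1000 // window_ms   # 50 windows per second at 20 ms
    return (n_players - 1) * windows_per_second

for n in (4, 8, 16):
    print(n, packets_per_second(n))  # grows linearly with room size
```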

&lt;p&gt;The entire process is compute-intensive, and some low/mid-end devices may be unable to handle it. How to minimize resource usage on the players' devices while ensuring a smooth gaming experience remains an industry challenge. In addition, some HRTF libraries can result in serious attenuation for some frequencies in audio signals, most notably the musical instrument sounds with diverse frequency components. This not only affects the accuracy of sound localization but also dulls the instrument sounds in the output ambient sounds.&lt;/p&gt;

&lt;p&gt;Tencent Cloud Game Multimedia Engine (GME) launched the 3D voice feature in partnership with Tencent Ethereal Audio Lab, a top-notch audio technology team. Through the high-precision HRTF model and the distance attenuation model, the feature gives players a highly immersive gaming experience in the virtual world. Thanks to optimized terminal rendering algorithms, the computing efficiency increases by nearly 50%, and the real-time spatial rendering time of a single sound source is around 0.5 ms, so that most low/mid-end devices can sustain real-time 3D sound rendering. To address the problem of signal attenuation in the rendering process, GME improves the 3D rendering effect through its proprietary audio signal equalization techniques, making ambient sounds crystal clear.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. How do we integrate 3D voice?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are two 3D voice integration methods available. You can choose a suitable method based on the characteristics of your game.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 1:&lt;/strong&gt; For non-VR games&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;p&gt;As the implementation of 3D voice requires calculations based on the positions and distances of sound sources, position coordinates are needed as key data in order to achieve 3D sound effects. Based on the coordinates, we can identify the position in the virtual space, calculate the distance from the sound source, and get the position information.&lt;/p&gt;

&lt;p&gt;GME has streamlined the overall integration process. You only need to transfer the local coordinate information and position information to GME through the API. Then, GME will aggregate the data and calculate the coordinate information and position information of everyone in the room to get the 3D voice information.&lt;/p&gt;

&lt;p&gt;At this point we have the position information of each speaker in the room in the virtual world. To achieve a 3D sound effect, 3D sound needs to be created: the position information travels together with the audio streams to the voice-receiving client. Without position information, the sound would be played back flat, just like in an ordinary phone call or conference call. With position information and GME's local 3D voice model engine, a 3D sound effect can be achieved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration steps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;p&gt;The "EnterRoom" API has been called, and the result in the room entry callback is successful room entry.&lt;/p&gt;

&lt;p&gt;On the premise of successful connection to the voice chat service, you can integrate 3D voice as instructed below:&lt;/p&gt;

&lt;p&gt;Call "InitSpatializer" to initialize the 3D sound effect engine.&lt;/p&gt;

&lt;p&gt;Call "EnableSpatializer" to enable 3D voice.&lt;/p&gt;

&lt;p&gt;Call "UpdateAudioRecvRange" to set the attenuation range.&lt;/p&gt;

&lt;p&gt;Call "UpdateSelfPosition" to update the position information in real time.&lt;/p&gt;

&lt;p&gt;Integration Guide: &lt;a href="https://cloud.tencent.com/document/product/607/18218"&gt;https://cloud.tencent.com/document/product/607/18218&lt;/a&gt;&lt;/p&gt;
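&lt;p&gt;The four steps above can be sketched as an ordered call sequence. The stub class below is written purely for illustration: the method names mirror the documented API strings ("InitSpatializer", "EnableSpatializer", etc.), but the real GME SDK is a native library whose actual signatures and parameters are given in the integration guide, not here:&lt;/p&gt;

```python
class GmeRoomStub:
    """Illustrative stand-in for a GME room handle; records call order."""

    def __init__(self):
        self.calls = []

    def init_spatializer(self):
        self.calls.append("InitSpatializer")

    def enable_spatializer(self, enabled=True):
        self.calls.append("EnableSpatializer")

    def update_audio_recv_range(self, recv_range):
        self.calls.append("UpdateAudioRecvRange")

    def update_self_position(self, position, forward, up):
        self.calls.append("UpdateSelfPosition")


def setup_3d_voice(room, recv_range, get_player_pose):
    """Run the documented 3D-voice setup sequence after successful room entry."""
    room.init_spatializer()              # 1. initialize the 3D sound engine
    room.enable_spatializer(True)        # 2. enable 3D voice
    room.update_audio_recv_range(recv_range)  # 3. set the attenuation range
    # 4. in a real game this call repeats every frame with fresh coordinates
    position, forward, up = get_player_pose()
    room.update_self_position(position, forward, up)
    return room.calls
```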

&lt;p&gt;&lt;strong&gt;Method 2:&lt;/strong&gt; For VR games&lt;/p&gt;

&lt;p&gt;There is a dedicated integration method for VR games. As we have noticed, VR device users have high requirements for the refresh rate, sound responsiveness, and spatial perception of sound. In VR gaming scenarios that emphasize real-time interactions and deep immersion, a premium low-latency 3D voice experience is of paramount importance. However, the traditional RTC voice call and 3D voice solutions in the market fall short of players' expectations of accuracy, real-timeness, etc.&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;p&gt;We have further optimized the 3D voice feature for the GME SDK 2.9.2. You can directly call the 3D audio model to pass in the 3D position information in real time and therefore achieve a real-time 3D sound effect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration steps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;p&gt;The "EnterRoom" API has been called, and the result in the room entry callback is successful room entry.&lt;/p&gt;

&lt;p&gt;On the premise of successful connection to the voice chat service, you can integrate 3D voice as instructed below:&lt;/p&gt;

&lt;p&gt;Call "InitSpatializer" to initialize the 3D sound effect engine.&lt;/p&gt;

&lt;p&gt;Call "EnableSpatializer" to enable 3D voice.&lt;/p&gt;

&lt;p&gt;Call "UpdateAudioRecvRange" to set the attenuation range.&lt;/p&gt;

&lt;p&gt;Call "UpdateSelfPosition" to update the position information in real time.&lt;/p&gt;

&lt;p&gt;Call "UpdateOtherPosition" to update in real time the position information of others in the room (which can be obtained at the business layer).&lt;/p&gt;

&lt;p&gt;Read more at: &lt;a href="https://www.tencentcloud.com/dynamic/blogs/sample-article/100365"&gt;https://www.tencentcloud.com/dynamic/blogs/sample-article/100365&lt;/a&gt;&lt;/p&gt;

</description>
      <category>3dvoice</category>
      <category>tutorial</category>
      <category>hrtf</category>
      <category>gme</category>
    </item>
    <item>
      <title>A Brief History of Game Voice</title>
      <dc:creator>Man yin Mandy Wong</dc:creator>
      <pubDate>Tue, 11 Oct 2022 03:42:32 +0000</pubDate>
      <link>https://dev.to/tencentcloud/a-brief-history-of-game-voice-2kh8</link>
      <guid>https://dev.to/tencentcloud/a-brief-history-of-game-voice-2kh8</guid>
      <description>&lt;p&gt;&lt;strong&gt;1.Background&lt;/strong&gt;&lt;br&gt;
Game voice tools have evolved with the development of the internet. The last 20+ years have witnessed huge leaps in game voice technology, from support for a single platform to cross-platform interoperability, from one-to-one chat to interactive voice chat in a room with tens of thousands of online users, from third-party voice communication SaaS tools to PaaS SDKs, and from monotonous voice chat to immersive voice experiences.&lt;br&gt;
Game voice technology has gone through several stages, starting from the most basic voice chat to immersive voice experiences and beyond. As breakthroughs in sensors, computing power, audio algorithms, IoT, and other technologies are on the horizon, all-real voice will eventually become a reality, delivering the ultimate voice experience the metaverse demands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.Game voice v1.0: Third-party voice chat tools&lt;/strong&gt;&lt;br&gt;
At this stage, players use third-party voice chat tools to communicate with each other in the process of gaming. Whether the game itself offers a voice communication feature or not, using third-party tools allows players to quickly create chat channels and communicate with each other through voice chat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.Game voice v2.0: In-game voice&lt;/strong&gt;&lt;br&gt;
In-game voice solutions mainly take the form of game developers connecting SDKs developed by voice communication PaaS providers. The basic APIs that come with the SDKs are used to implement various in-game voice scenarios, such as channel voice between teammates (teammates can have a voice chat at any position coordinates in the game), range voice between different teams (players of different teams can hear each other only when their position coordinates in the game are within a specified range), as well as blocklist/allowlist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.Game voice v2.5: Upgraded version of in-game voice&lt;/strong&gt;&lt;br&gt;
To further improve players' game voice experiences, voice SDKs like GME offer voice processing capabilities such as voice changing and virtual 3D sound field. With these features, players can change their voice in real time based on their selected voice type, which adds fun to gaming and allows a vast design space for game voice features.&lt;br&gt;
Through the 3D virtualization technology, voice processing and gaming scenarios are combined, which, however, is limited to position and distance information in gaming scenarios. For a truly immersive experience, voice processing should cover all aspects of gaming scenarios. A voice SDK is unlikely to provide a dedicated API for every potential factor; otherwise, the SDK would be extremely complicated and bulky, and that's not really necessary. To take game voice experiences up a notch, we need a new solution, namely the immersive game voice solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.Game voice v3.0: Immersive voice&lt;/strong&gt;&lt;br&gt;
An immersive voice solution means that players' voice effects are rendered in real time based entirely on the game process. All players' voices are processed through digital signal processing (DSP) algorithms, and then played back in the headphones to simulate voice communication in real-world settings. Voice chat processed in this way can deliver a more immersive game voice experience, allowing players to communicate in a natural way.&lt;br&gt;
Then, how is an immersive voice solution implemented? As mentioned above, it is not advisable to have a single voice SDK packed with all sorts of APIs. Moreover, voice service providers are generally not experts in audio processing algorithms compared with specialist audio technology companies. Therefore, to develop an all-encompassing voice SDK is virtually unviable.&lt;br&gt;
In view of this, a combination approach will work best, just as with the Wwise + GME solution. Tencent Cloud Game Multimedia Engine (GME) is dedicated to end-to-end real-time voice communication, and the Wwise interactive audio engine is adopted by many game developers as a tool for game sound design. The Wwise plugin acts as a bridge for data interactions between GME and the Wwise engine, and GME voice streams are seamlessly connected to the Wwise audio pipeline, so Wwise's rich sound effects processing and control features can be used in voice chat. Such a design makes it possible to deliver an immersive game voice experience.&lt;br&gt;
As an interactive audio authoring tool, Wwise is generally used to create high-quality audio content for games, and GME complements Wwise in the field of game voice. Now sound engineers can also use Wwise to create immersive and interesting voice features, opening up new gameplay possibilities.&lt;br&gt;
Immersive voice, however, is definitely not the acme of game voice experiences – all-real voice takes it further.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6.Game voice v4.0: All-real voice&lt;/strong&gt;&lt;br&gt;
With the advances in AR, VR, and MR technologies, the metaverse has become a hot button topic. Many technology giants are expanding into the metaverse, which is considered the next biggest opportunity in the realm of the internet in the coming decade. The metaverse refers to a parallel virtual world that is both independent of and interconnected with the real world, where people can interact, work, and do much more realistically.&lt;br&gt;
To make virtual worlds more lifelike, software and hardware technologies need to be integrated to simulate human senses. As voice communication is an important form of human interaction, metaverse scenarios have higher requirements for voice, that is, all-real voice. Currently, the metaverse is still more of a concept than reality, and we'll see what the future holds.&lt;br&gt;
Gaming is inherently a social activity in the internet age. Although voice chat is not a core feature for most game genres, it makes gaming more enjoyable and thus increases player retention. Therefore, it has become a common feature of online games.&lt;br&gt;
Game voice technology has evolved in response to players' growing demand for better experiences and gameplay. The development of game voice technology can be divided into four stages based on the improvements in game voice experiences. As players have higher expectations of gaming experiences, voice is bound to hold greater weight in gaming.&lt;/p&gt;

&lt;p&gt;Read more at: &lt;a href="https://www.tencentcloud.com/dynamic/blogs/sample-article/100361"&gt;https://www.tencentcloud.com/dynamic/blogs/sample-article/100361&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gme</category>
      <category>gamevoice</category>
      <category>3dvoice</category>
      <category>voicechanging</category>
    </item>
  </channel>
</rss>
