<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: leo</title>
    <description>The latest articles on DEV Community by leo (@490583523leo).</description>
    <link>https://dev.to/490583523leo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F899838%2F6838f6a5-ca5b-49b6-87c6-37e66bdb90f6.png</url>
      <title>DEV Community: leo</title>
      <link>https://dev.to/490583523leo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/490583523leo"/>
    <language>en</language>
    <item>
      <title>Use of fully encrypted database</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Wed, 29 Mar 2023 04:02:47 +0000</pubDate>
      <link>https://dev.to/490583523leo/use-of-fully-encrypted-database-269c</link>
      <guid>https://dev.to/490583523leo/use-of-fully-encrypted-database-269c</guid>
      <description>&lt;p&gt;The fully encrypted database is intended to solve the problem of privacy protection throughout the data life cycle, so that no matter what business scenarios and environments the system is in, the data is always in the state of ciphertext in all aspects of transmission, calculation and storage. After the data owner completes the data encryption on the client and sends it to the server, the attacker still cannot obtain effective value information when the attacker steals user data through system vulnerabilities, thereby protecting data privacy.&lt;/p&gt;

&lt;p&gt;Since the entire business data flow remains ciphertext during processing, a fully encrypted database makes it possible to:&lt;/p&gt;

&lt;p&gt;Protect the privacy and security of data throughout its life cycle in the cloud: no matter what state the data is in, an attacker cannot obtain valid information from the database server.&lt;br&gt;
Help cloud service providers gain third-party trust: whether for business and operation-and-maintenance administrators in enterprise service scenarios or application developers in consumer cloud services, users keep the keys in their own hands, so even highly privileged users cannot obtain valid data.&lt;br&gt;
Help cloud database services better comply with laws and regulations on personal privacy protection.&lt;br&gt;
The fully encrypted database currently supports two connection methods: gsql and JDBC. Using the database under each connection mode is described in detail below.&lt;/p&gt;

&lt;p&gt;Connect to the fully encrypted database&lt;br&gt;
Connect to the database with gsql and enable the encrypted-database switch:&lt;/p&gt;

&lt;p&gt;gsql -p PORT -d postgres -r -C&lt;br&gt;
Parameter Description:&lt;/p&gt;

&lt;p&gt;-p: port number&lt;br&gt;
-d: database name&lt;br&gt;
-C: enables the encrypted-database switch.&lt;br&gt;
JDBC also supports encrypted-database operations; set enable_ce=1 in the connection parameters.&lt;/p&gt;

&lt;p&gt;Create user keys&lt;br&gt;
The fully encrypted database uses two kinds of keys: the client master key (CMK) and the column encryption key (CEK). The CMK encrypts CEKs, and CEKs encrypt user data.&lt;/p&gt;

&lt;p&gt;The order and dependencies of key creation are: Create CMK &amp;gt; Create CEK.&lt;/p&gt;
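This two-level hierarchy is the classic envelope-encryption pattern. A minimal sketch in Java, using the JDK's standard AESWrap cipher for the wrapping step (openGauss's localkms actually uses RSA or SM2 for the CMK, so this illustrates the pattern only, not the product's implementation):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class EnvelopeDemo {
    // Two-level key hierarchy: the CMK wraps the CEK, and the CEK encrypts data.
    public static String roundTrip(String plaintext) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey cmk = kg.generateKey();   // client master key
        SecretKey cek = kg.generateKey();   // column encryption key

        // The CMK encrypts (wraps) the CEK.
        Cipher wrap = Cipher.getInstance("AESWrap");
        wrap.init(Cipher.WRAP_MODE, cmk);
        byte[] wrappedCek = wrap.wrap(cek);

        // The CEK encrypts the user data.
        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, cek);
        byte[] ciphertext = enc.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // Only the wrapped CEK and the ciphertext leave the client; to read the
        // data back, unwrap the CEK with the CMK and then decrypt.
        Cipher unwrap = Cipher.getInstance("AESWrap");
        unwrap.init(Cipher.UNWRAP_MODE, cmk);
        SecretKey recovered = (SecretKey) unwrap.unwrap(wrappedCek, "AES", Cipher.SECRET_KEY);

        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, recovered);
        return new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8);
    }
}
```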

&lt;p&gt;Create CMK and CEK in GSQL environment:&lt;/p&gt;

&lt;p&gt;【Create CMK】&lt;/p&gt;

&lt;p&gt;CREATE CLIENT MASTER KEY client_master_key_name WITH (KEY_STORE = key_store_name, KEY_PATH = "key_path_value", ALGORITHM = algorithm_type);&lt;br&gt;
Parameter Description:&lt;/p&gt;

&lt;p&gt;client_master_key_name&lt;/p&gt;

&lt;p&gt;This parameter is the key object's name; it must be unique within the same namespace.&lt;/p&gt;

&lt;p&gt;Value range: string, which must conform to the naming convention of identifiers.&lt;/p&gt;

&lt;p&gt;KEY_STORE&lt;/p&gt;

&lt;p&gt;Specifies the key tool or component that manages the CMK. Value: currently only localkms is supported.&lt;/p&gt;

&lt;p&gt;KEY_PATH&lt;/p&gt;

&lt;p&gt;A KEY_STORE manages multiple CMKs; the KEY_PATH option uniquely identifies a CMK within the KEY_STORE. Example value: "key_path_value".&lt;/p&gt;

&lt;p&gt;ALGORITHM&lt;/p&gt;

&lt;p&gt;Specifies the encryption algorithm the CMK uses to encrypt the column encryption keys created under it. Value range: RSA_2048, RSA_3072, and SM2.&lt;/p&gt;

&lt;p&gt;Note: Key storage path: by default, localkms generates, reads, and deletes key files under the $LOCALKMS_FILE_PATH path, and users can configure this environment variable manually, although doing so is not required. If reading $LOCALKMS_FILE_PATH fails, localkms falls back to the $GAUSSHOME/etc/localkms/ path; if that path exists, it is used as the key storage path. Key-related file names: when the CREATE CMK syntax is used, localkms creates four key-related files. For example, with KEY_PATH = "key_path_value", the four files are named key_path_value.pub, key_path_value.pub.rand, key_path_value.priv, and key_path_value.priv.rand. To create these files successfully, make sure no files with the same names already exist under the key storage path.&lt;/p&gt;
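The file-naming rule in the note can be stated as a tiny helper; this class is purely illustrative and not part of localkms:

```java
public class LocalKmsFiles {
    // Derives the four key-file names localkms creates for a given KEY_PATH
    // (naming rule taken from the note above).
    public static String[] keyFiles(String keyPath) {
        return new String[] {
            keyPath + ".pub",       // public key
            keyPath + ".pub.rand",  // random material for the public key file
            keyPath + ".priv",      // private key
            keyPath + ".priv.rand"  // random material for the private key file
        };
    }
}
```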

&lt;p&gt;【Create CEK】&lt;/p&gt;

&lt;p&gt;CREATE COLUMN ENCRYPTION KEY column_encryption_key_name WITH(CLIENT_MASTER_KEY = client_master_key_name, ALGORITHM = algorithm_type, ENCRYPTED_VALUE = encrypted_value);&lt;br&gt;
Parameter Description:&lt;/p&gt;

&lt;p&gt;column_encryption_key_name&lt;/p&gt;

&lt;p&gt;This parameter is used as the key object name, and must satisfy the naming uniqueness constraint under the same namespace.&lt;/p&gt;

&lt;p&gt;Value range: string, which must conform to the naming convention of identifiers.&lt;/p&gt;

&lt;p&gt;CLIENT_MASTER_KEY&lt;/p&gt;

&lt;p&gt;Specifies the CMK used to encrypt this CEK.&lt;/p&gt;

&lt;p&gt;The value is: CMK object name, the CMK object is created by CREATE CLIENT MASTER KEY syntax.&lt;/p&gt;

&lt;p&gt;ALGORITHM&lt;/p&gt;

&lt;p&gt;Specifies the encryption algorithm the CEK uses to encrypt data.&lt;/p&gt;

&lt;p&gt;Value range: AEAD_AES_256_CBC_HMAC_SHA256, AEAD_AES_128_CBC_HMAC_SHA256, and SM4_SM3.&lt;/p&gt;

&lt;p&gt;ENCRYPTED_VALUE (optional)&lt;/p&gt;

&lt;p&gt;This value is the key password specified by the user; its length ranges from 28 to 256 characters. A 28-character password derives a key whose security strength meets AES-128; for AES-256 strength, the password must be at least 39 characters long. If it is not specified, a 256-bit key is generated automatically.&lt;/p&gt;

&lt;p&gt;Note: Since SM2, SM3, and SM4 are Chinese national cryptography standard algorithms, they must be used together to avoid legal risks. If the CMK was created with the SM2 algorithm, the SM4_SM3 algorithm must be specified when creating the CEK.&lt;/p&gt;

&lt;p&gt;【Example: In the GSQL environment:】&lt;/p&gt;

&lt;p&gt;-- Create the $GAUSSHOME/etc/localkms/ path&lt;br&gt;
mkdir -p $GAUSSHOME/etc/localkms/&lt;br&gt;
-- As a privileged account, create an ordinary user alice.&lt;br&gt;
 openGauss=# CREATE USER alice PASSWORD '********'; &lt;br&gt;
-- Connect to the encrypted database as user alice and execute this syntax.&lt;br&gt;
 gsql -p 57101 postgres -U alice -r -C &lt;br&gt;
-- Create the client master key (CMK) object.&lt;br&gt;
 openGauss=&amp;gt; CREATE CLIENT MASTER KEY alice_cmk WITH (KEY_STORE = localkms , KEY_PATH = "key_path_value", ALGORITHM = RSA_2048); &lt;br&gt;
-- Create the column encryption key (CEK) object.&lt;br&gt;
 openGauss=&amp;gt; CREATE COLUMN ENCRYPTION KEY ImgCEK WITH VALUES (CLIENT_MASTER_KEY = alice_cmk, ALGORITHM  = AEAD_AES_256_CBC_HMAC_SHA256); &lt;br&gt;
Create CMK and CEK in the JDBC environment:&lt;/p&gt;

&lt;p&gt;// Create the client master key&lt;br&gt;
 Connection conn = DriverManager.getConnection("url","user","password");&lt;br&gt;
 Statement stmt = conn.createStatement();&lt;br&gt;
 int rc = stmt.executeUpdate("CREATE CLIENT MASTER KEY ImgCMK1 WITH (KEY_STORE = localkms , KEY_PATH = \"key_path_value\" , ALGORITHM = RSA_2048);"); &lt;/p&gt;

&lt;p&gt;// Create the column encryption key&lt;br&gt;
 int rc2 = stmt.executeUpdate("CREATE COLUMN ENCRYPTION KEY ImgCEK1 WITH VALUES (CLIENT_MASTER_KEY = ImgCMK1, ALGORITHM  = AEAD_AES_256_CBC_HMAC_SHA256);");&lt;br&gt;
Create encrypted table&lt;br&gt;
After the client master key (CMK) and column encryption key (CEK) are created, the CEK can be used to create an encrypted table. Encrypted columns support random (RANDOMIZED) and deterministic (DETERMINISTIC) encryption.&lt;/p&gt;

&lt;p&gt;Create an encrypted table under the GSQL connection environment:&lt;/p&gt;

&lt;p&gt;【Example】&lt;/p&gt;

&lt;p&gt;openGauss=# CREATE TABLE creditcard_info (id_number int, name text encrypted with (column_encryption_key = ImgCEK, encryption_type = DETERMINISTIC),credit_card  varchar(19) encrypted with (column_encryption_key = ImgCEK, encryption_type = DETERMINISTIC));&lt;br&gt;
Parameter Description&lt;/p&gt;

&lt;p&gt;ENCRYPTION_TYPE specifies the encryption type in the ENCRYPTED WITH constraint; encryption_type_value is one of [ DETERMINISTIC | RANDOMIZED ].&lt;/p&gt;
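The practical difference between the two values: DETERMINISTIC produces the same ciphertext for equal plaintexts, which is what lets equality predicates run over encrypted columns, while RANDOMIZED does not. A minimal JDK sketch of the idea (the cipher modes here are chosen for illustration and are not openGauss's actual column-encryption scheme):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class EncryptionTypes {
    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        return kg.generateKey();
    }

    // No IV: identical plaintexts yield identical ciphertexts,
    // so equality comparisons work on the ciphertext.
    public static byte[] deterministic(SecretKey k, byte[] v) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, k);
        return c.doFinal(v);
    }

    // Fresh random IV each call: identical plaintexts yield different
    // ciphertexts, so no predicate can be evaluated over them.
    public static byte[] randomized(SecretKey k, byte[] v) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, k, new GCMParameterSpec(128, iv));
        return c.doFinal(v);
    }
}
```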

&lt;p&gt;Create an encrypted table in the JDBC environment:&lt;/p&gt;

&lt;p&gt;int rc3 = stmt.executeUpdate("CREATE TABLE creditcard_info (id_number    int, name  varchar(50) encrypted with (column_encryption_key = ImgCEK1, encryption_type = DETERMINISTIC),credit_card  varchar(19) encrypted with (column_encryption_key = ImgCEK1, encryption_type = DETERMINISTIC));");&lt;br&gt;
Insert data into the encrypted table and query it&lt;br&gt;
After the encrypted table is created, you can insert data into it and view the data in encrypted-database mode (connection parameter -C). In a normal session (without -C), the encrypted table cannot be operated on, and only ciphertext is visible when viewing it.&lt;/p&gt;

&lt;p&gt;Insert data into the encrypted table in the GSQL environment and view:&lt;/p&gt;

&lt;p&gt;openGauss=# INSERT INTO creditcard_info VALUES (1,'joe','6217986500001288393'); &lt;br&gt;
 INSERT 0 1 &lt;br&gt;
 openGauss=# INSERT INTO creditcard_info VALUES (2, 'joy','6219985678349800033'); &lt;br&gt;
 INSERT 0 1 &lt;br&gt;
 openGauss=# select * from creditcard_info where name = 'joe'; &lt;br&gt;
  id_number | name |     credit_card &lt;br&gt;
 -----------+------+--------------------- &lt;br&gt;
          1 | joe  | 6217986500001288393 &lt;br&gt;
 (1 row)&lt;br&gt;
Explanation: When the encrypted table's data is viewed from a non-encrypted client, it appears as ciphertext.&lt;/p&gt;

&lt;p&gt;openGauss=# select id_number,name from creditcard_info; &lt;br&gt;
  id_number |                name &lt;br&gt;
 -----------+------------------------------------------- &lt;br&gt;
          1 | \x011aefabd754ded0a536a96664790622487c4d36 &lt;br&gt;
          2 | \x011aefabd76853108eb406c0f90e7c773b71648f &lt;br&gt;
 (2 rows)&lt;br&gt;
Insert data into the encrypted table in the JDBC environment and view:&lt;/p&gt;

&lt;p&gt;// Insert data&lt;br&gt;
 int rc4 = stmt.executeUpdate("INSERT INTO creditcard_info VALUES (1,'joe','6217986500001288393');"); &lt;br&gt;
 // Query the encrypted table&lt;br&gt;
 ResultSet rs = null; &lt;br&gt;
 rs = stmt.executeQuery("select * from creditcard_info where name = 'joe';"); &lt;br&gt;
 // Close the statement object&lt;br&gt;
 stmt.close();&lt;/p&gt;

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>Parallel query technology of openGauss</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Wed, 29 Mar 2023 04:00:09 +0000</pubDate>
      <link>https://dev.to/490583523leo/parallel-query-technology-of-opengauss-1kg</link>
      <guid>https://dev.to/490583523leo/parallel-query-technology-of-opengauss-1kg</guid>
      <description>&lt;p&gt;parallel query&lt;br&gt;
The SMP parallel technology of openGauss is a technology that utilizes the multi-core CPU architecture of a computer to realize multi-thread parallel computing to make full use of CPU resources to improve query performance. In complex query scenarios, the execution of a single query is relatively long and the system concurrency is low. Using SMP parallel execution technology to achieve operator-level parallelism can effectively reduce query execution time and improve query performance and resource utilization. The overall implementation idea of ​​SMP parallel technology is to split the data into parallel query operators, start several worker threads to calculate separately, and finally summarize the results and return them to the front end. SMP parallel execution increases the data interaction operator (Stream) to realize data interaction between multiple worker threads, ensure the correctness of the query, and complete the overall query.&lt;/p&gt;

&lt;p&gt;Applicable scenarios and restrictions&lt;br&gt;
The SMP feature improves performance through operator parallelism at the cost of more system resources, including CPU, memory, and I/O. In essence, SMP trades resources for time: in suitable scenarios with sufficient resources it can deliver a clear performance gain, but in unsuitable scenarios or with insufficient resources it may instead degrade performance. The SMP feature suits analytical query scenarios, characterized by long single-query times and low business concurrency; there, SMP parallelism reduces query latency and improves system throughput. In transactional, highly concurrent business scenarios, however, the latency of each individual query is already very short, so multi-threaded parallelism adds query latency and reduces system throughput.&lt;/p&gt;

&lt;p&gt;Applicable scenarios&lt;/p&gt;

&lt;p&gt;Operators that support parallelism: The following operators support parallelism in the plan.&lt;/p&gt;

&lt;p&gt;Scan: sequential scans of row-store ordinary tables, row-store partitioned tables, column-store ordinary tables, and column-store partitioned tables.&lt;br&gt;
Join: HashJoin, NestLoop.&lt;br&gt;
Agg: HashAgg, SortAgg, PlainAgg, WindowAgg (partition by only; order by is not supported).&lt;br&gt;
Stream: Local Redistribute, Local Broadcast.&lt;br&gt;
Others: Result, Subqueryscan, Unique, Material, Setop, Append, VecToRow.&lt;br&gt;
SMP-specific operators: to implement parallelism, new Stream operators for data exchange between parallel threads were added for the SMP feature. They can be regarded as subclasses of the Stream operator.&lt;/p&gt;

&lt;p&gt;Local Gather: implements data aggregation of parallel threads within the instance.&lt;br&gt;
Local Redistribute: Data redistribution is performed according to the distribution key among the threads within the instance.&lt;br&gt;
Local Broadcast: Broadcast data to each thread inside the instance.&lt;br&gt;
Local RoundRobin: Realize data round-robin distribution among threads within the instance.&lt;br&gt;
Take the parallel plan of TPC-H Q1 as an example.&lt;/p&gt;

&lt;p&gt;In this plan, the Scan and HashAgg operators are parallelized and a Local Gather data-exchange operator is added. Operator No. 3 is the Local Gather operator; the "dop: 1/4" marked on it means that its sender side runs with a degree of parallelism of 4 while its receiver side runs with a degree of 1. In other words, the No. 4 HashAggregate operator below it executes with 4-way parallelism, the No. 1 and No. 2 operators above it execute serially, and the No. 3 operator aggregates the data of the parallel threads within the instance.&lt;/p&gt;

&lt;p&gt;The parallel state of each operator can be read from the dop information displayed on the Stream operators in the plan.&lt;/p&gt;

&lt;p&gt;Non-applicable scenarios&lt;/p&gt;

&lt;p&gt;Index scans do not support parallel execution.&lt;br&gt;
MergeJoin does not support parallel execution.&lt;br&gt;
WindowAgg with order by does not support parallel execution.&lt;br&gt;
Cursors do not support parallel execution.&lt;br&gt;
Queries within stored procedures and functions do not support parallel execution.&lt;br&gt;
Parallelism of subquery subplans and initplans is not supported, nor is parallelism of operators containing subqueries.&lt;br&gt;
Queries containing the median operation do not support parallel execution.&lt;br&gt;
Queries on global temporary tables do not support parallel execution.&lt;br&gt;
Updates to materialized views do not support parallel execution.&lt;br&gt;
Resource Impact on SMP Performance&lt;br&gt;
The SMP architecture trades surplus resources for time. After a plan is parallelized, resource consumption inevitably increases, including CPU, memory, and I/O; the higher the degree of parallelism, the greater the consumption. When any of these resources becomes a bottleneck, SMP cannot improve performance and may even degrade the overall performance of the database instance. The impact of each resource on SMP performance is described below.&lt;/p&gt;

&lt;p&gt;CPU resources&lt;/p&gt;

&lt;p&gt;In general customer scenarios, when the system CPU utilization rate is not high, using the SMP parallel architecture can make full use of system CPU resources and improve system performance. However, when the number of CPU cores of the database server is small and the CPU utilization rate is already relatively high, if SMP parallelism is enabled, not only the performance improvement will not be obvious, but the performance may deteriorate due to resource competition among multiple threads.&lt;/p&gt;

&lt;p&gt;Memory resources&lt;/p&gt;

&lt;p&gt;Parallel queries use more memory, but each operator's memory cap is still limited by parameters such as work_mem. Assuming work_mem is 4GB and the degree of parallelism is 2, the memory cap of each parallel thread is 2GB. When work_mem is small or system memory is tight, SMP parallelism may cause data to spill to disk, degrading query performance.&lt;/p&gt;
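The per-thread cap described above is simply the work_mem budget divided by the degree of parallelism; a one-line illustration:

```java
public class SmpMemoryCap {
    // Per the paragraph above: with SMP enabled, each parallel worker's memory
    // cap is the operator's work_mem budget divided by the degree of parallelism.
    public static long perThreadCapMB(long workMemMB, int dop) {
        return workMemMB / dop;
    }

    public static void main(String[] args) {
        // work_mem = 4GB (4096MB), dop = 2  ->  2048MB per thread
        System.out.println(perThreadCapMB(4096, 2) + " MB");
    }
}
```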

&lt;p&gt;I/O resources&lt;/p&gt;

&lt;p&gt;Parallel scanning inevitably increases I/O resource consumption, so it can improve scan performance only when I/O resources are sufficient.&lt;/p&gt;

&lt;p&gt;Effects of Other Factors on SMP Performance&lt;br&gt;
Besides resources, other factors also affect SMP parallel performance, such as uneven data distribution across the partitions of a partitioned table, and the system's concurrency level.&lt;/p&gt;

&lt;p&gt;Impact of Data Skew on SMP Performance&lt;/p&gt;

&lt;p&gt;Parallelism is less effective when the data is severely skewed. For example, if one value in a table's join column has far more rows than the others, then after parallelism is enabled the table's data is hash-redistributed on the join column, and the thread that receives that value gets far more data than the others. This long-tail problem makes parallel execution perform poorly.&lt;/p&gt;
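The long-tail effect can be simulated by routing rows to workers with a hash of the join key; a dominant key value piles all of its rows onto a single worker (a sketch; openGauss's actual hash function differs):

```java
import java.util.HashMap;
import java.util.Map;

public class SkewDemo {
    // Hash redistribution assigns each row to worker hash(key) % dop, so a
    // heavily repeated join-key value lands all of its rows on one worker.
    public static Map<Integer, Integer> distribute(int[] keys, int dop) {
        Map<Integer, Integer> rowsPerWorker = new HashMap<>();
        for (int k : keys) {
            int worker = Math.floorMod(Integer.hashCode(k), dop);
            rowsPerWorker.merge(worker, 1, Integer::sum);
        }
        return rowsPerWorker;
    }
}
```

With keys {7, 7, 7, 7, 7, 1, 2} and dop = 4, one worker receives five of the seven rows while the others receive one each: the long tail.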

&lt;p&gt;Influence of system concurrency on SMP performance&lt;/p&gt;

&lt;p&gt;The SMP feature increases resource usage, and in high-concurrency scenarios few resources are left over. Enabling SMP parallelism in such scenarios therefore causes severe resource contention among queries, and once contention occurs on CPU, I/O, or memory, overall performance declines. In high-concurrency scenarios, enabling SMP usually brings no performance gain and may even cause degradation.&lt;/p&gt;

&lt;p&gt;Configuration steps&lt;br&gt;
Observe the current system load. If system resources are sufficient (resource utilization below 50%), go to step 2; otherwise, stop.&lt;/p&gt;

&lt;p&gt;Set query_dop=1 (the default), use EXPLAIN to display the execution plan, and check whether the plan matches the applicable scenarios described in the applicable scenarios and restrictions. If it does, go to step 3.&lt;/p&gt;

&lt;p&gt;Set query_dop=value. Regardless of resource conditions and plan characteristics, this forces the degree of parallelism to be 1 or value.&lt;/p&gt;

&lt;p&gt;Set an appropriate query_dop value before executing a qualifying query statement, and reset query_dop after the statement finishes. For example:&lt;/p&gt;

&lt;p&gt;openGauss=# SET query_dop = 4;&lt;br&gt;
openGauss=# SELECT COUNT(*) FROM t1 GROUP BY a;&lt;br&gt;
......&lt;br&gt;
openGauss=# SET query_dop = 1;&lt;br&gt;
Note:&lt;/p&gt;

&lt;p&gt;When resources permit, a higher degree of parallelism yields a greater performance gain.&lt;br&gt;
SMP parallelism can be set at the session level. It is recommended to enable SMP before executing qualifying queries and disable it after execution, so as to avoid impacting the business during peak hours.&lt;/p&gt;

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>OpenGauss database master and backup shared storage function</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Wed, 29 Mar 2023 03:58:23 +0000</pubDate>
      <link>https://dev.to/490583523leo/opengauss-database-master-and-backup-shared-storage-function-2e4</link>
      <guid>https://dev.to/490583523leo/opengauss-database-master-and-backup-shared-storage-function-2e4</guid>
      <description>&lt;p&gt;Primary and secondary shared storage&lt;br&gt;
Feature introduction&lt;br&gt;
The feature of active-standby shared storage mainly provides the ability for the active and standby machines to share a storage, provides a new HA deployment form of active-standby shared storage, solves the problem of doubling the storage capacity of a single machine under traditional HA deployment, and satisfies the need to reduce storage capacity and cost demands. Optionally, OCK RDMA can be used to improve the real-time consistent reading capability of the standby machine.&lt;/p&gt;

&lt;p&gt;Architecture introduction&lt;br&gt;
The overall architecture diagram of the active and standby shared storage is shown below.&lt;/p&gt;

&lt;p&gt;Figure 1  Architecture diagram of active and standby shared storage&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tUpPa73q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wm9q590xej7kizbn4z1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tUpPa73q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wm9q590xej7kizbn4z1d.png" alt="Image description" width="456" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The disk array device and the UltraPath multipath software have been installed, and the disk array device is available.&lt;/p&gt;

&lt;p&gt;DSS (Distributed Storage Service)&lt;/p&gt;

&lt;p&gt;DSS is an independent process that directly manages disk-array raw devices and externally provides capabilities similar to those of a distributed file system. Through shared memory and a client API dynamic library, it gives the database the ability to create, delete, extend, shrink, read, and write files.&lt;/p&gt;

&lt;p&gt;DMS (Distributed Memory Service)&lt;/p&gt;

&lt;p&gt;DMS is a dynamic library integrated into the database. It transmits page content over a TCP/RDMA network, pools the memory of the primary and the standby, and thereby implements the standby machine's real-time consistent-read function.&lt;/p&gt;

&lt;p&gt;Primary/standby page exchange can be accelerated by RDMA, which depends on a CX5 network card and on the OCK RDMA dynamic library.&lt;/p&gt;

&lt;p&gt;Features&lt;br&gt;
The primary and standby share one copy of data, which significantly reduces the storage capacity required by traditional HA deployment.&lt;br&gt;
Log replication between primary and standby is removed and page exchange between them is added; the standby supports real-time consistent reads.&lt;br&gt;
By default, real-time page exchange between primary and standby runs over a TCP network. To reduce page-exchange latency, the OCK RDMA dynamic library can be introduced to improve the standby's real-time consistency performance.&lt;br&gt;
Applicable scenarios and restrictions&lt;br&gt;
The storage device must be a disk array whose LUNs support the SCSI-3 PR protocol (including PERSISTENT RESERVE OUT, PERSISTENT RESERVE IN, and INQUIRY), used to implement cluster I/O fencing, and also the SCSI-3 CAW (COMPARE AND WRITE) command, used to implement the shared disk lock. The Dorado 5000 V3 disk array is one such device.&lt;br&gt;
Up to eight standby machines can be deployed under openGauss.&lt;br&gt;
Because shared storage relies on distributed-file-system-like functions to implement the standby's real-time consistent reads, file metadata must change as little as possible; for performance reasons, this feature only supports segment-page tables.&lt;br&gt;
Only primary/standby deployment on the same disk array device is supported; disaster-recovery deployment and mixed deployment (for example, primary and standby on different disk array devices) are not supported.&lt;br&gt;
RDMA acceleration of primary/standby page exchange depends on a CX5 network card and on the OCK RDMA dynamic library.&lt;br&gt;
Standby rebuild, node replacement, node repair, and similar capabilities are not currently supported.&lt;br&gt;
Upgrading from a traditional HA deployment to a shared-storage deployment is not supported.&lt;/p&gt;

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>Example of manual installation of active and standby shared storage（openGauss）</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Wed, 29 Mar 2023 03:56:08 +0000</pubDate>
      <link>https://dev.to/490583523leo/example-of-manual-installation-of-active-and-standby-shared-storageopengauss-2fnb</link>
      <guid>https://dev.to/490583523leo/example-of-manual-installation-of-active-and-standby-shared-storageopengauss-2fnb</guid>
      <description>&lt;p&gt;Example of manual installation&lt;br&gt;
Compared with the traditional gs_initdb database construction, shared storage database construction divides the directory into three types, each instance is exclusive and not shared, each instance is exclusive and shared, and all instances are shared. The directories that need to be shared need to be stored on the disk array device, and the directories that are not shared should be stored on the local disk. In addition, to build a database on the standby machine, you only need to create its own directory, and you don't need to create a directory structure shared by all instances again. Added relevant GUC parameters for active and standby shared storage, and switched the system table storage mode from page to segment page.&lt;/p&gt;

&lt;p&gt;Note: Single-step manual installation here means that, after the project is compiled, a shared-storage database can be built by executing the relevant commands step by step. For the general installation of openGauss, please refer to the "Installation Guide".&lt;/p&gt;

&lt;p&gt;Preconditions&lt;/p&gt;

&lt;p&gt;The project's code has been compiled.&lt;br&gt;
The host has mounted the disk array LUN devices and installed the UltraPath multipath software, and the disk array device is available.&lt;br&gt;
Steps&lt;/p&gt;

&lt;p&gt;Create soft links for the disk array LUN device nodes and grant the corresponding user permissions (assuming the LUNs appear as /dev/sde and /dev/sdf).&lt;/p&gt;

&lt;p&gt;sudo ln -s /dev/sde /dev/tpcc_data&lt;br&gt;
sudo ln -s /dev/sdf /dev/tpcc_log&lt;br&gt;
sudo chmod 777 /dev/tpcc_data&lt;br&gt;
sudo chmod 777 /dev/tpcc_log&lt;br&gt;
Grant RAW I/O permission to the executable files that require direct disk-array access.&lt;/p&gt;

&lt;p&gt;sudo -i setcap CAP_SYS_RAWIO+ep &lt;absolute_path&gt;/perctrl&lt;br&gt;
perctrl: an executable tool used to grant read and write permissions to DSS-related tools and processes.&lt;/p&gt;

&lt;p&gt;Create the configuration files required by the DSS server process and by shared-storage database creation.&lt;/p&gt;

&lt;p&gt;Test directory (assumed to be /data/test)&lt;/p&gt;

&lt;p&gt;└─dss_home/&lt;br&gt;
├── cfg&lt;br&gt;
│   ├── dss_inst.ini&lt;br&gt;
│   └── dss_vg_conf.ini&lt;br&gt;
└── log // the log directory must exist before startup&lt;br&gt;
dss_inst.ini configuration content is as follows:&lt;/p&gt;

&lt;p&gt;INST_ID=0&lt;br&gt;
_LOG_LEVEL=55&lt;br&gt;
_LOG_BACKUP_FILE_COUNT=128&lt;br&gt;
_LOG_MAX_FILE_SIZE=20M&lt;br&gt;
LSNR_PATH=/data/test/dss_home&lt;br&gt;
STORAGE_MODE=RAID&lt;br&gt;
_SHM_KEY=12&lt;br&gt;
The parameters in the above configuration are described as follows:&lt;/p&gt;

&lt;p&gt;INST_ID configures the instance number; the value range is [0, 63], and it must differ for the dssserver process on the primary and on each standby.&lt;br&gt;
_LOG_LEVEL: log level.&lt;br&gt;
_LOG_BACKUP_FILE_COUNT: maximum number of log files to keep.&lt;br&gt;
_LOG_MAX_FILE_SIZE: maximum size of a single log file.&lt;br&gt;
LSNR_PATH: directory holding the domain socket used for communication between the DSS client and server; generally set to the home directory of the DSS server process.&lt;br&gt;
STORAGE_MODE: storage device type corresponding to DSS; for a disk array, configure RAID.&lt;br&gt;
_SHM_KEY: shared-memory key; it must differ for each DSS instance.&lt;br&gt;
dss_vg_conf.ini configuration content is as follows:&lt;/p&gt;

&lt;p&gt;data:/dev/tpcc_data&lt;br&gt;
log:/dev/tpcc_log&lt;br&gt;
This means the contents of the +data directory are stored on the /dev/tpcc_data device and the contents of the +log directory on the /dev/tpcc_log device. Note the convention: a leading '+' in the root directory name marks a path as a DSS file rather than an ordinary file-system file. Users can treat DSS as a distributed file system.&lt;/p&gt;

&lt;p&gt;Use the DSS client tool (dsscmd) to initialize the volume groups (VGs) on the disk array device (similar to initializing a file system on a raw disk).&lt;/p&gt;

&lt;h1&gt;
  
  
  Zero out the data at the beginning of the disk array LUNs
&lt;/h1&gt;

&lt;p&gt;dd if=/dev/zero bs=2048 count=100000 of=/dev/tpcc_data&lt;br&gt;
dd if=/dev/zero bs=2048 count=100000 of=/dev/tpcc_log&lt;/p&gt;

&lt;h1&gt;
  
  
  Create the VGs
&lt;/h1&gt;

&lt;p&gt;dsscmd cv -g data -v /dev/tpcc_data -s 2048 -D /data/ss_test/dss_home&lt;br&gt;
dsscmd cv -g log  -v /dev/tpcc_log -s 65536 -D /data/ss_test/dss_home&lt;/p&gt;

&lt;h1&gt;
  
  
  Start dssserver
&lt;/h1&gt;

&lt;p&gt;dssserver -D /data/ss_test/dss_home &amp;amp;&lt;br&gt;
Create a shared repository via gs_initdb.&lt;/p&gt;

&lt;p&gt;gs_initdb -D /data/ss_test/dn_primary --nodename=single_node -w Gauss_234 --vgname="+data,+log" --enable-dss --dms_url="0:127.0.0.1:1611,1:127.0.0.1:1711" -I 0 --socketpath="UDS:/data/ss_test/dss_home/.dss_unix_d_socket"&lt;br&gt;
Five new parameters are added here:&lt;/p&gt;

&lt;p&gt;--vgname: the volume group names, specifying the volume groups in which the shared repository is built. The names correspond to the configuration items in the dss_vg_conf.ini file: each must appear in that file and start with a '+' character.&lt;br&gt;
--enable-dss: indicates that the shared repository is built in DSS.&lt;br&gt;
--dms_url "0:127.0.0.1:1611,1:127.0.0.1:1711": each entry has the format instance_id:ip:port.&lt;br&gt;
-I: the instance number of the current node, in the range [0, 63].&lt;br&gt;
--socketpath: the location of the Unix domain socket used for communication between the DSS client (here, the DSS client dynamic library linked into the database executables) and the server.&lt;br&gt;
After the shared repository is built successfully, start the database process with the gs_ctl start command.&lt;/p&gt;
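The --dms_url value is a comma-separated list of instance_id:ip:port entries; a small illustrative helper (not part of openGauss) that parses it:

```python
# Parse a --dms_url style string ("instance_id:ip:port,..." entries)
# into a dict keyed by instance id. Illustrative only.
def parse_dms_url(url):
    nodes = {}
    for entry in url.split(","):
        inst_id, ip, port = entry.split(":")
        nodes[int(inst_id)] = (ip, int(port))
    return nodes

nodes = parse_dms_url("0:127.0.0.1:1611,1:127.0.0.1:1711")
# instance 0 maps to ("127.0.0.1", 1611), instance 1 to ("127.0.0.1", 1711)
```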

&lt;p&gt;gs_ctl start -D /data/ss_test/dn_primary&lt;br&gt;
Repeat the steps above to install the standby node.&lt;/p&gt;

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>Example of using OCK RDMA for primary and secondary shared storage(openGauss)</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Wed, 29 Mar 2023 03:54:48 +0000</pubDate>
      <link>https://dev.to/490583523leo/example-of-using-ock-rdma-for-primary-and-secondary-shared-storageopengauss-11cj</link>
      <guid>https://dev.to/490583523leo/example-of-using-ock-rdma-for-primary-and-secondary-shared-storageopengauss-11cj</guid>
      <description>&lt;p&gt;The shared storage feature provides the real-time consistent read function of the standby machine, and the page exchange between the active and standby machines can be accelerated through RDMA. This chapter briefly describes how to enable the RDMA communication function in shared storage.&lt;/p&gt;

&lt;p&gt;Preconditions&lt;/p&gt;

&lt;p&gt;Enabling the RDMA function depends on the CX5 network card and the OCK RPC dynamic library.&lt;/p&gt;

&lt;p&gt;Steps&lt;/p&gt;

&lt;p&gt;Get the OCK RPC package.&lt;/p&gt;

&lt;p&gt;Obtain it from the designated openGauss contact person.&lt;/p&gt;

&lt;p&gt;Deploy the OCK RPC package.&lt;/p&gt;

&lt;p&gt;Decompress the OCK RPC package.&lt;/p&gt;

&lt;p&gt;tar -zxf OCK_platform_rpc_22.0.0_EulerOS-aarch64.tar.gz&lt;br&gt;
cd OCK_platform_rpc_22.0.0_EulerOS-aarch64_release&lt;br&gt;
tar -zxf OCK_platform_rpc_EulerOS_aarch64.tar.gz&lt;br&gt;
tar -zxf OCK_platform_rpc_sbin.tar.gz&lt;br&gt;
cp rpc_sbin/ock_rpc_opengauss.sh ./&lt;br&gt;
Modify the parameters of the deployment script ock_rpc_opengauss.sh.&lt;/p&gt;

&lt;h1&gt;
  
  
  Deployment user, which is the same as the opengauss user.
&lt;/h1&gt;

&lt;h1&gt;
  
  
  It is a non-root user. Ensure that this user exists on all servers.
&lt;/h1&gt;

&lt;p&gt;USER_NAME="${USER}"&lt;/p&gt;

&lt;h1&gt;
  
  
  Deployment servers; the list can include the local machine
&lt;/h1&gt;

&lt;p&gt;HOST_IP=(30.30.xx.7 30.30.xx.8)&lt;/p&gt;

&lt;h1&gt;
  
  
  Deployment package. The package name starts with OCK_platform_rpc.
&lt;/h1&gt;

&lt;p&gt;PACKAGES="OCK_platform_rpc_aarch64_EulerOS.tar.gz"&lt;/p&gt;

&lt;h1&gt;
  
  
  The path on each server to which the package is copied via scp.
&lt;/h1&gt;

&lt;h1&gt;
  
  
  If the path does not exist, the path is automatically created.
&lt;/h1&gt;

&lt;p&gt;PATH_TO_SCP="/home/ock/test"&lt;/p&gt;

&lt;h1&gt;
  
  
  Opengauss environment variable
&lt;/h1&gt;

&lt;p&gt;GAUSSHOME="/home/ock/mppdb_temp_install"&lt;/p&gt;

&lt;h1&gt;
  
  
  OCK_RPC environment variable. Do not change
&lt;/h1&gt;

&lt;p&gt;OCK_RPC_LIB_PATH="${GAUSSHOME}/lib"&lt;br&gt;
The parameters involved in the script are as follows:&lt;/p&gt;

&lt;p&gt;USER_NAME: the user name on the machine where the DN is located.&lt;br&gt;
HOST_IP: the IP addresses of the cluster machines, including the local machine.&lt;br&gt;
PACKAGES: the name of the obtained OCK RPC binary package (it must match the architecture of the machine's operating system).&lt;br&gt;
PATH_TO_SCP: the destination path for the copy.&lt;br&gt;
GAUSSHOME: consistent with the GAUSSHOME environment variable configured for openGauss.&lt;br&gt;
OCK_RPC_LIB_PATH: the path where the OCK RPC library resides. Shared storage loads it dynamically, so make sure it is configured correctly.&lt;br&gt;
Execute the deployment script ock_rpc_opengauss.sh.&lt;/p&gt;

&lt;p&gt;sh ock_rpc_opengauss.sh&lt;br&gt;
 Description: the script does the following:&lt;/p&gt;

&lt;p&gt;Copies, decompresses, and deploys the OCK RPC binary package across the cluster.&lt;br&gt;
Configures the relevant environment variable $OCK_RPC_LIB_PATH.&lt;br&gt;
Configures the UCX environment variables of the cluster nodes. (UCX is a communication component built on RDMA; OCK RPC is a communication component built on UCX and adapted to shared storage.)&lt;br&gt;
Configures other parameter environment variables.&lt;/p&gt;

&lt;p&gt;export UCX_ZCOPY_THRESH="32768"&lt;br&gt;
export UCX_USE_MT_MUTEX=n&lt;br&gt;
export UCX_NET_DEVICES=mlx5_1:1&lt;br&gt;
Parameter Description:&lt;/p&gt;

&lt;p&gt;UCX_ZCOPY_THRESH: an optimization parameter that reduces memory allocation during RDMA transmission. Unit: bytes. Default: 32768. (Optional; the default value is suggested.)&lt;/p&gt;

&lt;p&gt;UCX_USE_MT_MUTEX: an optimization parameter selecting the type of data-consistency lock during RDMA transmission; n means an atomic lock, y means a mutex. (Optional; the atomic lock is recommended for better performance.)&lt;/p&gt;

&lt;p&gt;UCX_NET_DEVICES: a UCX configuration parameter that specifies the network card port. You can install the mlnx_ofed driver and use the ibdev2netdev command to view the currently active RDMA ports. (Specifying an active port is recommended.)&lt;/p&gt;

&lt;p&gt;Note:  After configuring the environment variables, execute source ~/.bashrc&lt;/p&gt;

&lt;p&gt;Execute steps 1 to 5 to complete database creation with gs_initdb.&lt;/p&gt;

&lt;p&gt;Modify the postgresql.conf configuration file.&lt;/p&gt;

&lt;p&gt;Configure ss_interconnect_type=RDMA, add configuration items ss_rdma_work_config and ss_ock_log_path.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;ss_interconnect_type=RDMA&lt;br&gt;
ss_rdma_work_config ='6 10'     (OCK RDMA uses user-space polling and binds CPUs [6 10]; the two values are separated by a space)&lt;br&gt;
ss_ock_log_path="/home/ock_test/log" (log path for OCK RDMA message communication)&lt;br&gt;
Execute the gs_ctl start command to start the database process.&lt;/p&gt;

&lt;p&gt;gs_ctl start -D /data/ss_test/dn_primary&lt;/p&gt;

</description>
    </item>
    <item>
      <title>ODBC configuration file parameter description (openGauss)</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Tue, 28 Mar 2023 03:35:24 +0000</pubDate>
      <link>https://dev.to/490583523leo/odbc-configuration-file-parameter-description-opengauss-dmi</link>
      <guid>https://dev.to/490583523leo/odbc-configuration-file-parameter-description-opengauss-dmi</guid>
      <description>&lt;ol&gt;
&lt;li&gt;Configure the data source file:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Add the following content to the "/usr/local/etc/odbc.ini" file.&lt;/p&gt;

&lt;p&gt;[MPPODBC]&lt;br&gt;
Driver=GaussMPP&lt;br&gt;
Servername=10.145.130.26 (database Server IP)&lt;br&gt;
Database=postgres (database name)&lt;br&gt;
Username=omm (database user name)&lt;br&gt;
Password= (database user password)&lt;br&gt;
Port=8000 (database listening port)&lt;br&gt;
Sslmode=allow&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Description of odbc.ini file configuration parameters:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Parameter&lt;/p&gt;

&lt;p&gt;Description&lt;/p&gt;

&lt;p&gt;Example&lt;/p&gt;

&lt;p&gt;[DSN]&lt;/p&gt;

&lt;p&gt;The name of the data source.&lt;/p&gt;

&lt;p&gt;[MPPODBC]&lt;/p&gt;

&lt;p&gt;driver&lt;/p&gt;

&lt;p&gt;Driver name, corresponding to DriverName in odbcinst.ini.&lt;/p&gt;

&lt;p&gt;Driver=DRIVER_N&lt;/p&gt;

&lt;p&gt;Servername&lt;/p&gt;

&lt;p&gt;The IP address of the server.&lt;/p&gt;

&lt;p&gt;Servername=10.145.130.26&lt;/p&gt;

&lt;p&gt;Database&lt;/p&gt;

&lt;p&gt;The name of the database to connect to.&lt;/p&gt;

&lt;p&gt;Database=postgres&lt;/p&gt;

&lt;p&gt;Username&lt;/p&gt;

&lt;p&gt;Database user name.&lt;/p&gt;

&lt;p&gt;Username=omm&lt;/p&gt;

&lt;p&gt;Password&lt;/p&gt;

&lt;p&gt;Database user password.&lt;/p&gt;

&lt;p&gt;Password=&lt;/p&gt;

&lt;p&gt;Note:&lt;/p&gt;

&lt;p&gt;The ODBC driver itself has cleaned up the memory password to ensure that the user password will not be kept in the memory after connection.&lt;/p&gt;

&lt;p&gt;However, if this parameter is configured, because UnixODBC caches data source files, etc., the password may be kept in memory for a long time.&lt;/p&gt;

&lt;p&gt;It is recommended to pass the password to the corresponding API instead of writing it in the data source configuration file when the application connects. At the same time, after the connection is successful, the memory segment storing the password should be cleaned up in time.&lt;/p&gt;

&lt;p&gt;Port&lt;/p&gt;

&lt;p&gt;The port number of the server.&lt;/p&gt;

&lt;p&gt;Port=8000&lt;/p&gt;

&lt;p&gt;Sslmode&lt;/p&gt;

&lt;p&gt;Enable SSL mode&lt;/p&gt;

&lt;p&gt;Sslmode=allow&lt;/p&gt;

&lt;p&gt;Debug&lt;/p&gt;

&lt;p&gt;When set to 1, psqlodbc's mylog output is written, with logs generated in the /tmp/ directory. When set to 0, no log is generated.&lt;/p&gt;

&lt;p&gt;Debug=1&lt;/p&gt;

&lt;p&gt;UseServerSidePrepare&lt;/p&gt;

&lt;p&gt;Whether to enable the extended query protocol on the database side.&lt;/p&gt;

&lt;p&gt;Valid values are 0 and 1; the default is 1, which enables the extended query protocol.&lt;/p&gt;

&lt;p&gt;UseServerSidePrepare=1&lt;/p&gt;

&lt;p&gt;UseBatchProtocol&lt;/p&gt;

&lt;p&gt;Whether to enable the batch query protocol (enabling it can improve DML performance). Valid values are 0 and 1; the default is 1.&lt;/p&gt;

&lt;p&gt;When the value is 0, the batch query protocol is not used (mainly for communication compatibility with earlier database versions).&lt;/p&gt;

&lt;p&gt;When the value is 1 and the database parameter support_batch_bind exists and is set to on, the batch query protocol is enabled.&lt;/p&gt;

&lt;p&gt;UseBatchProtocol=1&lt;/p&gt;

&lt;p&gt;ForExtensionConnector&lt;/p&gt;

&lt;p&gt;This switch controls whether savepoints are sent; check it when troubleshooting savepoint-related issues.&lt;/p&gt;

&lt;p&gt;ForExtensionConnector=1&lt;/p&gt;

&lt;p&gt;UnnamedPrepStmtThreshold&lt;/p&gt;

&lt;p&gt;Every time SQLFreeHandle is called to release a Stmt, ODBC sends a "Deallocate plan_name" statement to the server, and such statements can be numerous in a workload. To reduce them, the driver sets stmt-&amp;gt;plan_name to empty so that the database treats the statement as an unnamed statement. This parameter controls the threshold for using unnamed statements.&lt;/p&gt;

&lt;p&gt;UnnamedPrepStmtThreshold=100&lt;/p&gt;

&lt;p&gt;ConnectionExtraInfo&lt;/p&gt;

&lt;p&gt;A switch controlling whether the driver deployment path and the process owner are reported in the GUC parameter connection_info.&lt;/p&gt;

&lt;p&gt;ConnectionExtraInfo=1&lt;/p&gt;

&lt;p&gt;Note:&lt;/p&gt;

&lt;p&gt;The default value is 0. When set to 1, the ODBC driver will report the current driver deployment path and process owner user to the database, and record it in the connection_info parameter; at the same time, it can be queried in PG_STAT_ACTIVITY.&lt;/p&gt;

&lt;p&gt;BoolsAsChar&lt;/p&gt;

&lt;p&gt;If set to Yes, boolean values are mapped to SQL_CHAR. If not set, they are mapped to SQL_BIT.&lt;/p&gt;

&lt;p&gt;BoolsAsChar=Yes&lt;/p&gt;

&lt;p&gt;RowVersioning&lt;/p&gt;

&lt;p&gt;When trying to update a row of data, setting it to Yes will allow the application to detect whether the data has been modified by another user.&lt;/p&gt;

&lt;p&gt;RowVersioning=Yes&lt;/p&gt;

&lt;p&gt;ShowSystemTables&lt;/p&gt;

&lt;p&gt;If set to Yes, the driver treats system tables as ordinary SQL tables.&lt;/p&gt;

&lt;p&gt;ShowSystemTables=Yes&lt;/p&gt;
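A hedged Python sketch of connecting through the [MPPODBC] data source defined above. Passing UID/PWD at connect time keeps the password out of odbc.ini, as the security note recommends; the pyodbc package and a reachable server are assumptions, so the connect call is left commented out:

```python
# Build an ODBC connection string for the [MPPODBC] DSN, supplying the
# credentials at connect time instead of storing them in odbc.ini.
def build_conn_str(dsn="MPPODBC", user="omm", password=""):
    return "DSN={};UID={};PWD={}".format(dsn, user, password)

conn_str = build_conn_str(password="your_password")  # placeholder password
# import pyodbc                      # assumes the pyodbc package is installed
# conn = pyodbc.connect(conn_str)    # requires a reachable openGauss server
# cur = conn.cursor()
# cur.execute("SELECT 1")
```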

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>Connect to openGauss database (JAVA)</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Tue, 28 Mar 2023 03:33:57 +0000</pubDate>
      <link>https://dev.to/490583523leo/connect-to-opengauss-database-java-1mn0</link>
      <guid>https://dev.to/490583523leo/connect-to-opengauss-database-java-1mn0</guid>
      <description>&lt;p&gt;Java&lt;br&gt;
JDBC (Java Database Connectivity) is a Java API for executing SQL statements. It provides a unified access interface for various relational databases, on which applications can operate data. openGauss supports JDBC 4.0 features; compiling the driver requires JDK 1.8, and the JDBC-ODBC bridge is not supported.&lt;/p&gt;

&lt;p&gt;Execute build.sh in the source code directory on the Linux server to produce the driver jar package postgresql.jar in that directory. Alternatively, obtain it from the release package, whose name is openGauss-xxxx-operating system version number-64bit-Jdbc.tar.gz.&lt;/p&gt;

&lt;p&gt;The driver package is compatible with PostgreSQL, and the class name and class structure are completely consistent with the PostgreSQL driver. Applications that used to run on PostgreSQL can be directly transplanted to the current system for use.&lt;/p&gt;

&lt;p&gt;load driver&lt;br&gt;
Before creating a database connection, the database driver class needs to be loaded; the package locations of different driver classes differ. The openGauss JDBC driver class is "org.opengauss.Driver", and the URL prefix is "jdbc:opengauss".&lt;/p&gt;

&lt;p&gt;Connect to the database&lt;br&gt;
Before connecting to the database, add the client host entry to pg_hba.conf (this file is under the datanode folder in the installation directory), in the following format:&lt;/p&gt;

&lt;p&gt;host    all    all    127.0.0.1/32    sha256&lt;br&gt;
JDBC provides three methods for creating database connections.&lt;/p&gt;

&lt;p&gt;There are two common ways to execute statements: Statement and PreparedStatement:&lt;/p&gt;

&lt;p&gt;Statement:&lt;/p&gt;

&lt;p&gt;execute: returns a boolean and does not directly expose the results of a query.&lt;br&gt;
executeQuery: returns a ResultSet and is usually used for queries; the SQL statement is passed to it directly.&lt;br&gt;
PreparedStatement:&lt;/p&gt;

&lt;p&gt;Provides the same two methods as Statement, but they are used differently: first pass the SQL text to conn.prepareStatement(), then call execute or executeQuery as needed; both are called with no arguments. When the SQL statement contains variables, PreparedStatement is recommended because it effectively prevents SQL injection: use ? as a placeholder for each variable, then assign values with the setObject method.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;Note:  In the examples below, replace the following fields with your own information:&lt;/p&gt;

&lt;p&gt;127.0.0.1 : The host address where the database is located.&lt;br&gt;
8080 : The port for the database connection.&lt;br&gt;
test : The name of the database to connect to. If the database does not exist, an error will be reported. Please confirm whether the database exists before using it.&lt;br&gt;
myuser : The username to connect to the database.&lt;br&gt;
myPassWord : The password of the user connecting to the database.&lt;br&gt;
DriverManager.getConnection(String url);&lt;/p&gt;

&lt;p&gt;This connection method requires splicing the username and password on the url.&lt;/p&gt;

&lt;p&gt;import java.sql.Connection;&lt;br&gt;
import java.sql.DriverManager;&lt;br&gt;
import java.sql.PreparedStatement;&lt;br&gt;
import java.sql.ResultSet;&lt;br&gt;
import java.sql.SQLException;&lt;br&gt;
import java.sql.Statement;&lt;br&gt;
import java.util.Properties;&lt;/p&gt;

&lt;p&gt;public class JdbcConn {&lt;br&gt;
  public static void main(String[] args) {&lt;br&gt;
        getConnect();&lt;br&gt;
  }&lt;br&gt;
  public static Connection getConnect() {&lt;br&gt;
        String driver = "org.opengauss.Driver";&lt;br&gt;
        String sourceURL = "jdbc:opengauss://127.0.0.1:8080/test?user=myuser&amp;amp;password=myPassWord";&lt;br&gt;
        Properties info = new Properties();&lt;br&gt;
        Connection conn = null;&lt;br&gt;
        try {&lt;br&gt;
            Class.forName(driver);&lt;br&gt;
        } catch (Exception var9) {&lt;br&gt;
            var9.printStackTrace();&lt;br&gt;
            return null;&lt;br&gt;
        }&lt;br&gt;
        try {&lt;br&gt;
            conn = DriverManager.getConnection(sourceURL);&lt;br&gt;
            System.out.println("Connection succeeded!");&lt;br&gt;
            return conn;&lt;br&gt;
        } catch (Exception var8) {&lt;br&gt;
            var8.printStackTrace();&lt;br&gt;
            return null;&lt;br&gt;
        }&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
DriverManager.getConnection(String url, Properties info);&lt;/p&gt;

&lt;p&gt;Parameters such as user name and password of this method are added through setProperty in the instance of the Properties object.&lt;/p&gt;

&lt;p&gt;import java.sql.Connection;&lt;br&gt;
import java.sql.DriverManager;&lt;br&gt;
import java.sql.PreparedStatement;&lt;br&gt;
import java.sql.ResultSet;&lt;br&gt;
import java.sql.SQLException;&lt;br&gt;
import java.sql.Statement;&lt;br&gt;
import java.util.Properties;&lt;/p&gt;

&lt;p&gt;public class JdbcConn {&lt;br&gt;
  public static void main(String[] args) {&lt;br&gt;
        getConnect();&lt;br&gt;
  }&lt;br&gt;
 public static Connection getConnect() {&lt;br&gt;
        String driver = "org.opengauss.Driver";&lt;br&gt;
        String sourceURL = "jdbc:opengauss://127.0.0.1:8080/test";&lt;br&gt;
        Properties info = new Properties();&lt;br&gt;
        info.setProperty("user","myuser");&lt;br&gt;
        info.setProperty("password","myPassWord");&lt;br&gt;
        Connection conn = null;&lt;br&gt;
        try {&lt;br&gt;
            Class.forName(driver);&lt;br&gt;
        } catch (Exception var9) {&lt;br&gt;
            var9.printStackTrace();&lt;br&gt;
            return null;&lt;br&gt;
        }&lt;br&gt;
        try {&lt;br&gt;
            conn = DriverManager.getConnection(sourceURL, info);&lt;br&gt;
            System.out.println("Connection succeeded!");&lt;br&gt;
            return conn;&lt;br&gt;
        } catch (Exception var8) {&lt;br&gt;
            var8.printStackTrace();&lt;br&gt;
            return null;&lt;br&gt;
        }&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
DriverManager.getConnection(String url, String user, String password);&lt;/p&gt;

&lt;p&gt;This method requires username and password to be entered as variables.&lt;/p&gt;

&lt;p&gt;import java.sql.Connection;&lt;br&gt;
import java.sql.DriverManager;&lt;br&gt;
import java.sql.PreparedStatement;&lt;br&gt;
import java.sql.ResultSet;&lt;br&gt;
import java.sql.SQLException;&lt;br&gt;
import java.sql.Statement;&lt;br&gt;
import java.util.Properties;&lt;/p&gt;

&lt;p&gt;public class JdbcConn {&lt;br&gt;
  public static void main(String[] args) {&lt;br&gt;
        getConnect();&lt;br&gt;
  }&lt;br&gt;
 public static Connection getConnect() {&lt;br&gt;
        String driver = "org.opengauss.Driver";&lt;br&gt;
        String sourceURL = "jdbc:opengauss://127.0.0.1:8080/test";&lt;br&gt;
        String username="myuser";&lt;br&gt;
        String passwd="myPassWord";&lt;br&gt;
        Connection conn = null;&lt;br&gt;
        try {&lt;br&gt;
            Class.forName(driver);&lt;br&gt;
        } catch (Exception var9) {&lt;br&gt;
            var9.printStackTrace();&lt;br&gt;
            return null;&lt;br&gt;
        }&lt;br&gt;
        try {&lt;br&gt;
            conn = DriverManager.getConnection(sourceURL, username, passwd);&lt;br&gt;
            System.out.println("Connection succeeded!");&lt;br&gt;
            return conn;&lt;br&gt;
        } catch (Exception var8) {&lt;br&gt;
            var8.printStackTrace();&lt;br&gt;
            return null;&lt;br&gt;
        }&lt;br&gt;
    }&lt;br&gt;
}&lt;/p&gt;

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>Connect to openGauss database (Python)</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Tue, 28 Mar 2023 03:31:59 +0000</pubDate>
      <link>https://dev.to/490583523leo/connect-to-opengauss-database-python-2kg2</link>
      <guid>https://dev.to/490583523leo/connect-to-opengauss-database-python-2kg2</guid>
      <description>&lt;p&gt;Python&lt;br&gt;
Psycopg is a Python API for executing SQL statements. It provides a unified access interface for PostgreSQL and openGauss databases, on which applications can perform data operations. Psycopg2 is a wrapper around libpq, with parts implemented in C, making it efficient and secure. It offers client-side and server-side cursors, asynchronous communication and notifications, and supports the COPY TO/COPY FROM functions. Many built-in Python types are supported out of the box and adapted to PostgreSQL data types, and a flexible object adaptation system allows extension and customization. Psycopg2 is Unicode- and Python 3-compatible.&lt;/p&gt;

&lt;p&gt;The openGauss database provides support for Psycopg2 features, and supports psycopg2 connections through SSL mode.&lt;/p&gt;

&lt;p&gt;Table 1  Psycopg supported platforms&lt;/p&gt;

&lt;p&gt;Operating system&lt;/p&gt;

&lt;p&gt;Platform&lt;/p&gt;

&lt;p&gt;EulerOS 2.5&lt;/p&gt;

&lt;p&gt;x86_64&lt;/p&gt;

&lt;p&gt;EulerOS 2.8&lt;/p&gt;

&lt;p&gt;ARM64&lt;/p&gt;

&lt;p&gt;load driver&lt;br&gt;
Before using the driver, you need to do the following:&lt;/p&gt;

&lt;p&gt;Download the compiled psycopg2 compressed package from the openGauss official website.&lt;/p&gt;

&lt;p&gt;Note:  The psycopg2 build downloaded from the openGauss official website is adapted to Python 3.6. If you use another Python version, you need to compile psycopg2 yourself. The compilation method is basically the same as for the PostgreSQL database; you only need to modify the version-check part of setup.py, which can be done with the following command:&lt;/p&gt;

&lt;p&gt;sed -i "s/(pgmajor, pgminor, pgpatch)/(9, 2, 4)/g" setup.py&lt;br&gt;
Unzip the driver package corresponding to the version, and copy psycopg2 to the third-party package folder (namely the site-packages directory) of the python installation directory.&lt;/p&gt;

&lt;p&gt;Make sure the psycopg2 directory has permissions of at least 755, so that calls do not fail with file-access errors caused by insufficient permissions.&lt;/p&gt;

&lt;p&gt;For non-database users, you need to configure the decompressed lib directory in the LD_LIBRARY_PATH environment variable.&lt;/p&gt;

&lt;p&gt;Before creating a database connection, the following database drivers need to be loaded:&lt;/p&gt;

&lt;p&gt;import psycopg2&lt;br&gt;
Connect to the database&lt;br&gt;
The following Python code shows how to connect to an existing database and finally returns a connection object. (The target database must already exist; connecting does not create it.)&lt;/p&gt;

&lt;p&gt;#!/usr/bin/python&lt;br&gt;
import psycopg2&lt;br&gt;
conn = psycopg2.connect(database="testdb", user="openGauss", password="xxxxxxxx", host="127.0.0.1", port="26000")&lt;br&gt;
In the above code, replace the connection parameters according to your environment; testdb is used here as the database name.&lt;/p&gt;
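Once the connection object exists, statements are executed through a cursor. A minimal DB-API sketch (the table name demo is hypothetical; psycopg2 uses %s placeholders, and the ph argument exists only so the sketch also runs against other DB-API drivers):

```python
# Create a table, insert a row with bound parameters, and read it back.
# Placeholders bind values safely instead of interpolating strings.
def run_demo(conn, ph="%s"):
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS demo (id int, name varchar(32))")
    cur.execute("INSERT INTO demo VALUES ({0}, {0})".format(ph), (1, "leo"))
    conn.commit()
    cur.execute("SELECT name FROM demo WHERE id = {0}".format(ph), (1,))
    row = cur.fetchone()
    cur.close()
    return row  # ("leo",)
```

With the psycopg2 connection above, this would be called as run_demo(conn), using the default %s placeholder.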

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>openGauss container installation</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Tue, 28 Mar 2023 03:30:12 +0000</pubDate>
      <link>https://dev.to/490583523leo/opengauss-container-installation-4bp3</link>
      <guid>https://dev.to/490583523leo/opengauss-container-installation-4bp3</guid>
      <description>&lt;p&gt;container installation&lt;br&gt;
This chapter mainly introduces the installation of openGauss through Docker to facilitate the installation, configuration and environment settings of DevOps users.&lt;/p&gt;

&lt;p&gt;Supported Architectures and OS Versions&lt;br&gt;
x86-64 CentOS 7.6&lt;/p&gt;

&lt;p&gt;ARM64 openEuler 20.03 LTS&lt;/p&gt;

&lt;p&gt;configuration preparation&lt;br&gt;
Build the docker image using the buildDockerImage.sh script, which is a handy shell script that provides SHA256 checks.&lt;/p&gt;

&lt;p&gt;Create openGauss docker image&lt;br&gt;
Note:&lt;/p&gt;

&lt;p&gt;Before installation, you need to provide the openGauss binary installation package. After decompression, put the package ending in .bz2 (openGauss-XXX-CentOS-64bit.tar.bz2) into the dockerfiles/ folder. The binary package can be downloaded from &lt;a href="https://www.opengauss.org/zh/download/"&gt;https://www.opengauss.org/zh/download/&lt;/a&gt;. Also make sure you have a correct yum source.&lt;/p&gt;

&lt;p&gt;When running the buildDockerImage.sh script, if the -i parameter is not specified, the SHA256 check is performed by default, and you need to write the checksum to the sha256_file_amd64 file manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  Update the sha256 checksum file contents
&lt;/h2&gt;

&lt;p&gt;cd /soft/openGauss-server/docker/dockerfiles/3.0.0&lt;br&gt;
sha256sum openGauss-3.0.0-CentOS-64bit.tar.bz2 &amp;gt; sha256_file_amd64 &lt;br&gt;
Before installation, you also need to obtain the openEuler_aarch64.repo file from the Huawei open-source mirror site and put it in the openGauss-server-master/docker/dockerfiles/3.0.0 folder. To obtain openEuler_aarch64.repo:&lt;br&gt;
wget -O openEuler_aarch64.repo &lt;a href="https://mirrors.huaweicloud.com/repository/conf/openeuler_aarch64.repo"&gt;https://mirrors.huaweicloud.com/repository/conf/openeuler_aarch64.repo&lt;/a&gt;&lt;br&gt;
Run the buildDockerImage.sh script in the dockerfiles folder.&lt;/p&gt;

&lt;p&gt;[root@ecs-complie dockerfiles]# ./buildDockerImage.sh&lt;br&gt;
Usage: buildDockerImage.sh -v [version] [-i] [Docker build option]&lt;br&gt;
Builds a Docker Image for openGauss&lt;br&gt;
Parameters:&lt;br&gt;
   -v: version to build&lt;br&gt;
       Choose one of: 3.0.0&lt;br&gt;
   -i: ignores the SHA256 checksums&lt;/p&gt;

&lt;p&gt;LICENSE UPL 1.0&lt;br&gt;
environment variable&lt;br&gt;
In order to use the openGauss image more flexibly, additional parameters can be set; more controllable parameters will be added in future versions. The current version supports the following variables.&lt;/p&gt;

&lt;p&gt;GS_PASSWORD&lt;/p&gt;

&lt;p&gt;When using the openGauss image, this parameter must be set, and its value cannot be empty or undefined. It sets the passwords of the openGauss database super user omm and of the test user gaussdb. The super user omm is created by default when openGauss is installed, and this user name cannot currently be modified; the test user gaussdb is created in entrypoint.sh.&lt;/p&gt;

&lt;p&gt;The openGauss image is configured with a local trust mechanism, so no password is required to connect to the database in the container, but if you want to connect from outside the container (other hosts or other containers), you must enter a password.&lt;/p&gt;

&lt;p&gt;The password of openGauss has complexity requirements&lt;/p&gt;

&lt;p&gt;The password must be at least 8 characters long and must contain uppercase letters, lowercase letters, digits, and special symbols (only the special symbols "#?!@$%^&amp;amp;*-" are allowed, and "!$&amp;amp;" must be escaped with "\").&lt;/p&gt;
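The stated rule can be sketched as a small Python check (illustrative only; the authoritative validation is done by openGauss itself, the special-symbol set is taken from the paragraph above, and the length is read as "at least 8", consistent with the example password Enmo@123 used later in this article):

```python
# Check the documented GS_PASSWORD complexity rule: length at least 8,
# plus uppercase, lowercase, a digit, and an allowed special symbol.
# chr(38) is the ampersand, written numerically to keep this feed XML-clean.
SPECIALS = set("#?!@$%^*-" + chr(38))

def password_ok(pw):
    return (len(pw) >= 8
            and any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in SPECIALS for c in pw))

# password_ok("Enmo@123") is True; password_ok("abcdefgh1") is False
```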

&lt;p&gt;GS_NODENAME&lt;/p&gt;

&lt;p&gt;Specify the database node name, the default is gaussdb.&lt;/p&gt;

&lt;p&gt;GS_USERNAME&lt;/p&gt;

&lt;p&gt;Specify the database connection user name, the default is gaussdb.&lt;/p&gt;

&lt;p&gt;GS_PORT&lt;/p&gt;

&lt;p&gt;Specify the database port, the default is 5432.&lt;/p&gt;

&lt;p&gt;start instance&lt;br&gt;
$ docker run --name opengauss --privileged=true -d -e GS_PASSWORD=Enmo@123 opengauss:2.0.0&lt;br&gt;
Connect to the database from the operating system level&lt;br&gt;
$ docker run --name opengauss --privileged=true -d -e GS_PASSWORD=Enmo@123 -p8888:5432 opengauss:2.0.0&lt;br&gt;
$ gsql -d postgres -U gaussdb -W'Enmo@123' -h your-host-ip -p8888&lt;br&gt;
data persistence&lt;br&gt;
$ docker run --name opengauss --privileged=true -d -e GS_PASSWORD=Enmo@123 -v /opengauss:/var/lib/opengauss opengauss:2.0.0&lt;br&gt;
Notes&lt;/p&gt;

&lt;p&gt;To build container images with other database versions, refer to the configuration files under openGauss-server/docker/dockerfiles/3.0.0; you only need to change them to the corresponding version number.&lt;/p&gt;

&lt;p&gt;For the openEuler-arm container image, if the openeuler-20.03-lts:latest image cannot be downloaded, you can download the compressed container image package openEuler-docker.aarch64.tar.xz from the openEuler official website at &lt;a href="http://121.36.97.194/openEuler-20.03-LTS/docker_img/aarch64/download"&gt;http://121.36.97.194/openEuler-20.03-LTS/docker_img/aarch64/download&lt;/a&gt;, and then import it into the local image list with docker load -i openEuler-docker.aarch64.tar.xz.&lt;/p&gt;

&lt;p&gt;When running docker build, if you encounter a yum source download timeout, check your proxy, or add --network host (to use the host machine's network) to the docker build command in the buildDockerImage.sh script.&lt;/p&gt;

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>openGauss RPM installation</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Tue, 28 Mar 2023 03:27:47 +0000</pubDate>
      <link>https://dev.to/490583523leo/opengauss-rpm-installation-224k</link>
      <guid>https://dev.to/490583523leo/opengauss-rpm-installation-224k</guid>
      <description>&lt;p&gt;RPM installation&lt;br&gt;
This chapter mainly introduces the one-click installation of the openGauss database through the yum command on the openEuler 22.03 LTS operating system.&lt;/p&gt;

&lt;p&gt;Supported Architectures and OS Versions&lt;br&gt;
x86-64 openEuler 22.03 LTS&lt;br&gt;
ARM64 openEuler 22.03 LTS&lt;br&gt;
Only supported on openEuler 22.03 LTS&lt;/p&gt;

&lt;p&gt;usage restrictions&lt;br&gt;
Currently, yum installation is only supported on the openEuler 22.03 LTS operating system, and both arm64 and x86_64 architectures are supported.&lt;br&gt;
The database integrated into the openEuler system is built based on the capabilities of the openGauss lightweight version.&lt;br&gt;
Only the stand-alone database instance is installed in the RPM mode. During the upgrade, only the binary version can be replaced, and grayscale upgrade is not supported.&lt;br&gt;
The default installation instance listens on the address and port 127.0.0.1:7654. If you need to connect remotely, manually modify listen_addresses in the postgresql.conf file.&lt;br&gt;
The installation database creates the user openGauss by default, and the user is not deleted after the database is uninstalled.&lt;br&gt;
installation method&lt;br&gt;
After the operating system is installed, use yum install to install it.&lt;/p&gt;

&lt;p&gt;yum install opengauss -y&lt;/p&gt;

&lt;p&gt;Alternatively, if openGauss is selected in the software selection during operating system installation, the openGauss database is installed by default along with the operating system.&lt;/p&gt;

&lt;p&gt;Instructions for use&lt;br&gt;
Switch to the opengauss user: su - opengauss&lt;/p&gt;

&lt;p&gt;Check the process with ps ux. You can see that the binaries are installed under the /usr/local/opengauss directory, and the default data directory used at startup is /var/lib/opengauss/data.&lt;/p&gt;
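&lt;p&gt;As a minimal sketch of reading the data directory out of such a process listing, the snippet below parses the -D argument from a sample line. The sample line only imitates what ps ux might show for the RPM-installed instance; the exact output format is an assumption.&lt;/p&gt;

```shell
# Hypothetical ps ux line for the RPM-installed openGauss instance.
sample="opengauss 1234 0.5 2.0 /usr/local/opengauss/bin/gaussdb -D /var/lib/opengauss/data"

# Keep only the token that follows -D (the data directory).
datadir=$(echo "$sample" | sed 's/.*-D //; s/ .*//')
echo "$datadir"
```

&lt;p&gt;On a real system you would pipe ps ux itself into the same filter instead of using a sample string.&lt;/p&gt;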

&lt;p&gt;Connect to the database with gsql -d postgres -p 7654 -r; the default port of the database is 7654. After connecting, the database can be used normally.&lt;/p&gt;
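&lt;p&gt;For the remote-connection change mentioned in the usage restrictions, the following sketch edits the listen address in a sample postgresql.conf. The parameter name listen_addresses follows PostgreSQL conventions (openGauss is PostgreSQL-derived); the sample file here is a stand-in for the real one in the data directory, e.g. /var/lib/opengauss/data/postgresql.conf.&lt;/p&gt;

```shell
# Create a sample postgresql.conf (stand-in for the real file).
conf=$(mktemp)
printf "listen_addresses = '127.0.0.1'\nport = 7654\n" > "$conf"

# Replace the loopback-only setting with '*' to listen on all interfaces.
sed -i "s/listen_addresses = '127.0.0.1'/listen_addresses = '*'/" "$conf"

result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```

&lt;p&gt;After changing the real file you would restart the instance for the setting to take effect, and open port 7654 in the firewall if needed.&lt;/p&gt;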

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>Uninstallation of openGauss</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Tue, 28 Mar 2023 03:26:27 +0000</pubDate>
      <link>https://dev.to/490583523leo/uninstallation-of-opengauss-4lai</link>
      <guid>https://dev.to/490583523leo/uninstallation-of-opengauss-4lai</guid>
      <description>&lt;p&gt;Uninstall openGauss&lt;br&gt;
The process of uninstalling openGauss includes uninstalling openGauss and cleaning the environment of the openGauss server.&lt;/p&gt;

&lt;p&gt;Perform the uninstallation&lt;br&gt;
openGauss provides an uninstall script to help users completely uninstall openGauss.&lt;/p&gt;

&lt;p&gt;Steps&lt;/p&gt;

&lt;p&gt;Log in to the active database node as the operating system user omm.&lt;/p&gt;

&lt;p&gt;Use gs_uninstall to uninstall openGauss.&lt;/p&gt;

&lt;p&gt;gs_uninstall --delete-data&lt;br&gt;
Or perform a local uninstallation on each node in openGauss.&lt;/p&gt;

&lt;p&gt;gs_uninstall --delete-data -L&lt;br&gt;
Example&lt;/p&gt;

&lt;p&gt;Use the gs_uninstall script to uninstall openGauss.&lt;/p&gt;

&lt;p&gt;gs_uninstall --delete-data&lt;br&gt;
Checking uninstallation.&lt;br&gt;
Successfully checked uninstallation.&lt;br&gt;
Stopping the cluster.&lt;br&gt;
Successfully stopped the cluster.&lt;br&gt;
Successfully deleted instances.&lt;br&gt;
Uninstalling application.&lt;br&gt;
Successfully uninstalled application.&lt;br&gt;
Uninstallation succeeded.&lt;br&gt;
In the stand-alone uninstallation scenario, use the gs_uninstall script to uninstall.&lt;/p&gt;

&lt;p&gt;gs_uninstall --delete-data&lt;br&gt;
Checking uninstallation.&lt;br&gt;
Successfully checked uninstallation.&lt;br&gt;
Stopping the cluster.&lt;br&gt;
Successfully stopped the cluster.&lt;br&gt;
Successfully deleted instances.&lt;br&gt;
Uninstalling application.&lt;br&gt;
Successfully uninstalled application.&lt;br&gt;
Uninstallation succeeded.&lt;br&gt;
Troubleshooting&lt;/p&gt;

&lt;p&gt;If the uninstallation fails, troubleshoot the error according to the log information in "$GAUSSLOG/om/gs_uninstall-YYYY-MM-DD_HHMMSS.log".&lt;/p&gt;
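&lt;p&gt;A quick way to find the log to inspect is to pick the most recent gs_uninstall log by modification time. In this sketch $GAUSSLOG is simulated with a temporary directory and fake log files; on a real system it is set by the openGauss environment scripts.&lt;/p&gt;

```shell
# Simulate $GAUSSLOG with fake uninstall logs (timestamps set explicitly).
GAUSSLOG=$(mktemp -d)
mkdir -p "$GAUSSLOG/om"
touch -t 202303270900 "$GAUSSLOG/om/gs_uninstall-2023-03-27_090000.log"
touch -t 202303281030 "$GAUSSLOG/om/gs_uninstall-2023-03-28_103000.log"

# The newest log (first in mtime-sorted order) is the one to inspect.
latest=$(ls -t "$GAUSSLOG"/om/gs_uninstall-*.log | head -n 1)
echo "$latest"
rm -rf "$GAUSSLOG"
```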

&lt;p&gt;One-click environment cleanup&lt;br&gt;
After openGauss is uninstalled, if you do not need to redeploy openGauss in the environment, you can run the gs_postuninstall script to clean up the environment information on the openGauss server. Cleaning up the openGauss environment removes the settings made by the environment preparation script gs_preinstall.&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;/p&gt;

&lt;p&gt;openGauss has been uninstalled successfully.&lt;br&gt;
Mutual trust between the root users of all nodes is available.&lt;br&gt;
Only the root user can execute the gs_postuninstall command.&lt;br&gt;
Steps&lt;/p&gt;

&lt;p&gt;Log in to the openGauss server as the root user.&lt;/p&gt;

&lt;p&gt;Check whether the root user mutual trust is established. If the root user has not established mutual trust, you need to manually establish the root user mutual trust. For details, see Manually Establishing Mutual Trust.&lt;/p&gt;

&lt;p&gt;Check whether mutual trust is successfully established by using ssh with the host name to connect to each node from the others. Enter exit to return.&lt;/p&gt;

&lt;p&gt;plat1:~ # ssh plat2 &lt;br&gt;
Last login: Tue Jan  5 10:28:18 2016 from plat1 &lt;br&gt;
Huawei's internal systems must only be used for conducting Huawei's business or for purposes authorized by Huawei management.Use is subject to audit at any time by Huawei management. &lt;br&gt;
plat2:~ # exit &lt;br&gt;
logout &lt;br&gt;
Connection to plat2 closed. &lt;br&gt;
plat1:~ # &lt;br&gt;
Enter the script path.&lt;/p&gt;

&lt;p&gt;cd /opt/software/openGauss/script&lt;br&gt;
Use gs_postuninstall to clean up the environment. If the database was installed with environment variable separation, you must first source the environment variable separation file ENVFILE.&lt;/p&gt;

&lt;p&gt;./gs_postuninstall -U omm -X /opt/software/openGauss/cluster_config.xml --delete-user --delete-group&lt;br&gt;
Or perform local post-uninstallation cleanup on each node in openGauss.&lt;/p&gt;

&lt;p&gt;./gs_postuninstall -U omm -X /opt/software/openGauss/cluster_config.xml --delete-user --delete-group -L&lt;br&gt;
omm is the operating system user that runs openGauss, and /opt/software/openGauss/cluster_config.xml is the path of the openGauss configuration file.&lt;/p&gt;

&lt;p&gt;If the database was installed with environment variable separation, you need to unset the environment variable that was previously sourced from the separation file.&lt;/p&gt;

&lt;p&gt;unset MPPDB_ENV_SEPARATE_PATH&lt;br&gt;
Delete the mutual trust between the root users of all nodes in the openGauss database. For the operation, refer to Manually Establishing Mutual Trust.&lt;/p&gt;

&lt;p&gt;Log off the root user.&lt;/p&gt;

&lt;p&gt;Example&lt;/p&gt;

&lt;p&gt;Clean up the host's environment.&lt;/p&gt;

&lt;p&gt;gs_postuninstall -U omm -X /opt/software/openGauss/cluster_config.xml --delete-user&lt;br&gt;
Parsing the configuration file.&lt;br&gt;
Successfully parsed the configuration file.&lt;br&gt;
Check log file path.&lt;br&gt;
Successfully checked log file path.&lt;br&gt;
Checking unpreinstallation.&lt;br&gt;
Successfully checked unpreinstallation.&lt;br&gt;
Deleting Cgroup.&lt;br&gt;
Successfully deleted Cgroup.&lt;br&gt;
Deleting the instance's directory.&lt;br&gt;
Successfully deleted the instance's directory.&lt;br&gt;
Deleting the installation directory.&lt;br&gt;
Successfully deleted the installation directory.&lt;br&gt;
Deleting the temporary directory.&lt;br&gt;
Successfully deleted the temporary directory.&lt;br&gt;
Deleting remote OS user.&lt;br&gt;
Successfully deleted remote OS user.&lt;br&gt;
Deleting software packages and environmental variables of other nodes.&lt;br&gt;
Successfully deleted software packages and environmental variables of other nodes.&lt;br&gt;
Deleting logs of other nodes.&lt;br&gt;
Successfully deleted logs of other nodes.&lt;br&gt;
Deleting software packages and environmental variables of the local node.&lt;br&gt;
Successfully deleted software packages and environmental variables of the local nodes.&lt;br&gt;
Deleting local OS user.&lt;br&gt;
Successfully deleted local OS user.&lt;br&gt;
Deleting local node's logs.&lt;br&gt;
Successfully deleted local node's logs.&lt;br&gt;
Successfully cleaned environment.&lt;br&gt;
Troubleshooting&lt;/p&gt;

&lt;p&gt;If the one-click environment cleanup fails, troubleshoot the error according to the log information in "$GAUSSLOG/om/gs_postuninstall-YYYY-MM-DD_HHMMSS.log".&lt;/p&gt;

</description>
      <category>opengauss</category>
    </item>
    <item>
      <title>The upgrade impact and upgrade constraints of the openGauss database</title>
      <dc:creator>leo</dc:creator>
      <pubDate>Tue, 28 Mar 2023 03:22:38 +0000</pubDate>
      <link>https://dev.to/490583523leo/the-upgrade-impact-and-upgrade-constraints-of-the-opengauss-database-1jgb</link>
      <guid>https://dev.to/490583523leo/the-upgrade-impact-and-upgrade-constraints-of-the-opengauss-database-1jgb</guid>
      <description>&lt;p&gt;Escalation Impact and Escalation Constraints&lt;br&gt;
Note the following items during the upgrade process.&lt;/p&gt;

&lt;p&gt;The upgrade operation cannot be performed at the same time as capacity expansion or reduction.&lt;/p&gt;

&lt;p&gt;Virtual IPs are not supported.&lt;/p&gt;

&lt;p&gt;During the upgrade process, the values of the four GUC parameters wal_level, max_connections, max_prepared_transactions, and max_locks_per_transaction must not be modified. If they are modified, the instance will fail to start after a rollback.&lt;/p&gt;

&lt;p&gt;It is recommended to upgrade the database system when it is idle and to avoid busy periods as far as possible (judge according to experience, for example by avoiding holidays).&lt;/p&gt;

&lt;p&gt;Ensure that the database is in the normal state before upgrading. This can be checked with gs_om -t status; if cluster_state in the query result is Normal, the database is normal.&lt;/p&gt;
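&lt;p&gt;The pre-upgrade state check can be scripted by filtering cluster_state out of the status output. The sample text below is an assumption about the format of gs_om -t status output; on a real system you would pipe the command itself into the same filter.&lt;/p&gt;

```shell
# Hypothetical excerpt of `gs_om -t status` output.
status_output="cluster_state   : Normal
redistributing  : No"

# Extract the value after the colon on the cluster_state line.
state=$(echo "$status_output" | grep cluster_state | sed 's/.*: *//')

if [ "$state" = "Normal" ]; then
  echo "cluster_state is Normal: OK to upgrade"
else
  echo "cluster_state is $state: do not upgrade"
fi
```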

&lt;p&gt;Before the upgrade, ensure that mutual trust between the nodes is normal. You can verify this by connecting from any node to another with the ssh hostname command. If no password is required for the connection, mutual trust is normal (when the database status is normal, mutual trust is usually normal as well).&lt;/p&gt;

&lt;p&gt;Before and after the upgrade, the deployment method (configuration file) of the database cannot be changed. The deployment method will be verified before the upgrade, and an error will be reported if it is changed.&lt;/p&gt;

&lt;p&gt;Before upgrading, ensure that the operating system is in a healthy state. The gs_checkos tool can be used to complete the operating system status check.&lt;/p&gt;

&lt;p&gt;An in-place upgrade requires stopping services, while a grayscale upgrade supports operation with full services running.&lt;/p&gt;

&lt;p&gt;The database is running normally and the data of the primary DN is fully synchronized to the standby DN.&lt;/p&gt;

&lt;p&gt;The Kerberos switch must not be turned on during the upgrade.&lt;/p&gt;

&lt;p&gt;Please do not modify the version.cfg file extracted from the installation package.&lt;/p&gt;

&lt;p&gt;If an exception occurs during the upgrade process and the upgrade fails, the user needs to manually roll back, and the next upgrade can only be performed after the rollback is successful.&lt;/p&gt;

&lt;p&gt;If the upgrade is rolled back successfully and then performed again successfully, the GUC parameters set during the uncommitted stage become invalid.&lt;/p&gt;

&lt;p&gt;Please do not manually set GUC parameters during the upgrade process.&lt;/p&gt;

&lt;p&gt;During a grayscale upgrade, the service interruption does not exceed 10 seconds.&lt;/p&gt;

&lt;p&gt;During the upgrade process, om operations must be performed only when the kernel version is consistent with the om version, meaning that both the kernel code and the om code come from the same software package. If the pre-script of the upgrade package is executed but the upgrade is not performed, or the pre-script of the baseline package is not executed after the upgrade is rolled back, the kernel code and the om code will be inconsistent.&lt;/p&gt;

&lt;p&gt;If new fields are added to a system table during the upgrade, these new fields cannot be viewed through the \d command after the upgrade. At this time, the newly added fields can be found through a select query on the table.&lt;/p&gt;

&lt;p&gt;The upgrade requires the GUC parameter enable_stream_replication=on; the upgrade is not allowed when the parameter is off.&lt;/p&gt;

&lt;p&gt;During a grayscale upgrade, the business concurrency should be less than 200 concurrent reads plus 200 concurrent writes.&lt;/p&gt;

&lt;p&gt;If the MOT table is used in versions earlier than openGauss 2.0.0, upgrading to openGauss 2.0.0 is not supported.&lt;/p&gt;

&lt;p&gt;During the upgrade process, do not install other openGauss database clusters on the current machine.&lt;/p&gt;

&lt;p&gt;During the upgrade, the template0 database will be connected, so executing create database during the upgrade will report an error.&lt;/p&gt;

&lt;p&gt;The openGauss shared storage mode does not support version upgrades.&lt;/p&gt;

&lt;p&gt;PL/Java Upgrade Constraints&lt;/p&gt;

&lt;p&gt;When upgrading from version 3.0.0 or earlier to 3.1.0 or later, if the business uses PL/Java functions and no Java environment exists on the machine where the database instance is located, the pre-upgrade check will fail. Therefore, confirm in advance whether PL/Java functions are used and what the current Java version is. The check method is:&lt;/p&gt;

&lt;p&gt;Execute the "select count(1) from pg_proc where prolang = 15;" command in the database using the initialization user.&lt;/p&gt;

&lt;p&gt;If the result is &amp;gt; 0, the database uses PL/Java. Refer to step 2 to further check whether a Java environment exists.&lt;br&gt;
If the result = 0, the database does not use PL/Java. End this verification and execute the other verification processes.&lt;br&gt;
Execute the "java -version" command as user root on the operating system.&lt;/p&gt;
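&lt;p&gt;The "JDK 1.8 or higher" decision can be sketched as follows. The version_line string imitates the first line that java -version prints to stderr; the quoting format being parsed is an assumption about that output.&lt;/p&gt;

```shell
# Hypothetical first line of `java -version` output.
version_line='java version "1.8.0_332"'

# Pull out the quoted version string, then split major/minor.
version=$(echo "$version_line" | sed 's/.*"\(.*\)".*/\1/')
major=$(echo "$version" | cut -d. -f1)
minor=$(echo "$version" | cut -d. -f2)

# Old-style versions report 1.x (minor is what matters);
# new-style versions (9 and later) report the major number first.
if [ "$major" = "1" ]; then
  if [ "$minor" -ge 8 ]; then ok=yes; else ok=no; fi
else
  if [ "$major" -ge 9 ]; then ok=yes; else ok=no; fi
fi
echo "JDK 1.8 or higher available: $ok"
```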

&lt;p&gt;java -version&lt;br&gt;
If Java exists and the version is JDK 1.8 or higher, end this verification and perform the other verification processes.&lt;br&gt;
If Java does not exist or the version is lower than JDK 1.8, refer to step 3 to download the JDK and configure the Java environment variables.&lt;br&gt;
Download the JDK and configure the Java environment variables.&lt;/p&gt;

&lt;p&gt;You can download it from the official website or use this link: &lt;a href="https://www.hikunpeng.com/zh/developer/devkit/compiler/jdk"&gt;https://www.hikunpeng.com/zh/developer/devkit/compiler/jdk&lt;/a&gt;, and configure environment variables in the following way:&lt;/p&gt;

&lt;p&gt;export JAVA_HOME=/xxx/jdk1.xxx&lt;br&gt;
export PATH=$JAVA_HOME/bin:$PATH&lt;br&gt;
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar&lt;br&gt;
Note:&lt;/p&gt;

&lt;p&gt;The JDK directory and version need to be replaced according to the actual path and version number.&lt;/p&gt;

&lt;p&gt;The upgrade check only verifies the Java environment variables of the node that executes the upgrade command. If other nodes also need to use PL/Java, download the JDK and configure the Java environment variables on them as well; otherwise PL/Java will not be available.&lt;/p&gt;

</description>
      <category>opengauss</category>
    </item>
  </channel>
</rss>
