<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Apache ShardingSphere</title>
    <description>The latest articles on DEV Community by Apache ShardingSphere (@apache_shardingsphere).</description>
    <link>https://dev.to/apache_shardingsphere</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F753439%2F64a81c72-f437-4a45-a471-c80b0264e937.png</url>
      <title>DEV Community: Apache ShardingSphere</title>
      <link>https://dev.to/apache_shardingsphere</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/apache_shardingsphere"/>
    <language>en</language>
    <item>
      <title>Fuzzy query for CipherColumn | ShardingSphere 5.3.0 Deep Dive</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Thu, 29 Dec 2022 02:30:01 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/fuzzy-query-for-ciphercolumn-shardingsphere-530-deep-dive-55ja</link>
      <guid>https://dev.to/apache_shardingsphere/fuzzy-query-for-ciphercolumn-shardingsphere-530-deep-dive-55ja</guid>
      <description>&lt;h2&gt;
  
  
  1. Background
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/" rel="noopener noreferrer"&gt;Apache ShardingSphere&lt;/a&gt; supports data encryption. By parsing users’ SQL input and rewriting the SQL according to the users’ encryption rules, the original data is encrypted and stored with ciphertext data in the underlying database at the same time.&lt;/p&gt;

&lt;p&gt;When a user queries the data, it only fetches the ciphertext data from the database, decrypts it, and finally returns the decrypted original data to the user. However, because the encryption algorithm encrypts the whole string, fuzzy queries cannot be achieved.&lt;/p&gt;

&lt;p&gt;Nevertheless, many businesses still need fuzzy queries after the data is encrypted. &lt;a href="https://medium.com/faun/shardingsphere-5-3-0-is-released-new-features-and-improvements-bf4d1c43b09b?source=your_stories_page-------------------------------------" rel="noopener noreferrer"&gt;In version 5.3.0&lt;/a&gt;, Apache ShardingSphere provides users with a default fuzzy query algorithm, supporting the fuzzy query for encrypted fields. The algorithm also supports hot plugging, which can be customized by users, and the fuzzy query can be achieved through configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. How to achieve fuzzy query in encrypted scenarios?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2.1 Load data to the in-memory database (IMDB)
&lt;/h3&gt;

&lt;p&gt;Load all the data into the IMDB and decrypt it there; querying then works just like querying the original data, so fuzzy queries are possible. If the amount of data is small, this method is simple and cost-effective, but if the amount of data is large, it turns into a disaster.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.2 Implement encryption &amp;amp; decryption functions consistent with database programs
&lt;/h3&gt;

&lt;p&gt;The second method is to modify the fuzzy query condition so that a database decryption function decrypts the data first and the fuzzy query runs on the result. Its advantages are low implementation, development, and usage costs.&lt;/p&gt;

&lt;p&gt;Users only need to slightly modify their previous fuzzy query conditions. However, the ciphertext and the encryption/decryption functions live together in the database, so this method offers no protection if the database account is leaked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Native SQL: select * from user where name like "%xxx%" 
After implementing the decryption function: select * from user where decode(name) like "%xxx%"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.3 Store after data masking
&lt;/h3&gt;

&lt;p&gt;Implement data masking on ciphertext and then store it in a fuzzy query column. This method could lack in terms of precision.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;For example, mobile number 13012345678 becomes 130****5678 after the masking algorithm is performed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
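&lt;p&gt;The masking step itself is trivial. A sketch in Java (not ShardingSphere code; the 3-prefix/4-suffix split is just this example's convention):&lt;/p&gt;

```java
// Sketch of the masking idea in 2.3: keep the first 3 and last 4 digits,
// hide the middle with '*'. Only the masked value would be stored in the
// fuzzy-query column, which is why precision is lost.
public class PhoneMask {

    public static String mask(String phone) {
        return phone.substring(0, 3) + "****" + phone.substring(phone.length() - 4);
    }

    public static void main(String[] args) {
        System.out.println(mask("13012345678")); // 130****5678
    }
}
```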



&lt;h3&gt;
  
  
  2.4 Perform encrypted storage after tokenization and combination
&lt;/h3&gt;

&lt;p&gt;This method tokenizes the plaintext into fixed-length groups, encrypts each group, and stores the results in an additional column, effectively splitting one field into many. For example, suppose the minimum query unit is four English characters (or two Chinese characters):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ningyu1&lt;/code&gt; is grouped four characters at a time, so the first group is &lt;code&gt;ning&lt;/code&gt;, the second &lt;code&gt;ingy&lt;/code&gt;, the third &lt;code&gt;ngyu&lt;/code&gt;, the fourth &lt;code&gt;gyu1&lt;/code&gt;, and so on. All the groups are encrypted and stored in the fuzzy query column. To retrieve all data containing four given characters, such as &lt;code&gt;ingy&lt;/code&gt;, encrypt them the same way and query with &lt;code&gt;like "%partial%"&lt;/code&gt;.&lt;/p&gt;
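&lt;p&gt;The grouping step can be sketched as follows (&lt;code&gt;groups&lt;/code&gt; is a hypothetical helper, not ShardingSphere code; the per-group encryption is omitted):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;

// Slide a fixed-length window over the plaintext; each group would then be
// encrypted and stored in the fuzzy-query column.
public class NGramSplit {

    public static List<String> groups(String plain, int size) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i + size <= plain.length(); i++) {
            result.add(plain.substring(i, i + size));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(groups("ningyu1", 4)); // [ning, ingy, ngyu, gyu1]
    }
}
```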

&lt;p&gt;&lt;strong&gt;Shortcomings:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increased storage cost: sliding-window grouping multiplies the amount of data, and the data grows longer again after encryption.&lt;/li&gt;
&lt;li&gt;Limited fuzzy-query length: for security reasons the group length cannot be too short, otherwise it is easily cracked with a &lt;a href="https://www.techtarget.com/whatis/definition/rainbow-table" rel="noopener noreferrer"&gt;rainbow table&lt;/a&gt;. In the example above, the fuzzy query string must be at least 4 letters/digits or 2 Chinese characters long.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2.5 Single-character digest algorithm (default fuzzy query algorithm provided in ShardingSphere &lt;a href="https://medium.com/faun/shardingsphere-5-3-0-is-released-new-features-and-improvements-bf4d1c43b09b?source=your_stories_page-------------------------------------" rel="noopener noreferrer"&gt;version 5.3.0&lt;/a&gt;)
&lt;/h3&gt;

&lt;p&gt;Although the above methods are all viable, it’s only natural to wonder whether there’s a better alternative. In our community, we found that encrypting and storing characters one by one balances performance and query capability, but fails to meet security requirements.&lt;/p&gt;

&lt;p&gt;Then what’s the ideal solution? Inspired by masking algorithms and cryptographic hash functions, we find that data loss and one-way functions can be used.&lt;/p&gt;

&lt;p&gt;The cryptographic hash function should have the following four features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;For any given message, it should be easy to calculate the hash value.&lt;/li&gt;
&lt;li&gt;It should be difficult to infer the original message from a known hash value.&lt;/li&gt;
&lt;li&gt;It should not be feasible to modify the message without changing the hash value.&lt;/li&gt;
&lt;li&gt;There should only be a very low chance that two different messages produce the same hash value.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; thanks to the one-way function, the original message cannot be inferred from the stored value. To improve the accuracy of the fuzzy query we would like to encrypt single characters, but a plain single-character digest can be cracked with a rainbow table.&lt;/p&gt;

&lt;p&gt;So we take a one-way function (to make sure the same character always encrypts to the same output) and increase the frequency of collisions (to make sure each output maps back to N possible originals), which greatly enhances security.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Fuzzy query algorithm
&lt;/h2&gt;

&lt;p&gt;Apache ShardingSphere implements a universal fuzzy query algorithm by using the below single-character digest algorithm &lt;code&gt;org.apache.shardingsphere.encrypt.algorithm.like.CharDigestLikeEncryptAlgorithm&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class CharDigestLikeEncryptAlgorithm implements LikeEncryptAlgorithm&amp;lt;Object, String&amp;gt; {

    private static final String DELTA = "delta";

    private static final String MASK = "mask";

    private static final String START = "start";

    private static final String DICT = "dict";

    private static final int DEFAULT_DELTA = 1;

    private static final int DEFAULT_MASK = 0b1111_0111_1101;

    private static final int DEFAULT_START = 0x4e00;

    private static final int MAX_NUMERIC_LETTER_CHAR = 255;

    @Getter
    private Properties props;

    private int delta;

    private int mask;

    private int start;

    private Map&amp;lt;Character, Integer&amp;gt; charIndexes;

    @Override
    public void init(final Properties props) {
        this.props = props;
        delta = createDelta(props);
        mask = createMask(props);
        start = createStart(props);
        charIndexes = createCharIndexes(props);
    }

    private int createDelta(final Properties props) {
        if (props.containsKey(DELTA)) {
            String delta = props.getProperty(DELTA);
            try {
                return Integer.parseInt(delta);
            } catch (NumberFormatException ex) {
                throw new EncryptAlgorithmInitializationException("CHAR_DIGEST_LIKE", "delta can only be a decimal number");
            }
        }
        return DEFAULT_DELTA;
    }

    private int createMask(final Properties props) {
        if (props.containsKey(MASK)) {
            String mask = props.getProperty(MASK);
            try {
                return Integer.parseInt(mask);
            } catch (NumberFormatException ex) {
                throw new EncryptAlgorithmInitializationException("CHAR_DIGEST_LIKE", "mask can only be a decimal number");
            }
        }
        return DEFAULT_MASK;
    }

    private int createStart(final Properties props) {
        if (props.containsKey(START)) {
            String start = props.getProperty(START);
            try {
                return Integer.parseInt(start);
            } catch (NumberFormatException ex) {
                throw new EncryptAlgorithmInitializationException("CHAR_DIGEST_LIKE", "start can only be a decimal number");
            }
        }
        return DEFAULT_START;
    }

    private Map&amp;lt;Character, Integer&amp;gt; createCharIndexes(final Properties props) {
        String dictContent = props.containsKey(DICT) &amp;amp;&amp;amp; !Strings.isNullOrEmpty(props.getProperty(DICT)) ? props.getProperty(DICT) : initDefaultDict();
        Map&amp;lt;Character, Integer&amp;gt; result = new HashMap&amp;lt;&amp;gt;(dictContent.length(), 1);
        for (int index = 0; index &amp;lt; dictContent.length(); index++) {
            result.put(dictContent.charAt(index), index);
        }
        return result;
    }

    @SneakyThrows
    private String initDefaultDict() {
        InputStream inputStream = CharDigestLikeEncryptAlgorithm.class.getClassLoader().getResourceAsStream("algorithm/like/common_chinese_character.dict");
        LineProcessor&amp;lt;String&amp;gt; lineProcessor = new LineProcessor&amp;lt;String&amp;gt;() {

            private final StringBuilder builder = new StringBuilder();

            @Override
            public boolean processLine(final String line) {
                if (line.startsWith("#") || 0 == line.length()) {
                    return true;
                } else {
                    builder.append(line);
                    return false;
                }
            }

            @Override
            public String getResult() {
                return builder.toString();
            }
        };
        return CharStreams.readLines(new InputStreamReader(inputStream, Charsets.UTF_8), lineProcessor);
    }

    @Override
    public String encrypt(final Object plainValue, final EncryptContext encryptContext) {
        return null == plainValue ? null : digest(String.valueOf(plainValue));
    }

    private String digest(final String plainValue) {
        StringBuilder result = new StringBuilder(plainValue.length());
        for (char each : plainValue.toCharArray()) {
            char maskedChar = getMaskedChar(each);
            if ('%' == maskedChar) {
                result.append(each);
            } else {
                result.append(maskedChar);
            }
        }
        return result.toString();
    }

    private char getMaskedChar(final char originalChar) {
        if ('%' == originalChar) {
            return originalChar;
        }
        if (originalChar &amp;lt;= MAX_NUMERIC_LETTER_CHAR) {
            return (char) ((originalChar + delta) &amp;amp; mask);
        }
        if (charIndexes.containsKey(originalChar)) {
            return (char) (((charIndexes.get(originalChar) + delta) &amp;amp; mask) + start);
        }
        return (char) (((originalChar + delta) &amp;amp; mask) + start);
    }

    @Override
    public String getType() {
        return "CHAR_DIGEST_LIKE";
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Define a binary &lt;code&gt;mask&lt;/code&gt; code, &lt;code&gt;0b1111_0111_1101&lt;/code&gt; (&lt;code&gt;mask&lt;/code&gt;), used to lose precision.&lt;/li&gt;
&lt;li&gt;Save common Chinese characters, in disrupted order, as a &lt;code&gt;map&lt;/code&gt; dictionary.&lt;/li&gt;
&lt;li&gt;For digits, English and Latin characters, take the &lt;code&gt;Unicode&lt;/code&gt; value of the single character.&lt;/li&gt;
&lt;li&gt;For a Chinese character present in the dictionary, take its &lt;code&gt;index&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For any other character, take the &lt;code&gt;Unicode&lt;/code&gt; value of the single character.&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;1 (delta)&lt;/code&gt; to the number obtained above, so that no original text ever appears in the database.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AND&lt;/code&gt; the offset value with &lt;code&gt;mask&lt;/code&gt; in binary, losing 2 bits of precision.&lt;/li&gt;
&lt;li&gt;For digits, English and Latin characters, output the result directly after the loss of precision.&lt;/li&gt;
&lt;li&gt;For the remaining characters, add the common-character &lt;code&gt;start&lt;/code&gt; code to the result after the loss of precision and output it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. The fuzzy algorithm development progress
&lt;/h2&gt;

&lt;h3&gt;
  
  
  4.1 The first edition
&lt;/h3&gt;

&lt;p&gt;Simply &lt;code&gt;AND&lt;/code&gt; the &lt;code&gt;Unicode&lt;/code&gt; of each common character with the &lt;code&gt;mask&lt;/code&gt; code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Mask: 0b11111111111001111101
The original character: 0b1000101110101111讯
After encryption: 0b1000101000101101設
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
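&lt;p&gt;The first-edition transformation is just a bitwise &lt;code&gt;AND&lt;/code&gt; with the mask. A minimal sketch using the example values above:&lt;/p&gt;

```java
// First edition: AND a character's Unicode value with the mask
// (values taken from the example above).
public class FirstEdition {

    private static final int MASK = 0b11111111111001111101;

    public static char encrypt(final char original) {
        return (char) (original & MASK);
    }

    public static void main(String[] args) {
        // '讯' (0b1000101110101111) becomes '設' (0b1000101000101101)
        System.out.println(encrypt('讯'));
    }
}
```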





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Assuming we know the key and encryption algorithm, the original string after a backward pass is:
1.0b1000101100101101 謭
2.0b1000101100101111 謯
3.0b1000101110101101 训
4.0b1000101110101111 讯
5.0b1000101010101101 読
6.0b1000101010101111 誯
7.0b1000101000101111 訯
8.0b1000101000101101 設
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
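&lt;p&gt;The backward pass above can be reproduced by enumerating every subset of the bits the mask zeroed out. A sketch, assuming the attacker knows the mask (here its low 16 bits, which drop 3 bits, giving 2^3 = 8 candidates):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Enumerate all originals consistent with an encrypted character:
// every lost (zeroed) mask bit may have been either 0 or 1.
public class BackwardPass {

    public static Set<Character> candidates(char cipher, int mask) {
        List<Integer> lostBits = new ArrayList<>();
        for (int b = 0; b < 16; b++) {
            if ((mask >> b & 1) == 0) {
                lostBits.add(b);
            }
        }
        Set<Character> result = new LinkedHashSet<>();
        for (int subset = 0; subset < (1 << lostBits.size()); subset++) {
            int candidate = cipher;
            for (int i = 0; i < lostBits.size(); i++) {
                if ((subset >> i & 1) == 1) {
                    candidate |= 1 << lostBits.get(i);
                }
            }
            result.add((char) candidate);
        }
        return result;
    }

    public static void main(String[] args) {
        // '設' is the encrypted form of '讯' under the example mask
        Set<Character> all = candidates('設', 0b1111111001111101);
        System.out.println(all.size());         // 8
        System.out.println(all.contains('讯')); // true
    }
}
```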



&lt;p&gt;We find that, depending on the number of missing bits, each encrypted character can be traced back to &lt;code&gt;2^n&lt;/code&gt; candidate Chinese characters. In decimal, the &lt;code&gt;Unicode&lt;/code&gt; code points of common Chinese characters are spaced far apart, so most of the candidates inferred backward are uncommon characters, which makes the original character easy to pick out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh97qfn7u3lkug9lch1p.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh97qfn7u3lkug9lch1p.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4.2 The second edition
&lt;/h3&gt;

&lt;p&gt;Since the &lt;code&gt;Unicode&lt;/code&gt; intervals of common Chinese characters are irregular, we planned to keep only the last few bits of a character's &lt;code&gt;Unicode&lt;/code&gt; and use them, converted to decimal, as an &lt;code&gt;index&lt;/code&gt; into a table of common Chinese characters. This way, even when the algorithm is known, no uncommon characters appear after a backward pass, and the distractors are no longer easy to eliminate.&lt;/p&gt;

&lt;p&gt;How many trailing bits of the &lt;code&gt;Unicode&lt;/code&gt; we keep is a trade-off between fuzzy-query accuracy and resistance to decryption: the higher the accuracy, the lower the cracking difficulty.&lt;/p&gt;

&lt;p&gt;Let’s take a look at the collision degree of common Chinese characters under our algorithm:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When &lt;code&gt;mask&lt;/code&gt;=0b0011_1111_1111:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbh50dvwm6y3usddlwdu.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbh50dvwm6y3usddlwdu.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;When &lt;code&gt;mask&lt;/code&gt;=0b0001_1111_1111:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fzfxgji033y71zug0nx.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fzfxgji033y71zug0nx.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the mantissa of Chinese characters we tried keeping 10 and 9 bits. Keeping 10 bits gives a more accurate query because the collisions are much weaker; nevertheless, if the algorithm and the key are known, the 1:1 characters can be traced back to their original text.&lt;/p&gt;

&lt;p&gt;Keeping 9 bits gives a less accurate query because the collisions are relatively stronger, but there are fewer 1:1 characters. Whether we keep 10 or 9 bits, the distribution remains very unbalanced due to the irregular &lt;code&gt;Unicode&lt;/code&gt; of Chinese characters, and the overall collision probability cannot be controlled.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.3 The third edition
&lt;/h3&gt;

&lt;p&gt;In response to the unevenly distributed problem found in the second edition, we take common characters with disrupted order as the dictionary table.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The text to be encrypted first looks up its &lt;code&gt;index&lt;/code&gt; in the out-of-order dictionary table, and this subscript replaces the irregular &lt;code&gt;Unicode&lt;/code&gt;.&lt;br&gt;
Uncommon characters fall back to their &lt;code&gt;Unicode&lt;/code&gt;. (Note: distribute the values to be calculated as evenly as possible.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The next step is to perform the &lt;code&gt;AND&lt;/code&gt; operation with &lt;code&gt;mask&lt;/code&gt; and lose 2-bit precision to increase the frequency of collisions.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s take a look at the collision degree of common Chinese characters under our algorithm:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When &lt;code&gt;mask&lt;/code&gt;=0b1111_1011_1101:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg06jetszq8irbl15dpm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg06jetszq8irbl15dpm5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;When &lt;code&gt;mask&lt;/code&gt;=0b0111_1011_1101:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw87aomg98b34ww7o3uz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw87aomg98b34ww7o3uz.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the &lt;code&gt;mask&lt;/code&gt; leaves 11 bits, you can see that the collision distribution is concentrated at 1:4. When &lt;code&gt;mask&lt;/code&gt; leaves 10 bits, the number becomes 1:8. At this time, we only need to adjust the number of precision losses to control whether the collision is 1:2, 1:4 or 1:8.&lt;/p&gt;
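&lt;p&gt;As a sanity check of the numbers above: assuming the 12-bit index space of the dictionary, the collision ratio is simply 2 to the power of the number of bits the mask drops:&lt;/p&gt;

```java
// n bits dropped by the mask means each stored character maps back to
// roughly 2^n originals (12-bit index space assumed, as in the dictionary).
public class CollisionDegree {

    public static int collisions(final int mask) {
        int droppedBits = 12 - Integer.bitCount(mask);
        return 1 << droppedBits;
    }

    public static void main(String[] args) {
        System.out.println(collisions(0b1111_1011_1101)); // 4, i.e. 1:4
        System.out.println(collisions(0b0111_1011_1101)); // 8, i.e. 1:8
    }
}
```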

&lt;p&gt;If the &lt;code&gt;mask&lt;/code&gt; kept every bit, and the algorithm and key were known, a character could be traced back 1:1, because what we calculate above is the collision degree among common characters only. Once the 4 bits missing in front of the 16-bit binary of a Chinese character are taken into account, the situation becomes &lt;code&gt;2^5=32&lt;/code&gt; cases.&lt;/p&gt;

&lt;p&gt;Since we encrypt the whole text, even if an individual character were inferred backward, the impact on overall security would be small and no mass data leak would follow. Moreover, a backward pass requires knowing the algorithm, key, &lt;code&gt;delta&lt;/code&gt; and dictionary, so it cannot be carried out from the data in the database alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. How to use fuzzy query
&lt;/h2&gt;

&lt;p&gt;Fuzzy query requires configuring &lt;code&gt;encryptors&lt;/code&gt; (encryption algorithm configuration), &lt;code&gt;likeQueryColumn&lt;/code&gt; (fuzzy query column name), and &lt;code&gt;likeQueryEncryptorName&lt;/code&gt; (encryption algorithm name of the fuzzy query column) in the encryption configuration.&lt;/p&gt;

&lt;p&gt;Please refer to the following configuration. Add your own sharding algorithm and data source.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dataSources:
  ds_0:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.jdbc.Driver
    jdbcUrl: jdbc:mysql://127.0.0.1:3306/test?allowPublicKeyRetrieval=true
    username: root
    password: root

rules:
- !ENCRYPT
  encryptors:
    like_encryptor:
      type: CHAR_DIGEST_LIKE
    aes_encryptor:
      type: AES
      props:
        aes-key-value: 123456abc
  tables:
    user:
      columns:
        name:
          cipherColumn: name
          encryptorName: aes_encryptor
          assistedQueryColumn: name_ext
          assistedQueryEncryptorName: aes_encryptor
          likeQueryColumn: name_like
          likeQueryEncryptorName: like_encryptor
        phone:
          cipherColumn: phone
          encryptorName: aes_encryptor
          likeQueryColumn: phone_like
          likeQueryEncryptorName: like_encryptor
  queryWithCipherColumn: true


props:
  sql-show: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Insert&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logic SQL: insert into user ( id, name, phone, sex) values ( 1, '熊高祥', '13012345678', '男')
Actual SQL: ds_0 ::: insert into user ( id, name, name_ext, name_like, phone, phone_like, sex) values (1, 'gyVPLyhIzDIZaWDwTl3n4g==', 'gyVPLyhIzDIZaWDwTl3n4g==', '佹堝偀', 'qEmE7xRzW0d7EotlOAt6ww==', '04101454589', '男')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logic SQL: update user set name = '熊高祥123', sex = '男1' where sex ='男' and phone like '130%'
Actual SQL: ds_0 ::: update user set name = 'K22HjufsPPy4rrf4PD046A==', name_ext = 'K22HjufsPPy4rrf4PD046A==', name_like = '佹堝偀014', sex = '男1' where sex ='男' and phone_like like '041%'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logic SQL: select * from user where (id = 1 or phone = '13012345678') and name like '熊%'
Actual SQL: ds_0 ::: select `user`.`id`, `user`.`name` AS `name`, `user`.`sex`, `user`.`phone` AS `phone`, `user`.`create_time` from user where (id = 1 or phone = 'qEmE7xRzW0d7EotlOAt6ww==') and name_like like '佹%'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select: federated table sub-query&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logic SQL: select * from user LEFT JOIN user_ext on user.id=user_ext.id where user.id in (select id from user where sex = '男' and name like '熊%')
Actual SQL: ds_0 ::: select `user`.`id`, `user`.`name` AS `name`, `user`.`sex`, `user`.`phone` AS `phone`, `user`.`create_time`, `user_ext`.`id`, `user_ext`.`address` from user LEFT JOIN user_ext on user.id=user_ext.id where user.id in (select id from user where sex = '男' and name_like like '佹%')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logic SQL: delete from user where sex = '男' and name like '熊%'
Actual SQL: ds_0 ::: delete from user where sex = '男' and name_like like '佹%'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above example demonstrates how fuzzy query columns rewrite SQL in different SQL syntaxes to support fuzzy queries.&lt;/p&gt;

&lt;p&gt;This blog post introduced you to the working principles of fuzzy query and used specific examples to demonstrate how to use it. We hope that through this article, you will have gained a basic understanding of fuzzy queries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;a href="https://shardingsphere.apache.org/document/current/en/downloads/" rel="noopener noreferrer"&gt;Download Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://shardingsphere.apache.org/" rel="noopener noreferrer"&gt;Project Address&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere-on-cloud" rel="noopener noreferrer"&gt;ShardingSphere-on-Cloud&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;

&lt;p&gt;Xiong Gaoxiang, an engineer at &lt;a href="https://global.iflytek.com/" rel="noopener noreferrer"&gt;Iflytek&lt;/a&gt; and a ShardingSphere Contributor, is responsible for the data encryption and data masking R&amp;amp;D.&lt;/p&gt;

</description>
      <category>database</category>
      <category>java</category>
      <category>opensource</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Use AWS CloudFormation to create ShardingSphere HA clusters</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Thu, 15 Dec 2022 08:37:52 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/use-aws-cloudformation-to-create-shardingsphere-ha-clusters-1d02</link>
      <guid>https://dev.to/apache_shardingsphere/use-aws-cloudformation-to-create-shardingsphere-ha-clusters-1d02</guid>
      <description>&lt;h2&gt;
  
  
  What’s AWS CloudFormation?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;AWS CloudFormation&lt;/a&gt; is an infrastructure as code (IaC) service that allows you to easily model, provision, and manage all the cloud&lt;br&gt;
services provided by &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt; by custom templates.&lt;/p&gt;

&lt;p&gt;Traditionally, when we create a new architecture, we need to configure it in AWS manually, step by step, which invites errors such as forgetting a step.&lt;/p&gt;

&lt;p&gt;Now with CloudFormation, we can use declarative configuration to define resources and then create and manage them without worrying about the dependency order of the resources.&lt;/p&gt;

&lt;p&gt;In real-world scenarios, we often need to create duplicate architectures. For example, we can build a set of MQ clusters in a pre-release environment whose architecture and production environment are consistent, and create an &lt;code&gt;AutoScalingGroup&lt;/code&gt; with the same configuration in each availability zone (AZ).&lt;/p&gt;

&lt;p&gt;Through CloudFormation, those repeated architectures can then be expressed in the form of code and stored in the code repository for management. This way, they can be easily integrated into existing CI/CD pipelines to achieve the change of infrastructure, in accordance with the current DevOps process. As a result, the infrastructure change is more transparent, repeatable, testable and auditable, simplifying the management of cloud systems.&lt;/p&gt;
&lt;h2&gt;
  
  
  What’s Apache ShardingSphere?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/" rel="noopener noreferrer"&gt;Apache ShardingSphere&lt;/a&gt; is a distributed database ecosystem that can transform any database into a distributed database system, and enhance it with sharding, elastic scaling, encryption features &amp;amp; more.&lt;/p&gt;

&lt;p&gt;Apache ShardingSphere follows the &lt;a href="https://medium.com/faun/whats-the-database-plus-concepand-what-challenges-can-it-solve-715920ba65aa?source=your_stories_page-------------------------------------" rel="noopener noreferrer"&gt;Database Plus&lt;/a&gt; concept, designed to build an ecosystem on top of fragmented heterogeneous databases. It focuses on how to fully use the computing and storage capabilities of databases rather than creating a brand-new database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/document/current/en/overview/" rel="noopener noreferrer"&gt;ShardingSphere-Proxy&lt;/a&gt; is a transparent database proxy that supports any client that uses &lt;a href="https://www.mysql.com/" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt;, &lt;a href="https://www.postgresql.org/" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt; and &lt;a href="https://opengauss.org/en/" rel="noopener noreferrer"&gt;openGauss&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deploy with CloudFormation
&lt;/h2&gt;

&lt;p&gt;As an important part of the data infrastructure, the ShardingSphere-Proxy cluster needs to be highly available. For these reasons, our community hopes that you can manage it through IaC and enjoy the benefits IaC brings.&lt;/p&gt;

&lt;p&gt;In the following sections we’ll use CloudFormation to create a ShardingSphere-Proxy cluster spanning multiple AZs. Before writing the CloudFormation templates, we have to understand the architecture of a ShardingSphere-Proxy cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dnbkwmf6ztqr6xv9hf1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dnbkwmf6ztqr6xv9hf1.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please note that we use &lt;a href="https://zookeeper.apache.org/" rel="noopener noreferrer"&gt;Zookeeper&lt;/a&gt; as the Governance Center.&lt;/p&gt;

&lt;p&gt;As you can see, ShardingSphere-Proxy itself is a stateless application. In production scenarios, it only needs a LoadBalancer in front of it to flexibly distribute traffic between instances.&lt;/p&gt;

&lt;p&gt;To ensure the HA of the ZooKeeper cluster and the ShardingSphere-Proxy cluster, we use the following architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6sqflt39qoj3tfxleeu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6sqflt39qoj3tfxleeu.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Define CloudFormation Parameters
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/cloudformation/resources/templates/?nc1=h_ls" rel="noopener noreferrer"&gt;The CloudFormation template&lt;/a&gt; is a yaml or json file in which you can define all of the infrastructure. &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html" rel="noopener noreferrer"&gt;The CloudFormation parameter&lt;/a&gt; allows you to inject custom values into the templates. You can then &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html" rel="noopener noreferrer"&gt;reference&lt;/a&gt; these parameters when defining resources. Of course, we also provide default values, which can be overridden if needed.&lt;/p&gt;

&lt;p&gt;We use &lt;a href="https://github.com/cfndsl/cfndsl" rel="noopener noreferrer"&gt;cfndsl&lt;/a&gt; to write CloudFormation templates: cfndsl is a tool written in &lt;a href="https://www.ruby-lang.org/en/" rel="noopener noreferrer"&gt;Ruby&lt;/a&gt; that lets you generate CloudFormation templates from a DSL configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Parameter("ZookeeperInstanceType") {
  String
  Default "t2.nano"
}

Parameter("ShardingSphereInstanceType") {
  String
  Default "t2.micro"
}

Parameter("KeyName") {
  String
  Default "test-tf"
  Description "The ssh keypair for remote connection"
}

Parameter("ImageId") {
  Type "AWS::EC2::Image::Id"
  Default "ami-094bbd9e922dc515d"
}

Parameter("VpcId") {
  String
  Default "vpc-0ef2b7440d3ade8d5"
  Description "The id of your VPC"
}

....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more parameters, please refer to this &lt;a href="https://github.com/apache/shardingsphere-on-cloud/blob/main/cloudformation/multi-az/cf.rb#L21" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
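&lt;p&gt;For orientation, the first and fourth parameters above compile to plain CloudFormation JSON along these lines (hand-written for illustration, not verbatim cfndsl output):&lt;/p&gt;

```json
{
  "Parameters": {
    "ZookeeperInstanceType": {
      "Type": "String",
      "Default": "t2.nano"
    },
    "ImageId": {
      "Type": "AWS::EC2::Image::Id",
      "Default": "ami-094bbd9e922dc515d"
    }
  }
}
```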

&lt;h3&gt;
  
  
  Define CloudFormation Resources
&lt;/h3&gt;

&lt;p&gt;Note that an AWS Region usually includes at least three AZs, so we create three instances, one per AZ.&lt;/p&gt;

&lt;h4&gt;
  
  
  Zookeeper Clusters
&lt;/h4&gt;

&lt;p&gt;First, we deploy a Zookeeper EC2 instance in each AZ to ensure the HA of the ZooKeeper cluster.&lt;/p&gt;

&lt;p&gt;Then we create an internal domain name for each Zookeeper instance, and ShardingSphere-Proxy uses these domain names.&lt;/p&gt;

&lt;p&gt;When a ZooKeeper instance starts, cloud-init is used to automatically deploy the ZooKeeper service. You can view the cloud-init configuration here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(0..2).each do |i| 
    name = "ZK#{i+1}"
    EC2_Instance(name) {
      AvailabilityZone FnSelect(i, FnGetAZs(Ref("AWS::Region")))
      InstanceType Ref("ZookeeperInstanceType")
      ImageId Ref("ImageId")
      KeyName Ref("KeyName")
      SubnetId FnSelect(i, Ref("Subnets"))
      SecurityGroupIds Ref("SecurityGroupIds")
      Tags [ 
        Tag do 
          Key "Name"
          Value "ZK-#{i+1}"
        end
      ]

      server = "server.%{idx}=zk-%{idx}.${HostedZoneName}:2888:3888"
      UserData FnBase64(
        FnSub(
          IO.read("./zookeeper-cloud-init.yml"), 
          :SERVERS =&amp;gt; FnSub((0..2).map{|i| i == 0 ? server %{:idx =&amp;gt; i+1} : ("#{server}" %{:idx =&amp;gt; i+1}).insert(0, " " * 4)}.join("\n")), 
          :VERSION =&amp;gt; Ref("ZookeeperVersion"),
          :ZK_HEAP =&amp;gt; Ref("ZookeeperHeap"),
          :INDEX =&amp;gt; i+1,
        )
      )
    }

    domain = "zone#{name}"
    Route53_RecordSet(domain) {
      HostedZoneId Ref("HostedZoneId")
      Name FnSub("zk-#{i+1}.${HostedZoneName}")
      Type "A"
      ResourceRecords [FnGetAtt(name, "PrivateIp")]
      TTL "60"
    }
  end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
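&lt;p&gt;The Ruby map above renders two strings from the internal domain names: the &lt;code&gt;server.N&lt;/code&gt; lines that cloud-init writes into each node’s &lt;code&gt;zoo.cfg&lt;/code&gt;, and the client connection string later handed to ShardingSphere-Proxy. A minimal Python sketch of the result, assuming a HostedZoneName of &lt;code&gt;shardingsphere.org&lt;/code&gt;:&lt;/p&gt;

```python
# Sketch only: mirrors the string-building done by the cfndsl template above.
# "shardingsphere.org" stands in for the HostedZoneName stack parameter.
hosted_zone = "shardingsphere.org"

# server.N entries written into each ZooKeeper node's zoo.cfg via cloud-init
servers = "\n".join(
    f"server.{i + 1}=zk-{i + 1}.{hosted_zone}:2888:3888" for i in range(3)
)

# client connection string passed to ShardingSphere-Proxy as ZK_SERVERS
zk_servers = ",".join(f"zk-{i + 1}.{hosted_zone}:2181" for i in range(3))

print(servers)
print(zk_servers)
```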



&lt;h4&gt;
  
  
  ShardingSphere-Proxy Clusters
&lt;/h4&gt;

&lt;h5&gt;
  
  
  LaunchTemplate
&lt;/h5&gt;

&lt;p&gt;Next, we deploy an AutoScalingGroup in each AZ to ensure the HA of a ShardingSphere-Proxy cluster.&lt;/p&gt;

&lt;p&gt;Before creating the AutoScalingGroup, we need to create a LaunchTemplate in each AZ for the ShardingSphere-Proxy instance.&lt;/p&gt;

&lt;p&gt;Similarly, when the instance starts, cloud-init is used to automatically deploy the ShardingSphere-Proxy service. You can view the cloud-init configuration here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(0..2).each do |i| 
    name = "launchtemplate#{i}"
    EC2_LaunchTemplate(name) {
      LaunchTemplateName FnSub("shardingsphere-${TMPL_NAME}", :TMPL_NAME =&amp;gt; FnSelect(i, FnGetAZs(Ref('AWS::Region'))))
      LaunchTemplateData do 
        ImageId Ref("ImageId")
        InstanceType Ref("ShardingSphereInstanceType")
        KeyName Ref("KeyName")

        MetadataOptions do
          HttpEndpoint "enabled"
          HttpTokens   "required"
          InstanceMetadataTags "enabled"
        end

        Monitoring do
          Enabled  true
        end

        NetworkInterfaces [
          {
            :DeleteOnTermination =&amp;gt; false,
            :DeviceIndex =&amp;gt; 0,
            :NetworkInterfaceId =&amp;gt; FnGetAtt("networkiface#{i}", "Id")
          }
        ]

        TagSpecifications [
          {
            :ResourceType =&amp;gt; "instance",
            :Tags =&amp;gt; [
              {
                :Key =&amp;gt; "Name",
                :Value =&amp;gt; "shardingsphere-#{i+1}"
              }
            ]
          }
        ]

        UserData FnBase64(
          FnSub(
            IO.read("./shardingsphere-cloud-init.yml"), 
            :ZK_SERVERS =&amp;gt; FnSub((0..2).map{|i| "zk-#{i+1}.${HostedZoneName}:2181" }.join(",")), 
            :VERSION =&amp;gt; Ref("ShardingSphereVersion"),
            :JAVA_MEM_OPTS =&amp;gt; Ref("ShardingSphereJavaMemOpts")
          )
        )
      end
    }
  end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  TargetGroup
&lt;/h5&gt;

&lt;p&gt;As we use &lt;a href="https://aws.amazon.com/elasticloadbalancing/" rel="noopener noreferrer"&gt;ELB&lt;/a&gt; to load traffic between each ShardingSphere-Proxy, ELB should be used in combination with &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticloadbalancingv2-targetgroup.html" rel="noopener noreferrer"&gt;TargetGroup&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Instances created by AutoScalingGroup are automatically registered to TargetGroup. ELB will then forward the traffic to TargetGroup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ElasticLoadBalancingV2_TargetGroup("sslbtg") {
    Name "shardingsphere-lb-tg"
    Port Ref("ShardingSpherePort")
    Protocol "TCP"
    VpcId Ref("VpcId")
    TargetGroupAttributes [
      TargetGroupAttribute do
        Key "preserve_client_ip.enabled"
        Value "false"
      end
    ]
    Tags [
      Tag do
        Key "Name"
        Value "shardingsphere"
      end
    ]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  AutoScalingGroup
&lt;/h5&gt;

&lt;p&gt;After creating the aforementioned resources, now we can create an AutoScalingGroup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(0..2).each do |i| 
    name = "autoscaling#{i}"
    AutoScaling_AutoScalingGroup(name) {
      AutoScalingGroupName "shardingsphere-#{i}" 
      AvailabilityZones [FnSelect(i, FnGetAZs(Ref("AWS::Region")))]
      DesiredCapacity "1"
      MaxSize "1"
      MinSize "1"
      HealthCheckGracePeriod  60
      HealthCheckType "EC2"

      TargetGroupARNs [ Ref("sslbtg")]

      LaunchTemplate do
        LaunchTemplateName  FnSub("shardingsphere-${TMPL_NAME}", :TMPL_NAME =&amp;gt; FnSelect(i, FnGetAZs(Ref('AWS::Region'))))
        Version FnGetAtt("launchtemplate#{i}", "LatestVersionNumber")
      end
    }
  end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  LoadBalancer &amp;amp; Listener
&lt;/h5&gt;

&lt;p&gt;Create an internal LoadBalancer and Listener for the external service of the ShardingSphere-Proxy cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ElasticLoadBalancingV2_LoadBalancer("ssinternallb") {
    Name "shardingsphere-internal-lb"
    Scheme "internal"
    Type "network"

    mappings = (0..2).map { |x| 
        SubnetMapping do
          SubnetId FnSelect(x, Ref("Subnets"))
        end
    }
    SubnetMappings mappings
    Tags [
      Tag do
        Key "Name"
        Value "shardingsphere"
      end
    ]
  }

  ElasticLoadBalancingV2_Listener("sslblistener") {
    Port Ref("ShardingSpherePort")
    LoadBalancerArn Ref("ssinternallb")
    Protocol "TCP"
    DefaultActions [
      {
        :Type =&amp;gt; "forward",
        :TargetGroupArn =&amp;gt; Ref("sslbtg")
      }
    ]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Internal Domain Names
&lt;/h5&gt;

&lt;p&gt;Finally, we create the internal domain names for the external service of the ShardingSphere-Proxy cluster. These domain names point to the internal LoadBalancer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Route53_RecordSet("ssinternaldomain") {
    HostedZoneId Ref("HostedZoneId")
    Name FnSub("proxy.${HostedZoneName}")
    Type "A"
    AliasTarget do 
      HostedZoneId FnGetAtt("ssinternallb", "CanonicalHostedZoneID")
      DNSName FnGetAtt("ssinternallb", "DNSName")
      EvaluateTargetHealth true
    end
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deployment
&lt;/h3&gt;

&lt;p&gt;Use the command &lt;code&gt;cfndsl cf.rb -o cf.json --pretty&lt;/code&gt; to generate the final &lt;a href="https://github.com/apache/shardingsphere-on-cloud/blob/main/cloudformation/multi-az/cf.json" rel="noopener noreferrer"&gt;configuration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Create a stack in the CloudFormation console and select the config file we just generated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7tu0p74zs2v4wb1cs096.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7tu0p74zs2v4wb1cs096.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
After a few minutes, you will find that all the resources have been created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x7dr6676ion1f1rtgkp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x7dr6676ion1f1rtgkp.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
You can find the complete code &lt;a href="https://github.com/apache/shardingsphere-on-cloud/tree/main/cloudformation/multi-az" rel="noopener noreferrer"&gt;here&lt;/a&gt;, or visit our &lt;a href="https://shardingsphere.apache.org/oncloud/current/en/operation-guide/cloudformation-multi-az/" rel="noopener noreferrer"&gt;website&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test
&lt;/h3&gt;

&lt;p&gt;The test is designed to ensure that the clusters we created work as expected. A simple case is illustrated below.&lt;/p&gt;

&lt;p&gt;Use DistSQL (Distributed SQL) to add two data sources and create a simple sharding rule, and then insert data. We can see that the query returns correct results.&lt;/p&gt;

&lt;p&gt;By default, when we use CloudFormation, an internal domain name &lt;code&gt;proxy.shardingsphere.org&lt;/code&gt; will be created. The username and password of the ShardingSphere-Proxy cluster are both &lt;code&gt;root&lt;/code&gt;.&lt;/p&gt;
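&lt;p&gt;A minimal DistSQL session for such a smoke test might look like the following. The storage-unit names, URLs, and credentials are placeholders; consult the DistSQL manual for the exact 5.3.0 syntax:&lt;/p&gt;

```sql
-- Register two placeholder data sources as storage units
REGISTER STORAGE UNIT ds_0 (
    URL="jdbc:mysql://127.0.0.1:3306/demo_ds_0",
    USER="root",
    PASSWORD="root"
), ds_1 (
    URL="jdbc:mysql://127.0.0.1:3306/demo_ds_1",
    USER="root",
    PASSWORD="root"
);

-- A simple sharding rule: rows of t_order are split across ds_0/ds_1
CREATE SHARDING TABLE RULE t_order (
    STORAGE_UNITS(ds_0, ds_1),
    SHARDING_COLUMN=order_id,
    TYPE(NAME="hash_mod", PROPERTIES("sharding-count"="2"))
);

CREATE TABLE t_order (order_id INT PRIMARY KEY, user_id INT);
INSERT INTO t_order (order_id, user_id) VALUES (1, 1), (2, 2);
SELECT * FROM t_order;
```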

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3y4sh3s1e7urvy240ez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3y4sh3s1e7urvy240ez.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: &lt;a href="https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/distsql/" rel="noopener noreferrer"&gt;DistSQL (Distributed SQL)&lt;/a&gt; is ShardingSphere’s SQL-like operating language. It’s used the same way as standard SQL, and is designed to provide incremental SQL operation capability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS CloudFormation is an incredibly powerful service that greatly simplifies provisioning and iterating on ShardingSphere-Proxy clusters. With this new addition, it is now easier than ever to get started with Apache ShardingSphere.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>ShardingSphere 5.3.0 is released: new features and improvements</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Mon, 12 Dec 2022 07:31:39 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/shardingsphere-530-is-released-new-features-and-improvements-c50</link>
      <guid>https://dev.to/apache_shardingsphere/shardingsphere-530-is-released-new-features-and-improvements-c50</guid>
      <description>&lt;p&gt;After 1.5 months in development, &lt;a href="https://shardingsphere.apache.org/document/current/en/downloads/"&gt;Apache ShardingSphere 5.3.0&lt;/a&gt; is released. Our community merged &lt;a href="https://github.com/apache/shardingsphere/pulls?q=is%3Amerged+is%3Apr+milestone%3A5.3.0"&gt;687 PRs&lt;/a&gt; from contributors around the world.&lt;/p&gt;

&lt;p&gt;The new release has been improved in terms of features, performance, testing, documentation, examples, etc.&lt;/p&gt;

&lt;p&gt;The 5.3.0 release brings the following highlights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support fuzzy query for CipherColumn.&lt;/li&gt;
&lt;li&gt;Support Datasource-level heterogeneous database.&lt;/li&gt;
&lt;li&gt;Support checkpoint resume for data consistency check.&lt;/li&gt;
&lt;li&gt;Automatically start a distributed transaction, while executing DML statements across multiple shards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, release 5.3.0 also brings the following adjustments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove the Spring configuration.&lt;/li&gt;
&lt;li&gt;Systematically refactor the DistSQL syntax.&lt;/li&gt;
&lt;li&gt;Refactor the configuration format of ShardingSphere-Proxy.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Highlights
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Support fuzzy query for CipherColumn
&lt;/h3&gt;

&lt;p&gt;In previous versions, ShardingSphere’s Encrypt feature didn’t support the use of the &lt;code&gt;LIKE&lt;/code&gt; operator in SQL.&lt;/p&gt;

&lt;p&gt;For a while users strongly requested adding the &lt;code&gt;LIKE&lt;/code&gt; operator to the Encrypt feature: encrypted fields are usually strings, and executing &lt;code&gt;LIKE&lt;/code&gt; queries on strings is common practice.&lt;/p&gt;

&lt;p&gt;To minimize friction in accessing the Encrypt feature, our community has initiated a discussion about the implementation of encrypted &lt;code&gt;LIKE&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Since then, we’ve received a lot of feedback.&lt;/p&gt;

&lt;p&gt;Some community members even contributed their original encryption algorithm implementation supporting fuzzy queries after fully investigating conventional solutions.&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;The relevant issue can be found &lt;a href="https://github.com/apache/shardingsphere/issues/20435"&gt;here&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;For the algorithm design, please refer to the &lt;a href="https://github.com/apache/shardingsphere/files/9684570/default.pdf"&gt;attachment&lt;/a&gt; within the &lt;a href="https://github.com/apache/shardingsphere/files/9684570/default.pdf"&gt;issue&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The single-character digest algorithm contributed by community members is implemented as &lt;code&gt;CHAR_DIGEST_LIKE&lt;/code&gt; in the ShardingSphere encryption algorithm SPI.&lt;/p&gt;
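&lt;p&gt;The idea behind a &lt;code&gt;LIKE&lt;/code&gt;-capable cipher column is that each character is digested independently, so substring matching still works against an auxiliary ciphertext column. The following deliberately simplified Python sketch illustrates the principle only; it is not the actual &lt;code&gt;CHAR_DIGEST_LIKE&lt;/code&gt; implementation, which additionally maps multiple characters to the same digest to resist frequency analysis:&lt;/p&gt;

```python
import hashlib

# Illustration only: digest each character independently so that a
# plaintext substring digests to a contiguous substring of the stored value.
def char_digest(ch: str) -> str:
    return hashlib.sha256(ch.encode()).hexdigest()[:4]

def digest_string(s: str) -> str:
    return "".join(char_digest(c) for c in s)

# With such a column,  WHERE name LIKE '%bob%'  can be rewritten against
# the auxiliary column as  WHERE name_like LIKE '%<digest of "bob">%'
stored = digest_string("alice and bob")
assert digest_string("bob") in stored
```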

&lt;h3&gt;
  
  
  2. Support datasource-level heterogeneous database
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/"&gt;ShardingSphere&lt;/a&gt; supports a database gateway, but its heterogeneous capability is limited to the logical database in previous versions. This means that all the data sources under a logical database must be of the same database type.&lt;/p&gt;

&lt;p&gt;This new release supports datasource-level heterogeneous databases at the kernel level. This means the datasources under a logical database can be different database types, allowing you to use different databases to store data.&lt;/p&gt;

&lt;p&gt;Combined with ShardingSphere’s SQL dialect conversion capability, this new feature significantly enhances ShardingSphere’s heterogeneous data gateway capability.&lt;/p&gt;
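&lt;p&gt;As a rough illustration, a logical database could now mix storage units like the following (a hypothetical YAML sketch; URLs, credentials, and pool settings are placeholders):&lt;/p&gt;

```yaml
dataSources:
  ds_mysql:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    jdbcUrl: jdbc:mysql://127.0.0.1:3306/demo_ds
    username: root
    password: root
  ds_postgresql:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: org.postgresql.Driver
    jdbcUrl: jdbc:postgresql://127.0.0.1:5432/demo_ds
    username: postgres
    password: postgres
```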

&lt;h3&gt;
  
  
  3. Data migration: support checkpoint resume for data consistency check
&lt;/h3&gt;

&lt;p&gt;Data consistency checks happen at the later stage of data migration.&lt;/p&gt;

&lt;p&gt;Previously, the data consistency check was triggered and stopped by &lt;a href="https://shardingsphere.apache.org/document/5.1.0/en/concepts/distsql/"&gt;DistSQL&lt;/a&gt;. If a large amount of data was migrated and the data consistency check was stopped for any reason, the check would’ve had to be started again — which is sub-optimal and affects user experience.&lt;/p&gt;

&lt;p&gt;ShardingSphere 5.3.0 now supports checkpoint storage, which means data consistency checks can be resumed from the checkpoint.&lt;/p&gt;

&lt;p&gt;For example, if data is being verified during data migration and the user stops the verification for some reason, with the verification progress &lt;code&gt;(finished_percentage)&lt;/code&gt; being 5%, then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; STOP MIGRATION CHECK 'j0101395cd93b2cfc189f29958b8a0342e882';
Query OK, 0 rows affected (0.12 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; SHOW MIGRATION CHECK STATUS 'j0101395cd93b2cfc189f29958b8a0342e882';
+--------+--------+---------------------+-------------------+-------------------------+-------------------------+------------------+---------------+
| tables | result | finished_percentage | remaining_seconds | check_begin_time        | check_end_time          | duration_seconds | error_message |
+--------+--------+---------------------+-------------------+-------------------------+-------------------------+------------------+---------------+
| sbtest | false  | 5                   | 324               | 2022-11-10 19:27:15.919 | 2022-11-10 19:27:35.358 | 19               |               |
+--------+--------+---------------------+-------------------+-------------------------+-------------------------+------------------+---------------+
1 row in set (0.02 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, the user restarts the data verification. But the work does not have to restart from the beginning. The verification progress &lt;code&gt;(finished_percentage)&lt;/code&gt; remains at 5%.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; START MIGRATION CHECK 'j0101395cd93b2cfc189f29958b8a0342e882';
Query OK, 0 rows affected (0.35 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; SHOW MIGRATION CHECK STATUS 'j0101395cd93b2cfc189f29958b8a0342e882';
+--------+--------+---------------------+-------------------+-------------------------+----------------+------------------+---------------+
| tables | result | finished_percentage | remaining_seconds | check_begin_time        | check_end_time | duration_seconds | error_message |
+--------+--------+---------------------+-------------------+-------------------------+----------------+------------------+---------------+
| sbtest | false  | 5                   | 20                | 2022-11-10 19:28:49.422 |                | 1                |               |
+--------+--------+---------------------+-------------------+-------------------------+----------------+------------------+---------------+
1 row in set (0.02 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Limitation:&lt;/strong&gt; this new feature is unavailable with the &lt;code&gt;CRC32_MATCH&lt;/code&gt; algorithm because the algorithm calculates all data at once.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Automatically start a distributed transaction while executing DML statements across multiple shards
&lt;/h3&gt;

&lt;p&gt;Previously, even with XA and other distributed transactions configured, ShardingSphere could not guarantee the atomicity of DML statements that are routed to multiple shards — if users didn’t manually enable the transaction.&lt;/p&gt;

&lt;p&gt;Take the following SQL as an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;insert into account(id, balance, transaction_id) values
(1, 1, 1),(2, 2, 2),(3, 3, 3),(4, 4, 4),
(5, 5, 5),(6, 6, 6),(7, 7, 7),(8, 8, 8);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When this SQL is sharded according to &lt;code&gt;id mod 2&lt;/code&gt;, the ShardingSphere kernel layer will automatically split it into the following two SQL statements and route them to different shards for execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;insert into account(id, balance, transaction_id) values
(1, 1, 1),(3, 3, 3),(5, 5, 5),(7, 7, 7);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;insert into account(id, balance, transaction_id) values
(2, 2, 2),(4, 4, 4),(6, 6, 6),(8, 8, 8);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the user does not manually start a transaction and one of the sharded SQL statements fails to execute, atomicity cannot be guaranteed because the statement that succeeded cannot be rolled back.&lt;/p&gt;
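&lt;p&gt;The routing above can be sketched in a few lines. Without a surrounding transaction each shard’s insert commits independently, which is exactly the atomicity gap that 5.3.0 closes:&lt;/p&gt;

```python
# Sketch of the kernel's split above: rows are routed by id mod 2.
rows = [(i, i, i) for i in range(1, 9)]  # (id, balance, transaction_id)

shards = {0: [], 1: []}
for row in rows:
    shards[row[0] % 2].append(row)  # shard chosen by the sharding column

assert shards[1] == [(1, 1, 1), (3, 3, 3), (5, 5, 5), (7, 7, 7)]
assert shards[0] == [(2, 2, 2), (4, 4, 4), (6, 6, 6), (8, 8, 8)]
```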

&lt;p&gt;ShardingSphere 5.3.0 is optimized in terms of distributed transactions. If distributed transactions are configured in ShardingSphere, they can be automatically started when DML statements are routed to multiple shards. This way, we can ensure atomicity when executing DML statements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Significant Adjustments
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Remove the Spring configuration
&lt;/h3&gt;

&lt;p&gt;In earlier versions, &lt;a href="https://shardingsphere.apache.org/document/current/en/overview/#shardingsphere-jdbc"&gt;ShardingSphere-JDBC&lt;/a&gt; provided its services as a DataSource. If you wanted to introduce ShardingSphere-JDBC without modifying the code of a &lt;a href="https://spring.io/"&gt;Spring&lt;/a&gt;/&lt;a href="https://spring.io/projects/spring-boot"&gt;Spring Boot&lt;/a&gt; project, you needed the Spring/&lt;a href="https://www.javatpoint.com/spring-boot-starters"&gt;Spring Boot Starter&lt;/a&gt; modules provided by ShardingSphere.&lt;/p&gt;

&lt;p&gt;Although ShardingSphere supports multiple configuration formats, it also has the following problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When API changes, many config files need to be adjusted, which is a heavy workload.&lt;/li&gt;
&lt;li&gt;The community has to maintain multiple config files.&lt;/li&gt;
&lt;li&gt;The lifecycle management of Spring beans is susceptible to other dependencies of the project (for example, a PostProcessor may fail to take effect).&lt;/li&gt;
&lt;li&gt;Spring Boot Starter and Spring NameSpace are affected by Spring, and their configuration styles are different from YAML.&lt;/li&gt;
&lt;li&gt;Spring Boot Starter and Spring NameSpace are affected by the version of Spring. When users access them, the configuration may not be identified and dependency conflicts may occur. For example, Spring Boot versions 1.x and 2.x have different configuration styles.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://medium.com/codex/shardingsphere-jdbc-driver-released-a-jdbc-driver-that-requires-no-code-modifications-5464c30bcd64?source=your_stories_page-------------------------------------"&gt;ShardingSphere 5.1.2 first supported the introduction of ShardingSphere-JDBC in the form of JDBC Driver.&lt;/a&gt; That means applications only need to configure the Driver provided by ShardingSphere at the JDBC URL before accessing to ShardingSphere-JDBC.&lt;/p&gt;

&lt;p&gt;Removing the Spring configuration simplifies and unifies the configuration mode of ShardingSphere. This adjustment not only simplifies the configuration of ShardingSphere across different configuration modes, but also reduces maintenance work for the ShardingSphere community.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Systematically refactor the DistSQL syntax
&lt;/h3&gt;

&lt;p&gt;One of the characteristics of Apache ShardingSphere is its flexible rule configuration and resource control capability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/document/5.1.0/en/concepts/distsql/"&gt;DistSQL (Distributed SQL)&lt;/a&gt; is ShardingSphere’s SQL-like operating language. It’s used the same way as standard SQL, and is designed to provide incremental SQL operation capability.&lt;/p&gt;

&lt;p&gt;ShardingSphere 5.3.0 systematically refactors DistSQL. The community redesigned the syntax, semantics and operating procedure of DistSQL. The new version is more consistent with ShardingSphere’s design philosophy and focuses on a better user experience.&lt;/p&gt;

&lt;p&gt;Please refer to the &lt;a href="https://shardingsphere.apache.org/"&gt;latest ShardingSphere documentation&lt;/a&gt; for details. A DistSQL roadmap will be available soon, and you’re welcome to leave your feedback.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Refactor the configuration format of &lt;a href="https://shardingsphere.apache.org/document/current/en/quick-start/shardingsphere-proxy-quick-start/"&gt;ShardingSphere-Proxy&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In this update, ShardingSphere-Proxy has adjusted the configuration format and reduced config files required for startup.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;server.yaml&lt;/code&gt; before refactoring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rules:
  - !AUTHORITY
    users:
      - root@%:root
      - sharding@:sharding
    provider:
      type: ALL_PERMITTED
  - !TRANSACTION
    defaultType: XA
    providerType: Atomikos
  - !SQL_PARSER
    sqlCommentParseEnabled: true
    sqlStatementCache:
      initialCapacity: 2000
      maximumSize: 65535
    parseTreeCache:
      initialCapacity: 128
      maximumSize: 1024
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;server.yaml&lt;/code&gt; after refactoring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;authority:
  users:
    - user: root@%
      password: root
    - user: sharding
      password: sharding
  privilege:
    type: ALL_PERMITTED
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;transaction:
  defaultType: XA
  providerType: Atomikos
sqlParser:
  sqlCommentParseEnabled: true
  sqlStatementCache:
    initialCapacity: 2000
    maximumSize: 65535
  parseTreeCache:
    initialCapacity: 128
    maximumSize: 1024
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In ShardingSphere 5.3.0, &lt;code&gt;server.yaml&lt;/code&gt; is no longer required to start Proxy. If no config file is provided, Proxy starts with the default account root/root.&lt;/p&gt;

&lt;p&gt;ShardingSphere is fully committed to becoming cloud native. Thanks to DistSQL, ShardingSphere-Proxy’s config files can be further simplified, which makes container deployment easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Release Notes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  API Changes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;DistSQL: refactor syntax API, please refer to the user manual&lt;/li&gt;
&lt;li&gt;Proxy: change the configuration style of global rule, remove the exclamation mark&lt;/li&gt;
&lt;li&gt;Proxy: allow zero-configuration startup, enable the default account root/root when there is no Authority configuration&lt;/li&gt;
&lt;li&gt;Proxy: remove the default &lt;code&gt;logback.xml&lt;/code&gt; and use API initialization&lt;/li&gt;
&lt;li&gt;JDBC: remove the Spring configuration and use Driver + YAML configuration instead&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Enhancements
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;DistSQL: new syntax &lt;code&gt;REFRESH DATABASE METADATA&lt;/code&gt;, refresh logic database metadata&lt;/li&gt;
&lt;li&gt;Kernel: support DistSQL &lt;code&gt;REFRESH DATABASE METADATA&lt;/code&gt; to load configuration from the governance center and rebuild &lt;code&gt;MetaDataContext&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Support PostgreSQL/openGauss setting transaction isolation level&lt;/li&gt;
&lt;li&gt;Scaling: increase inventory task progress update frequency&lt;/li&gt;
&lt;li&gt;Scaling: &lt;code&gt;DATA_MATCH&lt;/code&gt; consistency check support checkpoint resume&lt;/li&gt;
&lt;li&gt;Scaling: support drop consistency check job via DistSQL&lt;/li&gt;
&lt;li&gt;Scaling: rename column from &lt;code&gt;sharding_total_count&lt;/code&gt; to &lt;code&gt;job_item_count&lt;/code&gt; in job list DistSQL response&lt;/li&gt;
&lt;li&gt;Scaling: add sharding column in incremental task SQL to avoid broadcast routing&lt;/li&gt;
&lt;li&gt;Scaling: sharding column could be updated when generating SQL&lt;/li&gt;
&lt;li&gt;Scaling: improve column value reader for &lt;code&gt;DATA_MATCH&lt;/code&gt; consistency check&lt;/li&gt;
&lt;li&gt;DistSQL: optimize encrypt DistSQL syntax, support LIKE query algorithm&lt;/li&gt;
&lt;li&gt;DistSQL: add properties value check when &lt;code&gt;REGISTER STORAGE UNIT&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;DistSQL: remove useless algorithms at the same time when &lt;code&gt;DROP RULE&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;DistSQL: &lt;code&gt;EXPORT DATABASE CONFIGURATION&lt;/code&gt; supports broadcast tables&lt;/li&gt;
&lt;li&gt;DistSQL: &lt;code&gt;REGISTER STORAGE UNIT&lt;/code&gt; supports heterogeneous data sources&lt;/li&gt;
&lt;li&gt;Encrypt: support &lt;code&gt;Encrypt&lt;/code&gt; LIKE feature&lt;/li&gt;
&lt;li&gt;Automatically start distributed transactions when executing DML statements across multiple shards&lt;/li&gt;
&lt;li&gt;Kernel: support &lt;code&gt;client \d&lt;/code&gt; for PostgreSQL and openGauss&lt;/li&gt;
&lt;li&gt;Kernel: support SELECT with GROUP BY and ORDER BY when the column contains null values&lt;/li&gt;
&lt;li&gt;Kernel: support parse &lt;code&gt;RETURNING&lt;/code&gt; clause of PostgreSQL/openGauss Insert&lt;/li&gt;
&lt;li&gt;Kernel: SQL &lt;code&gt;HINT&lt;/code&gt; performance improvement&lt;/li&gt;
&lt;li&gt;Kernel: support parsing MySQL CASE WHEN statements&lt;/li&gt;
&lt;li&gt;Kernel: support data source level heterogeneous database gateway&lt;/li&gt;
&lt;li&gt;(Experimental) Sharding: add sharding cache plugin&lt;/li&gt;
&lt;li&gt;Proxy: support more PostgreSQL datetime formats&lt;/li&gt;
&lt;li&gt;Proxy: support MySQL &lt;code&gt;COM_RESET_CONNECTION&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Scaling: improve &lt;code&gt;MySQLBinlogEventType.valueOf&lt;/code&gt; to support unknown event type&lt;/li&gt;
&lt;li&gt;Kernel: support case when for federation&lt;/li&gt;
&lt;/ol&gt;
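
Two of the items above, the encrypt DistSQL optimization and the Encrypt LIKE feature, fit together: a like-query algorithm such as CHAR_DIGEST_LIKE lets fuzzy queries run against a dedicated like-query column. The following DistSQL is only a sketch of that idea; treat the exact keywords as assumptions and refer to the DistSQL manual for the authoritative grammar:

```sql
-- Illustrative DistSQL sketch (keyword names are assumptions;
-- see the 5.3.0 DistSQL reference for the authoritative syntax)
CREATE ENCRYPT RULE t_user (
  COLUMNS(
    (NAME=user_name,
     CIPHER=user_name_cipher,
     LIKE_QUERY_COLUMN=user_name_like,
     ENCRYPT_ALGORITHM(TYPE(NAME='AES', PROPERTIES('aes-key-value'='123456abc'))),
     LIKE_QUERY_ALGORITHM(TYPE(NAME='CHAR_DIGEST_LIKE')))
  )
);
```

With such a rule in place, a `LIKE '%foo%'` predicate on `user_name` can be rewritten against the like-query column instead of failing on the fully encrypted ciphertext.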

&lt;h3&gt;
  
  
  Bug Fix
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Scaling: fix barrier node created at job deletion&lt;/li&gt;
&lt;li&gt;Scaling: fix part of columns value might be ignored in &lt;code&gt;DATA_MATCH&lt;/code&gt; consistency check&lt;/li&gt;
&lt;li&gt;Scaling: fix JDBC URL parameters not being updated in consistency check&lt;/li&gt;
&lt;li&gt;Scaling: fix table sharding algorithm type &lt;code&gt;INLINE&lt;/code&gt; being case-sensitive&lt;/li&gt;
&lt;li&gt;Scaling: fix incremental task on MySQL requiring MySQL system database permission&lt;/li&gt;
&lt;li&gt;Proxy: fix the NPE when executing select SQL without storage node&lt;/li&gt;
&lt;li&gt;Proxy: support &lt;code&gt;DATABASE_PERMITTED&lt;/code&gt; permission verification in unicast scenarios&lt;/li&gt;
&lt;li&gt;Kernel: fix the wrong value of &lt;code&gt;worker-id&lt;/code&gt; in show compute nodes&lt;/li&gt;
&lt;li&gt;Kernel: fix route error when the number of readable data sources and weight configurations of the Weight algorithm are not equal&lt;/li&gt;
&lt;li&gt;Kernel: fix load balancer failure when multiple readwrite-splitting groups refer to the same load balancer name&lt;/li&gt;
&lt;li&gt;Kernel: fix compute node cannot be disabled and enabled&lt;/li&gt;
&lt;li&gt;JDBC: fix data source closed at startup in ShardingSphereDriver cluster mode&lt;/li&gt;
&lt;li&gt;Kernel: fix wrong rewrite result when some logical table names of a binding table are consistent with the actual table names and others are not&lt;/li&gt;
&lt;li&gt;Kernel: fix startup exception when using Spring Boot without configuring rules&lt;/li&gt;
&lt;li&gt;Encrypt: fix null pointer exception when Encrypt value is null&lt;/li&gt;
&lt;li&gt;Kernel: fix oracle parsing does not support varchar2 specified type&lt;/li&gt;
&lt;li&gt;Kernel: fix serial flag judgment error within the transaction&lt;/li&gt;
&lt;li&gt;Kernel: fix cursor fetch error caused by &lt;code&gt;wasNull&lt;/code&gt; change&lt;/li&gt;
&lt;li&gt;Kernel: fix alter transaction rule error when refresh metadata&lt;/li&gt;
&lt;li&gt;Encrypt: fix &lt;code&gt;EncryptRule&lt;/code&gt; cast to &lt;code&gt;TransparentRule&lt;/code&gt; exception that occurs when the call procedure statement is executed in the &lt;code&gt;Encrypt&lt;/code&gt; scenario&lt;/li&gt;
&lt;li&gt;Encrypt: fix exception caused by &lt;code&gt;ExpressionProjection&lt;/code&gt; in shorthand projection&lt;/li&gt;
&lt;li&gt;Proxy: fix PostgreSQL Proxy int2 negative value decoding incorrect&lt;/li&gt;
&lt;li&gt;Proxy: PostgreSQL/openGauss support describe insert returning clause&lt;/li&gt;
&lt;li&gt;Proxy: fix gsql 3.0 may be stuck when connecting Proxy&lt;/li&gt;
&lt;li&gt;Proxy: fix parameters are missed when checking SQL in Proxy backend&lt;/li&gt;
&lt;li&gt;Proxy: enable MySQL Proxy to encode large packets&lt;/li&gt;
&lt;li&gt;Kernel: fix oracle parse comment without whitespace error&lt;/li&gt;
&lt;li&gt;DistSQL: fix show create table for encrypt table&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Refactor
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Scaling: escape table name and column name when generating SQL if it is a SQL keyword&lt;/li&gt;
&lt;li&gt;Scaling: improve incremental task failure handling&lt;/li&gt;
&lt;li&gt;Kernel: governance center node adjustment, unify camel case to snake case&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;a href="https://shardingsphere.apache.org/document/current/en/downloads/"&gt;Download Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere/discussions/22564"&gt;Release Notes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://shardingsphere.apache.org/"&gt;Project Address&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere-on-cloud"&gt;ShardingSphere-on-Cloud&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Community Contribution
&lt;/h2&gt;

&lt;p&gt;This Apache ShardingSphere 5.3.0 release is the result of 687 merged PRs, committed by 49 contributors. Thank you for your efforts.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ueAutVGP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1100/1%2A4u4wMqiMiu6l-YY-O2JLew.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ueAutVGP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1100/1%2A4u4wMqiMiu6l-YY-O2JLew.webp" alt="contributor" width="880" height="659"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>opensource</category>
      <category>database</category>
      <category>cloud</category>
    </item>
    <item>
      <title>ShardingSphere-on-Cloud &amp; Pisanix replace Sidecar for a true cloud-native experience</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Fri, 25 Nov 2022 02:55:06 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/shardingsphere-on-cloud-pisanix-replace-sidecar-for-a-true-cloud-native-experience-d36</link>
      <guid>https://dev.to/apache_shardingsphere/shardingsphere-on-cloud-pisanix-replace-sidecar-for-a-true-cloud-native-experience-d36</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;For a while, many of our blog posts have shown that &lt;strong&gt;&lt;a href="https://shardingsphere.apache.org/"&gt;ShardingSphere&lt;/a&gt; consists of three independent products: &lt;a href="https://shardingsphere.apache.org/document/current/en/overview/#shardingsphere-jdbc"&gt;ShardingSphere-JDBC&lt;/a&gt;, &lt;a href="https://shardingsphere.apache.org/document/current/en/quick-start/shardingsphere-proxy-quick-start/"&gt;ShardingSphere-Proxy&lt;/a&gt;, and ShardingSphere-Sidecar.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As ShardingSphere has become increasingly popular, JDBC and Proxy have been used in numerous production environments, but Sidecar’s status has remained “&lt;strong&gt;under planning&lt;/strong&gt;”. You may have noticed this on our GitHub &lt;code&gt;README&lt;/code&gt; or our website.&lt;/p&gt;

&lt;p&gt;With cloud-native gaining momentum, many enterprises are choosing to use databases on the cloud or cloud native databases. This represents an excellent opportunity for ShardingSphere-Sidecar, which is positioned as a &lt;strong&gt;cloud-native database proxy in &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;However, some of you may have noticed that any mention of Sidecar has disappeared from ShardingSphere’s latest documentation. Has ShardingSphere-Sidecar been canceled? What will ShardingSphere’s cloud native future look like? Here’s what’s coming.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s ShardingSphere’s plan for the cloud?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Stop the R&amp;amp;D of ShardingSphere-Sidecar
&lt;/h3&gt;

&lt;p&gt;As far as ShardingSphere users are concerned, ShardingSphere-JDBC and ShardingSphere-Proxy can already meet most requirements. ShardingSphere-Sidecar’s only difference from them is in the deployment form. JDBC and Proxy are functionally identical, but each has unique advantages.&lt;/p&gt;

&lt;p&gt;ShardingSphere-JDBC is positioned as a lightweight Java framework that provides additional services at the Java JDBC layer. With the client connecting directly to the database, it provides services in the form of a &lt;code&gt;jar&lt;/code&gt; and requires no extra deployment or dependencies. It can be viewed as an enhanced JDBC driver, fully compatible with JDBC and all kinds of &lt;a href="https://stackoverflow.com/questions/1279613/what-is-an-orm-how-does-it-work-and-how-should-i-use-one"&gt;ORM&lt;/a&gt; frameworks. It’s targeted at developers and boasts higher performance.&lt;/p&gt;

&lt;p&gt;ShardingSphere-Proxy is a transparent database proxy, supporting heterogeneous languages through the implementation of database binary protocol. Currently, &lt;a href="https://www.mysql.com/"&gt;MySQL&lt;/a&gt; and &lt;a href="https://www.postgresql.org/"&gt;PostgreSQL&lt;/a&gt; protocols are provided. Nevertheless, Proxy adds a gateway between the database and the front-end application layer, which will partially lower the performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our community recommends that users adopt a hybrid deployment mode in which JDBC and Proxy complement each other, maximizing ShardingSphere’s advantages in terms of performance, availability and heterogeneous support.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o3tmjjzT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqs3dzu3k6gbirylw85u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o3tmjjzT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqs3dzu3k6gbirylw85u.png" alt="Image description" width="828" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, it’s a bit awkward for ShardingSphere-Sidecar: JDBC and Proxy are applicable to most scenarios, environments and businesses, and can complement each other, leaving little room for Sidecar to innovate. From the viewpoint of the community and its users, Sidecar is more like an extension in deployment mode, and it is not capable of enhancing ShardingSphere as a whole.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Taking the above factors into consideration, it is more efficient to develop a patch tool for ShardingSphere that can be easily used and run in a Kubernetes environment. This way, users can deploy and use ShardingSphere in cloud native environments, while saving R&amp;amp;D time for the ShardingSphere community.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ShardingSphere’s cloud solution: &lt;a href="https://github.com/apache/shardingsphere-on-cloud"&gt;ShardingSphere-On-Cloud&lt;/a&gt;
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;ShardingSphere-on-Cloud is a comprehensive system upgrade based on ShardingSphere.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;ShardingSphere-Sidecar was born when Kubernetes was thriving. Back then, more and more enterprises were trying to adopt cloud-native concepts. The ShardingSphere community was no exception.&lt;/p&gt;

&lt;p&gt;We proposed ShardingSphere-Sidecar to promote cloud-native transformation in the data field. However, since JDBC and Proxy are mature enough to deal with data governance in most scenarios, it’s unnecessary to make ShardingSphere entirely cloud native.&lt;/p&gt;

&lt;p&gt;Sidecar can indeed play a big role in certain scenarios, but it doesn’t mean that we have to create a Sidecar version for each component. ShardingSphere is working on how to come up with a solution based on real cloud-native scenarios after fully integrating the cloud computing concept. That’s how ShardingSphere-on-Cloud was born.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/apache/shardingsphere-on-cloud"&gt;ShardingSphere-on-Cloud&lt;/a&gt; is capable of deploying and migrating ShardingSphere in a Kubernetes environment. With the help of &lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt;, &lt;a href="https://helm.sh/"&gt;Helm&lt;/a&gt;, Operator, Terraform (coming soon) and other tools, it provides best practices for quick deployment, observability, security, migration, and high availability in a cloud native environment.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please refer to &lt;a href="https://medium.com/codex/database-plus-embracing-the-cloud-shardingsphere-on-cloud-solution-released-29916290ad06?source=your_stories_page-------------------------------------"&gt;Database Plus Embracing the Cloud: ShardingSphere-on-Cloud Solution Released&lt;/a&gt; for details.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Achieving the vision of ShardingSphere-Sidecar through &lt;a href="https://www.pisanix.io/"&gt;Pisanix&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Why did we develop a new open source project oriented towards data governance in cloud native scenarios?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our community has been contemplating the position of ShardingSphere and &lt;a href="https://medium.com/faun/database-mesh-2-0-database-governance-in-a-cloud-native-environment-ac24080349eb?source=your_stories_page-------------------------------------"&gt;Database Mesh&lt;/a&gt; concept.&lt;/p&gt;

&lt;p&gt;Within the community, we have held different viewpoints on Sidecar at different stages. In the beginning, the community wanted to use Sidecar to manage cloud data issues. As the community gained a deeper understanding of cloud native and cloud data management processes, the limitations of ShardingSphere-Sidecar were gradually exposed.&lt;/p&gt;

&lt;p&gt;ShardingSphere-Sidecar is only a deployment mode of ShardingSphere in cloud native environments, so it can only solve a single problem. It is incapable of helping ShardingSphere develop a mature and cloud-native solution for enterprises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Therefore, we needed to redesign an open source product with higher adaptability, availability and agility in a cloud native system, in order to make up for ShardingSphere’s limitations on cloud data governance.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is why some of our community members at &lt;a href="https://www.sphere-ex.com/en/"&gt;SphereEx&lt;/a&gt; developed &lt;a href="https://www.sphere-ex.com/news/43/"&gt;Pisanix&lt;/a&gt;, &lt;a href="https://www.sphere-ex.com/news/43/"&gt;a cloud-native data governance tool&lt;/a&gt;, based on the Database Mesh concept. It can provide capabilities such as SQL-aware traffic governance, runtime resource-oriented management and DBRE.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is Pisanix the same as ShardingSphere-Sidecar?
&lt;/h3&gt;

&lt;p&gt;ShardingSphere-Sidecar and Pisanix convey different understandings of Database Mesh. They are different in the following aspects.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Different design concepts:&lt;/strong&gt; the design philosophy of JDBC &amp;amp; Proxy is Database Plus, which adds an interface on top of multiple data sources for unified governance and enhancement. Pisanix represents the specific practice of Database Mesh concept, leading to efficient and smooth DevOps of databases for applications in cloud native scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Different language ecosystems:&lt;/strong&gt; JDBC is developed solely in Java. As Java is popular with numerous enterprise-grade community users and developers, JDBC can be easily applied to Java-developed applications. In comparison, Pisanix is developed in &lt;a href="https://www.rust-lang.org/"&gt;Rust&lt;/a&gt; to improve the reliability and efficiency of the access layer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Despite these small differences, both are oriented towards cloud-native data infrastructure. That is also what Database Mesh expects in the long term by implementing cloud native DBRE.&lt;/p&gt;

&lt;p&gt;In terms of deployment mode, Pisanix and ShardingSphere-Sidecar can both be deployed with business applications in the form of Sidecar, providing standard protocol access for developers. &lt;strong&gt;Moreover, Pisanix is highly compatible with the ShardingSphere ecosystem. You can connect Pisanix to ShardingSphere-Proxy in the same way that ShardingSphere-Proxy connects to MySQL.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In short, ShardingSphere presents a complete database form for developers and applications under the Database Plus concept. Pisanix is designed for the same purpose. Through Pisanix, the entrance to cloud data traffic, users can use ShardingSphere-Proxy as a database and explore the collaboration mode in a cloud native environment.&lt;/p&gt;

&lt;p&gt;However, they belong to independent product lines. Pisanix followed the Database Mesh concept from the very beginning and achieved high-performance expansion through four aspects, including &lt;strong&gt;local database, unified configuration and management, multi-protocol support and cloud native architecture.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pisanix is only the first step towards unifying database types on the cloud, and Sidecar is only a deployment form.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The protocols and DevOps features of different databases vary, and the point lies in abstracting the standard governance behavior.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Unlike Pisanix, the ShardingSphere ecosystem is not only accessible through protocols, and ShardingSphere-JDBC can be used more conveniently by Java applications.&lt;/p&gt;

&lt;p&gt;Its natural compatibility maintains functional integrity, optimizes resource utilization and provides ultimate performance, with which developers can configure data governance at their will from the perspective of businesses. Meanwhile, by combining ShardingSphere and the underlying databases, users can deploy strong computing capability on the application side and transform the original monolithic database into a distributed database with high performance, optimizing resource allocation and offering a cost-effective solution.&lt;/p&gt;

&lt;p&gt;In conclusion, ShardingSphere and Pisanix together offer two solutions for community users. &lt;strong&gt;For users who’d like to deploy ShardingSphere in a Kubernetes environment, ShardingSphere-on-Cloud is enough, and ShardingSphere’s other features are exactly the same as when used locally.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For users looking to achieve unified traffic governance on the upper-layer database in cloud native scenarios, Pisanix is a better choice.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Compared with ShardingSphere-Sidecar, ShardingSphere-on-Cloud combined with Pisanix is more effective and convenient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/"&gt;ShardingSphere Official Website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.database-mesh.io/"&gt;Database Mesh Official Website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.pisanix.io/"&gt;Pisanix Official Website&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>database</category>
      <category>opensource</category>
      <category>sql</category>
    </item>
    <item>
      <title>ElasticJob UI now supports OAuth 2.0, OIDC and SAML single sign-on thanks to Casdoor</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Thu, 24 Nov 2022 03:12:42 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/elasticjob-ui-now-supports-auth-20-oidc-and-saml-single-sign-on-thanks-to-casdoor-4mf0</link>
      <guid>https://dev.to/apache_shardingsphere/elasticjob-ui-now-supports-auth-20-oidc-and-saml-single-sign-on-thanks-to-casdoor-4mf0</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;If you’re looking to add SSO to the administration console when using ElasticJob UI, this article will help you tackle this user management problem using ElasticJob UI’s built-in Casdoor.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/apache/shardingsphere-elasticjob-ui"&gt;ElasticJob UI&lt;/a&gt; is the visual admin console of &lt;a href="https://github.com/apache/shardingsphere-elasticjob"&gt;ElasticJob&lt;/a&gt;, whose target users are developers and DevOps teams rather than end users. Generally, it is deployed only in internal environments, so its R&amp;amp;D focuses more on features.&lt;/p&gt;

&lt;p&gt;Any open source project will inevitably become an object of interest for security researchers. Previously, security researchers submitted a lot of security issues on ElasticJob UI to the &lt;a href="https://shardingsphere.apache.org/"&gt;Apache ShardingSphere&lt;/a&gt; community without considering its actual usage scenario.&lt;/p&gt;

&lt;p&gt;After careful consideration of how to deal with those security vulnerability reports, the community decided to stop maintaining ElasticJob UI.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://casdoor.org/"&gt;Casdoor&lt;/a&gt; community members noticed our discussion about stopping the maintenance of ElasticJob UI and thought it would be a loss to deactivate ElasticJob UI because of unrealistic security reports.&lt;/p&gt;

&lt;p&gt;If ElasticJob UI were connected to Casdoor, it could be improved in terms of authentication security and features. In this context, the Casdoor and Apache ShardingSphere communities reached a consensus on collaboration.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to connect ElasticJob UI to Casdoor?
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Step 1: Deploy Casdoor
&lt;/h4&gt;

&lt;p&gt;The source code of Casdoor is on &lt;a href="https://github.com/casdoor/casdoor"&gt;GitHub&lt;/a&gt;, and its boot mode includes development mode and production mode. The development mode is taken as an example here. Please refer to this &lt;a href="https://casdoor.org/docs/basic/server-installation"&gt;link&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backend boot mode&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go run main.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Front-end boot mode&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd web
yarn install
yarn start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2: Configure Casdoor and obtain the required data
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lP_0uALJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p3w82hjsgt9s0v7ctzac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lP_0uALJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p3w82hjsgt9s0v7ctzac.png" alt="Image description" width="828" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The red arrows indicate what the backend configuration requires, with “Redirect URLs” referring to the address where you perform a callback.&lt;/p&gt;

&lt;p&gt;We also need the certificate corresponding to the cert selected in the cert option, such as cert-built-in in this example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KJz15vIs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fadan0b9fdfkrqdqzlqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KJz15vIs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fadan0b9fdfkrqdqzlqq.png" alt="Image description" width="828" height="712"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For additional Casdoor documentation, please refer to this &lt;a href="https://casdoor.org/docs/overview"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Configuration in ElasticJob UI
&lt;/h4&gt;

&lt;p&gt;Find &lt;code&gt;application.properties&lt;/code&gt; in &lt;a href="https://github.com/apache/shardingsphere-elasticjob-ui"&gt;shardingsphere-elasticjob-ui&lt;/a&gt; and configure it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dOvN4RWr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhivvixapvf1h7nonpg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dOvN4RWr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhivvixapvf1h7nonpg5.png" alt="Image description" width="378" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Paste the data we obtained from Casdoor into the corresponding position as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c-jl_G6P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hn87ysy3jsfahsdxbxd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c-jl_G6P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hn87ysy3jsfahsdxbxd.png" alt="Image description" width="828" height="836"&gt;&lt;/a&gt;&lt;/p&gt;
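
The values copied from Casdoor typically land in a handful of client settings. The property names below are a hypothetical sketch modeled on common Casdoor client configuration; verify them against the keys actually present in the shipped `application.properties`:

```properties
# Hypothetical sketch; key names must be checked against the real file
casdoor.endpoint=http://localhost:8000
casdoor.client-id=<client id copied from Casdoor>
casdoor.client-secret=<client secret copied from Casdoor>
casdoor.certificate=<contents of the cert selected in Casdoor>
casdoor.organization-name=built-in
casdoor.application-name=app-built-in
```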

&lt;p&gt;Now, we can use Casdoor in ElasticJob UI.&lt;/p&gt;

&lt;p&gt;Once the ElasticJob’s admin console connects to Casdoor, it will support UI-first centralized identity access/single sign-on based on &lt;a href="https://oauth.net/2/"&gt;OAuth 2.0&lt;/a&gt;, &lt;a href="https://openid.net/connect/"&gt;OIDC&lt;/a&gt; and &lt;a href="https://auth0.com/blog/how-saml-authentication-works/"&gt;SAML&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thanks to developers from the Casdoor and Apache ShardingSphere communities, our collaboration has proceeded in a smooth and friendly way. First, &lt;a href="https://github.com/jakiuncle"&gt;jakiuncle&lt;/a&gt; from Casdoor proposed an issue and committed a PR, and then our Committer &lt;a href="https://github.com/TeslaCN"&gt;TeslaCN&lt;/a&gt; and PMC member &lt;a href="https://github.com/tristaZero"&gt;tristaZero&lt;/a&gt; reviewed the PR. This cross-community collaboration stands as a testament to the beauty of open source.&lt;/p&gt;

&lt;h3&gt;
  
  
  About ElasticJob UI
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/apache/shardingsphere-elasticjob"&gt;ElasticJob&lt;/a&gt; is a distributed scheduling solution oriented towards Internet applications and massive tasks.&lt;/p&gt;

&lt;p&gt;It provides elastic scheduling, resource management, and job governance combined with open architecture design, building a diversified job ecosystem. It uses a unified job API for each project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/apache/shardingsphere-elasticjob-ui"&gt;ElasticJob-UI&lt;/a&gt; is the visual admin console of ElasticJob, supporting dynamic configuration, job management and control, job history retrieval and other features.&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere-elasticjob-ui"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  About Casdoor
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://casdoor.org/"&gt;Casdoor&lt;/a&gt; is a UI-first identity access management (IAM) / single-sign-on (SSO) platform based on OAuth 2.0 / OIDC. Casdoor can help you solve user management problems. There’s no need to develop a series of authentication features such as user login and registration. It can manage the user module entirely in a few simple steps in conjunction with the host application. It’s convenient, easy-to-use and powerful.&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/casdoor/casdoor"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>opensource</category>
      <category>devops</category>
      <category>database</category>
    </item>
    <item>
      <title>Alibaba Cloud’s OpenSergo &amp; ShardingSphere release database governance standard for microservices</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Thu, 17 Nov 2022 03:55:15 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/alibaba-clouds-opensergo-shardingsphere-release-database-governance-standard-for-microservices-5h4f</link>
      <guid>https://dev.to/apache_shardingsphere/alibaba-clouds-opensergo-shardingsphere-release-database-governance-standard-for-microservices-5h4f</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;Recently, &lt;a href="https://opensergo.io/" rel="noopener noreferrer"&gt;Alibaba Cloud&lt;/a&gt;’s OpenSergo and &lt;a href="https://shardingsphere.apache.org/" rel="noopener noreferrer"&gt;ShardingSphere&lt;/a&gt; jointly released the database governance standard for microservices. By combining the &lt;a href="https://medium.com/faun/whats-the-database-plus-concepand-what-challenges-can-it-solve-715920ba65aa?source=your_stories_page-------------------------------------" rel="noopener noreferrer"&gt;Database Plus&lt;/a&gt; and &lt;a href="https://medium.com/faun/shardingsphere-database-mesh-4ad75bf4bac8?source=your_stories_page-------------------------------------" rel="noopener noreferrer"&gt;Database Mesh&lt;/a&gt; concepts, the two&lt;br&gt;
communities have standardized the existing database governance concepts, patterns, and paths, further completing the database governance ecosystem under the cloud-native environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffweotcqjpl7mwl220d1t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffweotcqjpl7mwl220d1t.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;The founders of both communities expressed their opinions concerning the collaboration between the ShardingSphere community and the OpenSergo community:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zhang Liang, the PMC Chair of the Apache ShardingSphere community:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the microservices field, the interaction and collaboration between services have been gradually perfected, but there is still no effective standard for services to access the database. ShardingSphere has continuously followed the &lt;strong&gt;“connect, enhance, and pluggable”&lt;/strong&gt; design philosophy. “Connect” refers to providing standardized protocols and interfaces, breaking the barriers for development languages to access heterogeneous databases. It’s forward-looking for OpenSergo to propose a microservice-oriented governance standard and take the initiative to include database access in it. I’m pleased to see ShardingSphere and OpenSergo working together to build this standard for microservices, for which database access is a pivotal entrance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zhao Yihao, the founder of the OpenSergo community:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In microservice governance, in addition to governing the microservices themselves, handling microservices’ access to databases is also a critical step in ensuring business reliability and continuity. As a Top-Level project in the database governance field, ShardingSphere has accumulated a wealth of best practices and technical experience, which complements OpenSergo. In this context, we are working with the ShardingSphere community to jointly build a database governance standard from the perspective of microservices, so that enterprises can apply unified, standardized governance and management to different data-layer frameworks and traffic.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; database governance in this article covers all aspects of database governance in microservice systems. All business information and key data rely on a robust and stable database system, as the database is the system’s most important stateful endpoint.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The database is increasingly vital in the microservice system
&lt;/h2&gt;

&lt;p&gt;To meet flexible business requirements, the application architecture can be transformed from monolithic to service-oriented and then to microservice-oriented. In this case, the database used to store the core data becomes the focus of the distributed system.&lt;/p&gt;

&lt;p&gt;Enterprises take advantage of microservices to separate services and adopt a distributed architecture to achieve loose coupling among multiple services, flexible adjustment and combination of services, and high availability of systems. In particular, microservices indeed have delivered enormous benefits in the face of rapidly changing businesses.&lt;/p&gt;

&lt;p&gt;However, after services are separated, their corresponding underlying databases should also be separated to ensure the independent deployment of each service. In this way, each service can be an independent unit, finally achieving the microservices. In this context, the database is becoming increasingly complicated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The transformation from monolithic to microservice-oriented architecture models leads to increasingly complicated services, diversified requirements, larger scale infrastructure, complex call relations between services, and higher requirements on the underlying database performance.&lt;/li&gt;
&lt;li&gt;Different transactions usually involve multiple services — but it is a challenge to ensure data consistency between services.&lt;/li&gt;
&lt;li&gt;It is also challenging to query data across multiple services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For most backend applications, system performance is ultimately limited by the database. In a microservice environment in particular, database performance governance is a team’s top priority, and database governance naturally becomes an indispensable part of microservice governance.&lt;/p&gt;

&lt;p&gt;In database governance, we now mainly focus on &lt;strong&gt;read/write splitting, sharding, shadow databases, database discovery, and distributed transactions.&lt;/strong&gt; At the same time, how to use databases and the actual database storage nodes rely on the &lt;strong&gt;virtual database and database endpoint.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In response to the above-mentioned problems, OpenSergo and ShardingSphere have distilled ShardingSphere’s database governance experience and released the &lt;strong&gt;database governance standard under microservices&lt;/strong&gt;. By doing so, they can standardize the database governance method, lower the entry barriers to this field, and improve the business applicability of microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  ShardingSphere’s strategies on database traffic governance
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. VirtualDatabase&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In database governance, whether it is read/write splitting, sharding, shadow database, encryption, audit, or access control, every capability has to act on a specific database. Such a logical database is called a virtual database, namely VirtualDatabase.&lt;/p&gt;

&lt;p&gt;From the application's viewpoint, a VirtualDatabase is a set of database access information; governance capabilities are achieved by binding the corresponding governance strategies to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. DatabaseEndpoint&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In database governance, VirtualDatabase declares the logical database available to applications, but the actual data storage depends on a physical database. Such a physical database is called a database access endpoint, namely DatabaseEndpoint.&lt;/p&gt;

&lt;p&gt;DatabaseEndpoint is imperceptible to applications. It can only be bound to VirtualDatabase by a specific governance strategy and then be connected and used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. ReadWriteSplitting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Read/write splitting is a commonly used database extension method. The primary database is used for transactional read/write operations, while the secondary database is mainly used for queries.&lt;/p&gt;
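&lt;p&gt;As a rough sketch of the routing idea behind read/write splitting (the class and method names below are illustrative only, not part of any ShardingSphere API):&lt;/p&gt;

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of read/write splitting routing: writes go to the
// primary database, reads are distributed round-robin across secondaries.
public class ReadWriteRouter {
    private final String primary;
    private final String[] replicas;
    private final AtomicInteger counter = new AtomicInteger();

    public ReadWriteRouter(String primary, String[] replicas) {
        this.primary = primary;
        this.replicas = replicas;
    }

    public String route(String sql) {
        boolean read = sql.trim().toUpperCase().startsWith("SELECT");
        if (!read || replicas.length == 0) {
            return primary; // transactional read/write operations use the primary
        }
        // distribute queries evenly across the secondary databases
        return replicas[Math.floorMod(counter.getAndIncrement(), replicas.length)];
    }
}
```

A real implementation also has to keep reads inside a transaction on the primary to avoid replication lag; this sketch ignores that detail.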

&lt;p&gt;&lt;strong&gt;4. Sharding&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data sharding is an extension strategy based on data attributes. Once data attributes are calculated, requests are sent to a specific data backend. Currently, sharding consists of sharding with shard keys and automatic sharding. Sharding with shard keys needs to specify which table or column is to be sharded and the algorithms used for sharding.&lt;/p&gt;
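&lt;p&gt;As a hedged illustration of sharding with shard keys, the helper below mimics what an inline expression such as &lt;code&gt;ds_${user_id % 2}&lt;/code&gt; computes (the class is hypothetical, not ShardingSphere code):&lt;/p&gt;

```java
// Mimics INLINE-style sharding expressions: a shard key value is reduced
// to a suffix that selects the physical data source or table.
public class InlineSharding {
    // equivalent of "ds_${user_id % 2}": picks the database shard
    public static String routeDatabase(long userId) {
        return "ds_" + Math.floorMod(userId, 2);
    }

    // equivalent of "t_order_${order_id % 2}": picks the table shard
    public static String routeTable(long orderId) {
        return "t_order_" + Math.floorMod(orderId, 2);
    }
}
```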

&lt;p&gt;&lt;strong&gt;5. Encryption&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To meet the requirements of audit security and compliance, enterprises need to provide strict security measures for data storage, such as data encryption.&lt;/p&gt;

&lt;p&gt;Data encryption parses the SQL entered by users and rewrites SQL according to the encryption rules provided by users.&lt;/p&gt;

&lt;p&gt;By doing so, the plaintext data can be encrypted, and plaintext data (optional) and ciphertext data can be both stored in the underlying database.&lt;/p&gt;

&lt;p&gt;When the user queries the data, Encryption only takes the ciphertext data from the database, decrypts it, and finally returns the decrypted original data to the user.&lt;/p&gt;
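&lt;p&gt;A minimal sketch of the encrypt-on-write / decrypt-on-read idea using plain JDK AES; this is a conceptual illustration only, not ShardingSphere’s actual encryption implementation or algorithm:&lt;/p&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Conceptual sketch: plaintext is encrypted before being stored in the
// cipher column, and decrypted again on the query path.
public class CipherColumn {
    private final SecretKeySpec key;

    public CipherColumn(byte[] rawKey) {
        this.key = new SecretKeySpec(rawKey, "AES"); // expects a 16-byte key (AES-128)
    }

    // produces the ciphertext that would be stored in the underlying database
    public String encrypt(String plaintext) {
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] out = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(out);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // what the query path does before returning rows to the user
    public String decrypt(String ciphertext) {
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] out = cipher.doFinal(Base64.getDecoder().decode(ciphertext));
            return new String(out, StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```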

&lt;p&gt;&lt;strong&gt;6. Shadow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A shadow database can receive grayscale traffic or data test requests in a grayscale or test environment, and multiple routing methods can be flexibly configured in combination with shadow algorithms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. DatabaseDiscovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Database auto-discovery can detect the change of data source status according to the high availability configuration of the database and then make adjustments to traffic strategy accordingly.&lt;/p&gt;

&lt;p&gt;For example, if the backend data source is &lt;a href="https://www.mysql.com/" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt; MGR, you can set the database discovery type as MYSQL.MGR, specify &lt;code&gt;group-name&lt;/code&gt; and configure corresponding heartbeat intervals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. DistributedTransaction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It can declare configurations related to distributed transactions. Users can declare the transaction types without additional configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database Governance Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Virtual database configuration
apiVersion: database.opensergo.io/v1alpha1
kind: VirtualDatabase
metadata:
  name: sharding_db
spec:
  services:
  - name: sharding_db
    databaseMySQL:
      db: sharding_db
      host: localhost
      port: 3306
      user: root
      password: root
    sharding: "sharding_db"  # Declare the desired sharding strategy.
---
# The database endpoint configuration of the first data source
apiVersion: database.opensergo.io/v1alpha1
kind: DatabaseEndpoint
metadata:
  name: ds_0
spec:
  database:
    MySQL:                 # Declare the backend data source type and other related information.
      url: jdbc:mysql://192.168.1.110:3306/demo_ds_0?serverTimezone=UTC&amp;amp;useSSL=false
      username: root
      password: root
      connectionTimeout: 30000
      idleTimeoutMilliseconds: 60000
      maxLifetimeMilliseconds: 1800000
      maxPoolSize: 50
      minPoolSize: 1      
---
# The database endpoint configuration of the second data source
apiVersion: database.opensergo.io/v1alpha1
kind: DatabaseEndpoint
metadata:
  name: ds_1
spec:
  database:
    MySQL:                              # Declare the backend data source type and other related information.
      url: jdbc:mysql://192.168.1.110:3306/demo_ds_1?serverTimezone=UTC&amp;amp;useSSL=false
      username: root
      password: root
      connectionTimeout: 30000
      idleTimeoutMilliseconds: 60000
      maxLifetimeMilliseconds: 1800000
      maxPoolSize: 50
      minPoolSize: 1
---
# Sharding configuration
apiVersion: database.opensergo.io/v1alpha1
kind: Sharding
metadata:
  name: sharding_db
spec:
  tables: # map[string]object type
    t_order:
      actualDataNodes: "ds_${0..1}.t_order_${0..1}"
      tableStrategy:
        standard:
          shardingColumn: "order_id"
          shardingAlgorithmName: "t_order_inline"
      keyGenerateStrategy:
        column: "order_id"
        keyGeneratorName: "snowflake"
    t_order_item:
      actualDataNodes: "ds_${0..1}.t_order_item_${0..1}"
      tableStrategy:
        standard:
          shardingColumn: "order_id"
          shardingAlgorithmName: "t_order_item_inline"
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
  - "t_order,t_order_item"
  defaultDatabaseStrategy:
    standard:
     shardingColumn: "user_id"
     shardingAlgorithmName: "database_inline"
  # defaultTableStrategy: # Null means none 
  shardingAlgorithms: # map[string]object type
    database_inline:
      type: INLINE    
      props: # map[string]string type
        algorithm-expression: "ds_${user_id % 2}"
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: "t_order_${order_id % 2}"
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: "t_order_item_${order_id % 2}"
  keyGenerators: # map[string]object type
    snowflake:
      type: SNOWFLAKE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  About Apache ShardingSphere
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/" rel="noopener noreferrer"&gt;Apache ShardingSphere&lt;/a&gt; is a distributed database ecosystem that can transform any database into a distributed database and enhance it with sharding, elastic scaling, encryption features &amp;amp; more.&lt;/p&gt;

&lt;p&gt;Apache ShardingSphere follows the Database Plus concept, designed to build an ecosystem on top of fragmented heterogeneous databases. It focuses on how to fully use the computing and storage capabilities of databases rather than creating a brand-new database. It attaches greater importance to the collaboration between multiple databases instead of the database itself.&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://linktr.ee/ApacheShardingSphere" rel="noopener noreferrer"&gt;Apache ShardingSphere Useful Links&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About OpenSergo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://opensergo.io/" rel="noopener noreferrer"&gt;OpenSergo&lt;/a&gt; is an open and universal service governance specification that is oriented toward distributed service architecture and covers a full-link heterogeneous ecosystem.&lt;/p&gt;

&lt;p&gt;It is formed based on the industry’s service governance scenarios and practices. OpenSergo’s defining characteristic is that it specifies service governance rules with a unified set of configurations/DSLs/protocols, is oriented towards multi-language heterogeneous architectures, and achieves full-link ecosystem coverage.&lt;/p&gt;

&lt;p&gt;No matter whether the microservice is written in Java, Go, Node.js, or another language, and whether it is a standard microservice or uses Mesh-based access, developers can use the same set of OpenSergo CRD standard configurations.&lt;/p&gt;

&lt;p&gt;This allows developers to implement unified governance and control for each layer, ranging from the gateway to microservices, from database to cache, and from registration and discovery to the configuration of services.&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/opensergo/opensergo-specification" rel="noopener noreferrer"&gt;OpenSergo GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>opensource</category>
      <category>community</category>
      <category>microservices</category>
    </item>
    <item>
      <title>ElasticJob 3.0.2 is released including failover optimization, scheduling stability, and Java 19 compatibility</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Mon, 31 Oct 2022 05:51:47 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/elasticjob-302-is-released-including-failover-optimization-scheduling-stability-and-java-19-compatibility-25on</link>
      <guid>https://dev.to/apache_shardingsphere/elasticjob-302-is-released-including-failover-optimization-scheduling-stability-and-java-19-compatibility-25on</guid>
      <description>&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/elasticjob"&gt;ElasticJob&lt;/a&gt;, one of the sub-projects of the &lt;a href="https://github.com/apache/shardingsphere"&gt;Apache ShardingSphere&lt;/a&gt; community, is a distributed scheduling solution oriented towards Internet applications and massive tasks.&lt;/p&gt;

&lt;p&gt;Since ElasticJob 3.0.1 was released, we’ve received a lot of feedback from users on &lt;a href="https://github.com/apache/shardingsphere-elasticjob"&gt;GitHub&lt;/a&gt;. After merging the code committed by the contributors, ElasticJob has been significantly optimized.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/apache/shardingsphere-elasticjob/blob/master/RELEASE-NOTES.md"&gt;resulting ElasticJob 3.0.2&lt;/a&gt; has been improved in terms of Failover, scheduling stability, job API, and Java 19 compatibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Release 3.0.2 Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Failover Optimization
&lt;/h3&gt;

&lt;p&gt;Failover is an important ElasticJob feature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In ElasticJob 3.0.1, the Failover feature had some limitations and disadvantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Failover could only take effect with at least two job instances. Assume there is only one job instance executing tasks and a fault occurs: Failover cannot take effect, even if the instance recovers or a new job instance is started afterwards. The old Failover feature therefore required redundant resources; in particular, if sharding was not needed for the job, the backup instance would sit idle for a long time while continuously occupying resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Failover was too sensitive. Even if no job was being executed, as long as one instance went offline, failover would be triggered on another instance to immediately execute the sharding owned by the offline instance. In other words, failover was triggered whenever an instance went offline, whether it went offline normally or due to a fault, which was disturbing for users.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;ElasticJob 3.0.2 optimized the Failover feature.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Suppose a single job instance is executing the tasks and it exits due to a fault in the execution process. &lt;strong&gt;After a fault occurs, a new job instance is started, which can detect the failed sharding and trigger Failover immediately.&lt;/strong&gt; For example, we deploy a job instance with one copy in the Kubernetes environment. If the job instance exits unexpectedly due to a fault, Kubernetes automatically starts a new job instance. By this time, Failover will be triggered and continue to execute the previously failed job.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The optimized Failover takes effect only for ongoing jobs.&lt;/strong&gt; If no job is being executed when a fault occurs, Failover is not triggered. In this case, Failover will not be triggered when the job instance goes offline normally.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Separating the event notification threads of different jobs to avoid an endless loop
&lt;/h3&gt;

&lt;p&gt;ElasticJob achieves distributed coordination through &lt;a href="https://zookeeper.apache.org/"&gt;ZooKeeper&lt;/a&gt;. In practical scenarios, users may start multiple jobs in the same project simultaneously, all of which use the same &lt;a href="https://curator.apache.org/"&gt;Apache Curator&lt;/a&gt; client. There are certain risks due to the nature of ZooKeeper and the &lt;a href="https://cwiki.apache.org/confluence/display/CURATOR/TN1"&gt;callback method of Curator in a single event thread&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Callbacks will increase accordingly if a huge number of jobs are triggered frequently. The processing capability of Curator event threads may reach an upper limit, resulting in delayed triggering or failure to trigger.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If one job blocks the event thread accidentally, other jobs will also be affected, leading to delayed triggering or failure to trigger.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;In ElasticJob 3.0.2, the callback method of each job will be executed on the thread to which the job belongs, preventing jobs from affecting each other.&lt;/strong&gt;&lt;/p&gt;
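&lt;p&gt;The idea behind this change can be sketched as one dedicated single-threaded executor per job, so a callback that blocks in one job cannot stall the callbacks of another (illustrative only, not ElasticJob’s actual code):&lt;/p&gt;

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of per-job event threads: each job name gets its own
// single-threaded executor, so jobs never share a callback thread.
public class PerJobEventThreads {
    private final ConcurrentHashMap<String, ExecutorService> executors = new ConcurrentHashMap<>();

    // run the callback on the thread dedicated to this job; callbacks for
    // the same job remain ordered, while different jobs are isolated
    public void dispatch(String jobName, Runnable callback) {
        executors.computeIfAbsent(jobName, k -> Executors.newSingleThreadExecutor())
                 .submit(callback);
    }

    public void shutdown() {
        executors.values().forEach(ExecutorService::shutdownNow);
    }
}
```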

&lt;p&gt;Moreover, &lt;strong&gt;ElasticJob 3.0.2 modified the code which may lead to an endless loop.&lt;/strong&gt; When a job instance is online and the server nodes in ZooKeeper change (such as being deleted), ElasticJob is stuck in determining whether the server is enabled. This problem has been tackled.&lt;/p&gt;

&lt;h3&gt;
  
  
  Java 19 Support
&lt;/h3&gt;

&lt;p&gt;Java 19 was released on September 20, 2022. ElasticJob closely tracks new Java versions and can currently be built and used with Java 19. &lt;strong&gt;ElasticJob now supports Java 8 through 19.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Release Notes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Bug Fixes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fix itemErrorMessages not being cleared after the job finished.&lt;/li&gt;
&lt;li&gt;Fix the Curator notify thread possibly being blocked, and avoid a potential endless loop in ServerService.&lt;/li&gt;
&lt;li&gt;Fix NPEs that could occur when deserializing the job instance ID and job configuration.&lt;/li&gt;
&lt;li&gt;Fix failover being too sensitive.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Enhancements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Script Job exception’s stack was ignored.&lt;/li&gt;
&lt;li&gt;Support using different event trace data sources when using Spring Boot.&lt;/li&gt;
&lt;li&gt;Supports building projects with Java 19.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please refer to &lt;a href="https://github.com/apache/shardingsphere-elasticjob/milestone/5?closed=1"&gt;GitHub ElasticJob Milestone&lt;/a&gt; for details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful Links
&lt;/h2&gt;

&lt;p&gt;🔗&lt;a href="https://github.com/apache/shardingsphere-elasticjob/milestone/5?closed=1"&gt; ElasticJob Milestone&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere-elasticjob/blob/master/RELEASE-NOTES.md"&gt;Release Notes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere-elasticjob"&gt;ShardingSphere ElasticJob GitHub Address&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere"&gt;ShardingSphere Project Address&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://shardingsphere.apache.org/elasticjob"&gt;ShardingSphere ElasticJob Official Website&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;

&lt;p&gt;Wu Weijie is an infrastructure R&amp;amp;D engineer at &lt;a href="https://www.sphere-ex.com/"&gt;SphereEx&lt;/a&gt; and an Apache ShardingSphere PMC member.&lt;/p&gt;

&lt;p&gt;He now focuses on the R&amp;amp;D of the Apache ShardingSphere access port and the sub-project ElasticJob.&lt;/p&gt;

</description>
      <category>java</category>
      <category>opensource</category>
      <category>database</category>
      <category>github</category>
    </item>
    <item>
      <title>ShardingSphere 5.2.1 is released — Here are the highlights</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Thu, 27 Oct 2022 06:15:25 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/shardingsphere-521-is-released-here-are-the-highlights-259m</link>
      <guid>https://dev.to/apache_shardingsphere/shardingsphere-521-is-released-here-are-the-highlights-259m</guid>
      <description>&lt;p&gt;Our new 5.2.1 release brings new features and enhancements such as the ShardingSphere system database, SQL HINT mandatory sharding routing, asynchronous data consistency check, and support for Consul and Nacos governance center.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Following 1.5 months of development, &lt;a href="https://shardingsphere.apache.org/"&gt;Apache ShardingSphere&lt;/a&gt; 5.2.1 has been released. Our community merged 614 PRs from teams and individuals around the world. The &lt;a href="https://github.com/apache/shardingsphere/blob/master/RELEASE-NOTES.md"&gt;resulting 5.2.1 release&lt;/a&gt; has been optimized in terms of features, performance, testing, documentation, examples, etc.&lt;/p&gt;

&lt;p&gt;The new release brings the following highlights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ShardingSphere system database&lt;/li&gt;
&lt;li&gt;SQL HINT mandatory sharding routing&lt;/li&gt;
&lt;li&gt;Asynchronous data consistency check&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Version 5.2.1 introduces a new ShardingSphere system database to provide statistical information for distributed databases. &lt;strong&gt;The statistical information can help the SQL Federation execution engine evaluate the execution costs to select an execution plan with the lowest costs.&lt;/strong&gt; Moreover, ShardingSphere can collect the distribution information after data sharding, providing references for automatic sharding scaling management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SQL HINT is an important feature provided by ShardingSphere 5.x, with which users can flexibly control routes through SQL annotations.&lt;/strong&gt; The new release enhances the SQL HINT capability and supports data-sharding mandatory routing.&lt;/p&gt;

&lt;p&gt;Users only need to prepend an annotation in the format &lt;code&gt;/* SHARDINGSPHERE_HINT: t_order.SHARDING_DATABASE_VALUE=1, t_order.SHARDING_TABLE_VALUE=1 */&lt;/code&gt; to their SQL to transfer sharding values to the ShardingSphere routing engine. Compared with the former HintManager method, SQL HINT is more flexible and doesn’t require users to write code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The new version also enhances the data consistency check capability during data migration, enabling asynchronous execution of the check. Users can view the data migration progress through &lt;a href="https://medium.com/nerd-for-tech/intro-to-distsql-an-open-source-more-powerful-sql-bada4099211"&gt;DistSQL&lt;/a&gt;, which improves the ease of data migration.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Moreover, the &lt;strong&gt;governance center now supports Consul and Nacos&lt;/strong&gt;, providing more choices for users. SQL compatibility is greatly improved, and read/write splitting has also been optimized: Proxy can be started forcibly by using the &lt;code&gt;-f&lt;/code&gt; parameter after the secondary database is disabled.&lt;/p&gt;

&lt;p&gt;Next, this post will introduce the updates of ShardingSphere 5.2.1 in detail.&lt;/p&gt;
&lt;h2&gt;
  
  
  Notable Highlights
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ShardingSphere System Database
&lt;/h3&gt;

&lt;p&gt;Like &lt;a href="https://www.mysql.com/"&gt;MySQL&lt;/a&gt; and &lt;a href="https://www.postgresql.org/"&gt;PostgreSQL&lt;/a&gt; system databases, ShardingSphere 5.2.1 introduces a new system database to manage the metadata of databases.&lt;/p&gt;

&lt;p&gt;ShardingSphere system database mainly stores dynamic and static metadata. Dynamic metadata refers to frequently updated data, such as statistical information from distributed databases, which needs to be collected and maintained regularly through built-in scheduling tasks.&lt;/p&gt;

&lt;p&gt;In comparison, static metadata refers to data that will remain the same without user modification. An example is the status parameter of the ShardingSphere distributed database set by users, which only needs to be stored in the metadata database for querying.&lt;/p&gt;

&lt;p&gt;As this function is currently in the experimental phase, you need to start collecting metadata through the following configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;proxy-metadata-collector-enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Connect Proxy through the command line and then execute the &lt;code&gt;SHOW DATABASES&lt;/code&gt; statement. We can see the new &lt;code&gt;shardingsphere&lt;/code&gt; system database which stores the metadata of distributed databases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--44J6WlmE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1fcour1za0f8gp261w4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--44J6WlmE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1fcour1za0f8gp261w4u.png" alt="Image description" width="720" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Currently, the &lt;code&gt;sharding_table_statistics&lt;/code&gt; table is added to the &lt;code&gt;shardingsphere&lt;/code&gt; system database, used for collecting the distribution information of sharding tables, including &lt;code&gt;row_count&lt;/code&gt; and &lt;code&gt;size&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FSDn3woZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ea8pr8n44cobyu7eppb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FSDn3woZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ea8pr8n44cobyu7eppb.png" alt="Image description" width="720" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Statistical information provided by the ShardingSphere system database can help the SQL Federation execution engine evaluate the execution cost. This allows for the selection of a suitable association order and method and achieves efficient execution.&lt;/p&gt;

&lt;p&gt;Moreover, by collecting the data distribution information and loading information of storage nodes, ShardingSphere can carry out automatic sharding scaling, reducing the operation &amp;amp; maintenance costs for users.&lt;/p&gt;

&lt;h3&gt;
  
  
  SQL HINT Mandatory Sharding Routing
&lt;/h3&gt;

&lt;p&gt;In some special business scenarios, the fields used for sharding exist in external business logic rather than SQL, database, and table structures. Currently, &lt;code&gt;Hint&lt;/code&gt; is needed to introduce sharding key value to complete sharding routing.&lt;/p&gt;

&lt;p&gt;Before version 5.2.1, there were two ways to use &lt;code&gt;Hint&lt;/code&gt;. One way is to use HintManager in the JDBC access port, while the other is to enable the &lt;code&gt;proxy-hint-enabled&lt;/code&gt; parameter in the Proxy access port.&lt;/p&gt;

&lt;p&gt;In the first case, users need to write code that calls the &lt;code&gt;addDatabaseShardingValue&lt;/code&gt; and &lt;code&gt;addTableShardingValue&lt;/code&gt; methods to set the values for database and table sharding. However, the HintManager method requires users to modify the original logic at some cost.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sharding database and table using HintManager
String sql = "SELECT * FROM t_order";
try (HintManager hintManager = HintManager.getInstance();
     Connection conn = dataSource.getConnection();
     PreparedStatement preparedStatement = conn.prepareStatement(sql)) {
    hintManager.addDatabaseShardingValue("t_order", 1);
    hintManager.addTableShardingValue("t_order", 2);
    try (ResultSet rs = preparedStatement.executeQuery()) {
        while (rs.next()) {
            // ...
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the second case, enable the &lt;code&gt;proxy-hint-enabled&lt;/code&gt; parameter first, and then set or clear the sharding values through the following &lt;a href="https://shardingsphere.apache.org/document/5.1.0/en/concepts/distsql/"&gt;DistSQL&lt;/a&gt; statements.&lt;/p&gt;

&lt;p&gt;Nevertheless, the Hint-based method will transform Proxy's thread processing model from IO multiplexing to one independent thread for each request, which will reduce Proxy's throughput. This means that users have to make tradeoffs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- For the current connection, add database sharding value 100 to logical table t_order
ADD SHARDING HINT DATABASE_VALUE t_order = 100;
-- For the current connection, add table sharding value 100 to logical table t_order
ADD SHARDING HINT TABLE_VALUE t_order = 100;
-- For the current connection, clear sharding hint settings
CLEAR SHARDING HINT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As there are problems with both methods, version 5.2.1 adds the SQL Hint mandatory sharding routing feature. This allows users to control routes flexibly through SQL annotations. There’s no need to modify the logic of the original code and the thread processing model in the Proxy access port is not affected.&lt;/p&gt;

&lt;p&gt;Before using the SQL Hint mandatory sharding routing, users should enable configurations of parsing annotations in advance and set &lt;code&gt;sqlCommentParseEnabled&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt;. The annotation format only supports &lt;code&gt;/* */&lt;/code&gt; and content has to start with &lt;code&gt;SHARDINGSPHERE_HINT:&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Optional properties include:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{table}.SHARDING_DATABASE_VALUE:&lt;/code&gt; Used to add data source sharding key value corresponding to &lt;code&gt;{table}&lt;/code&gt;. Multiple properties are separated by a comma.&lt;br&gt;
&lt;code&gt;{table}.SHARDING_TABLE_VALUE:&lt;/code&gt; Used to add table sharding key value corresponding to &lt;code&gt;{table}&lt;/code&gt;. Multiple properties are separated by a comma.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When database sharding is required and table sharding is not, and mandatory routing is carried out to a sub-database, you can use &lt;code&gt;SHARDING_DATABASE_VALUE&lt;/code&gt; to add shards without specifying {table}.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The following is an example of using the &lt;code&gt;HINT_INLINE&lt;/code&gt; algorithm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      databaseStrategy:
        hint:
          shardingAlgorithmName: database_hint_inline
      tableStrategy:
        hint:
          shardingAlgorithmName: t_order_hint_inline
  defaultDatabaseStrategy:
    none:
  defaultTableStrategy:
    none:
  shardingAlgorithms:
    database_hint_inline:
      type: HINT_INLINE
      props:
        algorithm-expression: ds_${value % 2}
    t_order_hint_inline:
      type: HINT_INLINE
      props:
        algorithm-expression: t_order_${value % 2}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can transfer the sharding value of the &lt;code&gt;t_order&lt;/code&gt; table through SQL Hint. We can see from the &lt;code&gt;PREVIEW&lt;/code&gt; statement that although SQL statements have no specified sharding key, the sharding key-value pair introduced by SQL Hint can also achieve sharding routing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* SHARDINGSPHERE_HINT: t_order.SHARDING_DATABASE_VALUE=1, t_order.SHARDING_TABLE_VALUE=1 */
SELECT * FROM t_order;
PREVIEW /* SHARDINGSPHERE_HINT: t_order.SHARDING_DATABASE_VALUE=1, t_order.SHARDING_TABLE_VALUE=1 */ SELECT * FROM t_order;
+------------------+----------------------------------------------------------------------------------------------------------------------+
| data_source_name | actual_sql                                                                                                           |
+------------------+----------------------------------------------------------------------------------------------------------------------+
| ds_1             | /* SHARDINGSPHERE_HINT: t_order.SHARDING_DATABASE_VALUE=1, t_order.SHARDING_TABLE_VALUE=1 */ SELECT * FROM t_order_1 |
+------------------+----------------------------------------------------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Asynchronous data consistency check
&lt;/h3&gt;

&lt;p&gt;Before version 5.2.1, users had to wait for the server to return check results synchronously. Sometimes the check was time-consuming, so timeout could occur in the &lt;code&gt;session&lt;/code&gt; of database servers.&lt;/p&gt;

&lt;p&gt;In such cases, users couldn’t observe the result and could only check logs, which was not user-friendly. In response to this problem, version 5.2.1 supports asynchronous data consistency checks, together with the following set of DistSQL statements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CHECK MIGRATION jobId — Asynchronous data consistency check&lt;/li&gt;
&lt;li&gt;SHOW MIGRATION CHECK STATUS jobId — Query the check progress&lt;/li&gt;
&lt;li&gt;START MIGRATION CHECK jobId — Start the check job&lt;/li&gt;
&lt;li&gt;STOP MIGRATION CHECK jobId — Stop the check job&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Through the above DistSQL, users can manage the data migration more easily. Examples are as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Execute consistency check
mysql&amp;gt; CHECK MIGRATION 'j0101270cbb6714cff0d7d13db9fa23c7c0a1' BY TYPE (NAME='DATA_MATCH');
Query OK, 0 rows affected (0.24 sec)
-- Query the check progress
mysql&amp;gt; SHOW MIGRATION CHECK STATUS 'j0101270cbb6714cff0d7d13db9fa23c7c0a1';
+---------+--------+---------------------+-------------------+-------------------------+----------------+------------------+---------------+
| tables  | result | finished_percentage | remaining_seconds | check_begin_time        | check_end_time | duration_seconds | error_message |
+---------+--------+---------------------+-------------------+-------------------------+----------------+------------------+---------------+
| t_order | false  | 0                   | 2450              | 2022-10-12 16:07:17.082 |                | 14               |               |
+---------+--------+---------------------+-------------------+-------------------------+----------------+------------------+---------------+
1 row in set (0.02 sec)
-- Stop the check job
mysql&amp;gt; STOP MIGRATION CHECK 'j0101270cbb6714cff0d7d13db9fa23c7c0a1';
Query OK, 0 rows affected (0.06 sec)
-- Start the check job
mysql&amp;gt; START MIGRATION CHECK 'j0101270cbb6714cff0d7d13db9fa23c7c0a1';
Query OK, 0 rows affected (5.13 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please refer to the &lt;a href="https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/migration/usage/"&gt;official documentation&lt;/a&gt; and stay tuned for future posts with more detailed information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enhancements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Kernel
&lt;/h3&gt;

&lt;p&gt;This update optimized the SQL parsing of &lt;a href="https://www.oracle.com/index.html"&gt;Oracle&lt;/a&gt; and &lt;a href="https://www.postgresql.org/"&gt;PostgreSQL&lt;/a&gt;, greatly improving the SQL compatibility of ShardingSphere. Detailed SQL parsing optimization can be found in the release notes section below.&lt;/p&gt;

&lt;p&gt;It’s a long-term mission for the ShardingSphere community to improve SQL parsing compatibility, and anyone interested is welcome to join us.&lt;/p&gt;

&lt;p&gt;In read/write splitting scenarios, version 5.2.1 also allows &lt;a href="https://shardingsphere.apache.org/document/current/en/quick-start/shardingsphere-proxy-quick-start/"&gt;ShardingSphere-Proxy&lt;/a&gt; to be started forcibly with the &lt;code&gt;-f&lt;/code&gt; parameter after a read database has been disabled.&lt;/p&gt;

&lt;p&gt;In this case, the Proxy can still be started for ops &amp;amp; maintenance work even if some of the storage nodes are unavailable. Moreover, the display results of the &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; statement have been optimized, and threads in the &lt;code&gt;Sleep&lt;/code&gt; status are now displayed.&lt;/p&gt;
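&lt;p&gt;As a sketch, assuming the standard Proxy distribution with a &lt;code&gt;bin/start.sh&lt;/code&gt; launcher, a forced start might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Force-start the Proxy even if some storage nodes are unavailable
sh bin/start.sh -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;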

&lt;p&gt;By optimizing the logic of unicast routing and reusing cached database connections as much as possible, ShardingSphere's execution performance is improved.&lt;/p&gt;

&lt;h3&gt;
  
  
  Elastic scaling
&lt;/h3&gt;

&lt;p&gt;The elastic scaling module now supports data migration for tables with a unique key, and tables can be migrated to a new table name. Moreover, the use of thread pools in data migration has been optimized, and the data consistency check can be interrupted, making it easier for users to manage data migration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distributed Governance
&lt;/h3&gt;

&lt;p&gt;In terms of distributed governance, the governance center is now able to support Consul and Nacos, providing you with more choices. At the same time, the built-in H2 database can be used to persist metadata in Standalone mode.&lt;/p&gt;
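&lt;p&gt;A Standalone mode configuration might look like the following sketch (the repository type name is an assumption here; check the documentation for your exact version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mode:
  type: Standalone
  repository:
    type: H2   # assumed type name for the built-in H2 metadata store
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;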

&lt;p&gt;In the running mode, the &lt;code&gt;overwrite&lt;/code&gt; configuration item is removed. Metadata information is based on the data in the governance center, and users can dynamically change the rules and configurations through DistSQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Release Notes
&lt;/h2&gt;

&lt;p&gt;In the following sections, you will find the release notes of ShardingSphere 5.2.1. To improve user experience, the API of some functions is modified in this update. Please refer to the API changes section below for details.&lt;/p&gt;

&lt;h3&gt;
  
  
  New Features
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Add ShardingSphere default system database to support global metadata management&lt;/li&gt;
&lt;li&gt;Support asynchronous data consistency check&lt;/li&gt;
&lt;li&gt;Added support for Consul governance center&lt;/li&gt;
&lt;li&gt;Added support for Nacos governance center&lt;/li&gt;
&lt;li&gt;Added support for the view function in the governance center&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Enhancements
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;SQL Federation engine adds ADVANCED executor and adapts to openGauss database&lt;/li&gt;
&lt;li&gt;Support ShardingSphere Proxy startup after read-write splitting read database is disabled&lt;/li&gt;
&lt;li&gt;SQL HINT supports force sharding route&lt;/li&gt;
&lt;li&gt;Show processlist supports showing Proxy connections (sleep, active)&lt;/li&gt;
&lt;li&gt;Optimized ShardingSphere-JDBC data source configuration error message&lt;/li&gt;
&lt;li&gt;ShardingSphere-JDBC supports Spring Boot 3.x&lt;/li&gt;
&lt;li&gt;Support load MySQL, PostgreSQL, openGauss, and SQLServer view metadata&lt;/li&gt;
&lt;li&gt;Update Snakeyaml to 1.33 and open YAML 3MB limit&lt;/li&gt;
&lt;li&gt;Reuse cached connections where possible in unicast sharding&lt;/li&gt;
&lt;li&gt;Support Parsing ALTER ROLE in Oracle&lt;/li&gt;
&lt;li&gt;Add support of ALTER RESOURCE COST for Oracle&lt;/li&gt;
&lt;li&gt;Support parsing Drop Materialized View in Oracle&lt;/li&gt;
&lt;li&gt;Support parsing DROP LIBRARY in Oracle&lt;/li&gt;
&lt;li&gt;Support parsing DROP JAVA in Oracle&lt;/li&gt;
&lt;li&gt;Support parsing DROP PLUGGABLE DATABASE in Oracle&lt;/li&gt;
&lt;li&gt;Support parsing DROP INDEX TYPE in Oracle&lt;/li&gt;
&lt;li&gt;Support Parsing ALTER PACKAGE in openGauss&lt;/li&gt;
&lt;li&gt;Support openGauss select offset, count statement parse, and remove useless syntax in PostgreSQL grammar&lt;/li&gt;
&lt;li&gt;Add max_size to openGauss syntax&lt;/li&gt;
&lt;li&gt;Optimize alter view/drop view parse logic and fix alter view refresher bug&lt;/li&gt;
&lt;li&gt;Add SQL parser error detail to ParseCancellationException&lt;/li&gt;
&lt;li&gt;Add support for parse OptOnConflict for Postgres&lt;/li&gt;
&lt;li&gt;Enhance support for ALTER TABLE and ALTER VIEW in PostgreSQL&lt;/li&gt;
&lt;li&gt;Add a missing keyword in the PostgreSQL Declare Statement&lt;/li&gt;
&lt;li&gt;Add JSON function support to MySQL parser&lt;/li&gt;
&lt;li&gt;ShardingSphere-Proxy automatically adapts to cgroup memory limits in the Docker environment&lt;/li&gt;
&lt;li&gt;Show migration status DistSQL respond new added error_message column&lt;/li&gt;
&lt;li&gt;Show migration status respond new added processed_records_count column&lt;/li&gt;
&lt;li&gt;Support MySQL 8 caching_sha2_password authentication in incremental dump&lt;/li&gt;
&lt;li&gt;Improve drop pipeline process configuration&lt;/li&gt;
&lt;li&gt;Support unique key table migration&lt;/li&gt;
&lt;li&gt;Support migrating table to new name table&lt;/li&gt;
&lt;li&gt;Improve thread pool usage in pipeline task and runner&lt;/li&gt;
&lt;li&gt;Support cancelable data consistency check&lt;/li&gt;
&lt;li&gt;DistSQL: When creating or altering the readwrite-splitting rule, check duplicate write or read resources&lt;/li&gt;
&lt;li&gt;DistSQL: Add validity check for &lt;code&gt;ALTER SHARDING BINDING TABLE RULES&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Standalone mode H2 supports persistent metadata&lt;/li&gt;
&lt;li&gt;Fix openGauss cursor execution in xa transaction&lt;/li&gt;
&lt;li&gt;Added transaction-related exceptions&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Bug Fixes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Generate proper placeholder for PostgreSQL when rewriting&lt;/li&gt;
&lt;li&gt;Fix opengauss update set parse exception&lt;/li&gt;
&lt;li&gt;Fix parse exception when executing insert statement with a negative value&lt;/li&gt;
&lt;li&gt;Fix wrong connectDescriptorUrlPattern in OracleDataSourceMetaData&lt;/li&gt;
&lt;li&gt;Fix insert SQL garbled after sharding rewrote in special rules&lt;/li&gt;
&lt;li&gt;Fix exception when executing select * from information_schema.tables&lt;/li&gt;
&lt;li&gt;Fix exception when executing alter view rename&lt;/li&gt;
&lt;li&gt;Fix PostgreSQL check data source permission when using rolsuper&lt;/li&gt;
&lt;li&gt;DistSQL: fix NPE for &lt;code&gt;REFRESH TABLE METADATA&lt;/code&gt; when there is no resource&lt;/li&gt;
&lt;li&gt;Fix Unmodified table metadata when modifying rules&lt;/li&gt;
&lt;li&gt;Fix database discovery: the MySQL.NORMAL_REPLICATION algorithm cannot find the primary node&lt;/li&gt;
&lt;li&gt;Fixed using etcd to build a cluster event not aware&lt;/li&gt;
&lt;li&gt;Fix NPE occurred when transaction management is not created&lt;/li&gt;
&lt;li&gt;Fix routing exception that occurs when the sharding value of the InlineShardingAlgorithm algorithm exceeds the Integer range&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  API Changes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;SQL HINT syntax format adjust to SQL-style format&lt;/li&gt;
&lt;li&gt;DistSQL: Remove syntax &lt;code&gt;COUNT DATABASE RULES&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;ShardingSphere mode removes overwrite configuration&lt;/li&gt;
&lt;li&gt;Agent: Optimize configuration of agent.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Relevant Links
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;a href="https://shardingsphere.apache.org/document/current/en/downloads/"&gt;&lt;em&gt;Download Link&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere/blob/master/RELEASE-NOTES.md"&gt;&lt;em&gt;Release Notes&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://shardingsphere.apache.org/"&gt;&lt;em&gt;Project Address&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere-on-cloud"&gt;&lt;em&gt;Cloud Sub-project Address&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Community Contribution
&lt;/h2&gt;

&lt;p&gt;This Apache ShardingSphere 5.2.1 release is the result of 614 merged PRs, committed by 38 contributors. Thank you for your efforts!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--grwary-x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aw16k33t8dks8cljvums.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--grwary-x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aw16k33t8dks8cljvums.png" alt="Image description" width="720" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>opensource</category>
      <category>saas</category>
      <category>cloud</category>
    </item>
    <item>
      <title>ShardingSphere 5.2.0: Audit for sharding intercepts unreasonable requests in multi-shards scenarios</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Mon, 17 Oct 2022 06:55:38 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/shardingsphere-520-audit-for-sharding-intercepts-unreasonable-requests-in-multi-shards-scenarios-5958</link>
      <guid>https://dev.to/apache_shardingsphere/shardingsphere-520-audit-for-sharding-intercepts-unreasonable-requests-in-multi-shards-scenarios-5958</guid>
      <description>&lt;h2&gt;
  
  
  1. Background
&lt;/h2&gt;

&lt;p&gt;Thanks to our continuous review of feedback from the &lt;a href="https://shardingsphere.apache.org/"&gt;ShardingSphere&lt;/a&gt; community, which we use to develop features such as data sharding and read/write splitting, we found that some users create a large number of shards when using the data sharding feature.&lt;/p&gt;

&lt;p&gt;In such cases, there can be 1,000 physical tables corresponding to one sharding logical table, which can seriously trouble users.&lt;/p&gt;

&lt;p&gt;For instance, a &lt;code&gt;SELECT * FROM t_order&lt;/code&gt; statement will lead to a full route, which is clearly unsuitable for &lt;a href="https://shardingsphere.apache.org/blog/en/material/2022_04_26_how_to_use_shardingsphere-proxy_in_real_production_scenarios_your_quick_start_guide/"&gt;OLTP&lt;/a&gt;. Such SQL can be routed to a separate Proxy instance to avoid blocking other requests.&lt;/p&gt;

&lt;p&gt;However, if users are not familiar with the Proxy, or write a &lt;code&gt;where&lt;/code&gt; condition without knowing that sharding is not supported for that condition, a full route is still performed.&lt;/p&gt;

&lt;p&gt;A full route can lower the performance of the Proxy and even cause a reasonable request to fail. Imagine that there are 1,000 shards in a physical database: if they are queried in parallel, 1,000 connections are needed, and if in serial, the request can lead to a timeout. In this regard, community users asked whether such unreasonable requests could be intercepted directly.&lt;/p&gt;

&lt;p&gt;We’ve considered the issue for a while. If we simply block the full-route operation, we just need to check it in the code and add a switch to the configuration file. On the other hand, if the user later needs to set a table to read-only or requires the update operation to carry a &lt;code&gt;limit&lt;/code&gt;, does that mean we need to change the code and configuration again? This obviously goes against the pluggable logic of Proxy.&lt;/p&gt;

&lt;p&gt;In response to the above problems, the &lt;a href="https://faun.pub/apache-shardingsphere-5-2-0-is-released-bringing-new-cloud-native-possibilities-8d674d964a93?source=your_stories_page-------------------------------------"&gt;recently released Apache ShardingSphere 5.2.0&lt;/a&gt; provides users with SQL audit for the sharding function. The audit can either be an interception operation or a statistical operation. Similar to the sharding and unique key generation algorithms, the audit algorithm is also plugin-oriented, user-defined, and configurable.&lt;/p&gt;

&lt;p&gt;Next, we will elaborate on the implementation logic for data sharding’s audit, with specific SQL examples.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Audit for sharding interface
&lt;/h2&gt;

&lt;p&gt;The entrance to Apache ShardingSphere’s audit is in the &lt;code&gt;org.apache.shardingsphere.infra.executor.check.SQLCheckEngine&lt;/code&gt; class, which will invoke the &lt;code&gt;check&lt;/code&gt; method of the &lt;code&gt;SQLChecker&lt;/code&gt; interface. Currently, ShardingSphere audit contains audit for permission (verify username and password) and audit for sharding.&lt;/p&gt;

&lt;p&gt;Here we focus on &lt;code&gt;ShardingAuditChecker&lt;/code&gt;, the implementation of this interface for audit for sharding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xw-viFkB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jo34img836ev0ov67y04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xw-viFkB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jo34img836ev0ov67y04.png" alt="Image description" width="720" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can learn its working principles quickly through viewing the &lt;code&gt;check&lt;/code&gt; code of &lt;code&gt;org.apache.shardingsphere.sharding.checker.audit.ShardingAuditChecker&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface ShardingAuditAlgorithm extends ShardingSphereAlgorithm {

    /**
     * Sharding audit algorithm SQL check.
     *
     * @param sqlStatementContext SQL statement context
     * @param parameters SQL parameters
     * @param grantee grantee
     * @param database database
     * @return SQL check result
     */
    SQLCheckResult check(SQLStatementContext&amp;lt;?&amp;gt; sqlStatementContext, List&amp;lt;Object&amp;gt; parameters, Grantee grantee, ShardingSphereDatabase database);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method obtains the audit strategies of all the sharding tables involved and invokes the audit algorithms configured in each sharding table audit strategy. If an audit algorithm fails to pass, an exception is displayed to the user.&lt;/p&gt;

&lt;p&gt;Some users may wonder what &lt;code&gt;disableAuditNames&lt;/code&gt; does here. The audit for sharding also allows users to skip this process. In some cases, users may need to execute SQL that should have been blocked by the audit, and they are aware of the impact of this SQL.&lt;/p&gt;

&lt;p&gt;For this reason, we provide the &lt;code&gt;disableAuditNames&lt;/code&gt; hint to skip audit interception, which will be described with practical examples later on. Proxy administrators can configure &lt;code&gt;allowHintDisable&lt;/code&gt; to control whether users are allowed to skip this process. The default value is &lt;code&gt;true&lt;/code&gt;, indicating that Hint-based skipping is allowed.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Audit for sharding algorithm
&lt;/h2&gt;

&lt;p&gt;The audit for sharding algorithm interface &lt;code&gt;org.apache.shardingsphere.sharding.spi.ShardingAuditAlgorithm&lt;/code&gt; inherits from the SPI class &lt;code&gt;ShardingSphereAlgorithm&lt;/code&gt;. It inherits the &lt;code&gt;type&lt;/code&gt; and &lt;code&gt;props&lt;/code&gt; properties and defines its own &lt;code&gt;check&lt;/code&gt; method. If you're looking to customize your own audit algorithm, just implement the interface and register it in &lt;code&gt;META-INF/services&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8RJ5WxUK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w7ed5puf7dmypkzzxr6n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8RJ5WxUK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w7ed5puf7dmypkzzxr6n.png" alt="Image description" width="570" height="142"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface ShardingAuditAlgorithm extends ShardingSphereAlgorithm {

    /**
     * Sharding audit algorithm SQL check.
     *
     * @param sqlStatementContext SQL statement context
     * @param parameters SQL parameters
     * @param grantee grantee
     * @param database database
     * @return SQL check result
     */
    SQLCheckResult check(SQLStatementContext&amp;lt;?&amp;gt; sqlStatementContext, List&amp;lt;Object&amp;gt; parameters, Grantee grantee, ShardingSphereDatabase database);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
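&lt;p&gt;As a sketch of the SPI registration step: assuming a hypothetical implementation class &lt;code&gt;com.example.MyShardingAuditAlgorithm&lt;/code&gt;, you would create a resource file named after the interface and list your class in it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# src/main/resources/META-INF/services/org.apache.shardingsphere.sharding.spi.ShardingAuditAlgorithm
com.example.MyShardingAuditAlgorithm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;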



&lt;p&gt;Apache ShardingSphere implements a general audit for sharding algorithm &lt;code&gt;org.apache.shardingsphere.sharding.algorithm.audit.DMLShardingConditionsShardingAuditAlgorithm&lt;/code&gt;, namely the above-mentioned SQL statement that intercepts the full-route.&lt;/p&gt;

&lt;p&gt;The algorithm makes decisions by determining whether the sharding condition is &lt;code&gt;null&lt;/code&gt;. Of course, it won't intercept broadcast tables and non-sharding tables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class DMLShardingConditionsShardingAuditAlgorithm implements ShardingAuditAlgorithm {

    @Getter
    private Properties props;

    @Override
    public void init(final Properties props) {
        this.props = props;
    }

    @SuppressWarnings({"rawtypes", "unchecked"})
    @Override
    public SQLCheckResult check(final SQLStatementContext&amp;lt;?&amp;gt; sqlStatementContext, final List&amp;lt;Object&amp;gt; parameters, final Grantee grantee, final ShardingSphereDatabase database) {
        if (sqlStatementContext.getSqlStatement() instanceof DMLStatement) {
            ShardingRule rule = database.getRuleMetaData().getSingleRule(ShardingRule.class);
            if (rule.isAllBroadcastTables(sqlStatementContext.getTablesContext().getTableNames())
                    || sqlStatementContext.getTablesContext().getTableNames().stream().noneMatch(rule::isShardingTable)) {
                return new SQLCheckResult(true, "");
            }
            ShardingConditionEngine shardingConditionEngine = ShardingConditionEngineFactory.createShardingConditionEngine(sqlStatementContext, database, rule);
            if (shardingConditionEngine.createShardingConditions(sqlStatementContext, parameters).isEmpty()) {
                return new SQLCheckResult(false, "Not allow DML operation without sharding conditions");
            }
        }
        return new SQLCheckResult(true, "");
    }

    @Override
    public String getType() {
        return "DML_SHARDING_CONDITIONS";
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we’d like to introduce another audit for sharding algorithm: &lt;code&gt;LimitRequiredShardingAuditAlgorithm&lt;/code&gt;. This algorithm intercepts &lt;code&gt;update&lt;/code&gt; and &lt;code&gt;delete&lt;/code&gt; operations that do not carry a &lt;code&gt;limit&lt;/code&gt; clause.&lt;/p&gt;

&lt;p&gt;As this algorithm is less universal, it is not currently integrated into Apache ShardingSphere. As you can see, it is very easy to implement a custom algorithm, which is why we need to design the audit for sharding framework. Thanks to its plugin-oriented architecture, ShardingSphere boasts great scalability.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class LimitRequiredShardingAuditAlgorithm implements ShardingAuditAlgorithm {

    @Getter
    private Properties props;

    @Override
    public void init(final Properties props) {
        this.props = props;
    }

    @SuppressWarnings({"rawtypes", "unchecked"})
    @Override
    public SQLCheckResult check(final SQLStatementContext&amp;lt;?&amp;gt; sqlStatementContext, final List&amp;lt;Object&amp;gt; parameters, final Grantee grantee, final ShardingSphereDatabase database) {
        if (sqlStatementContext instanceof UpdateStatementContext &amp;amp;&amp;amp; !((MySQLUpdateStatement) sqlStatementContext.getSqlStatement()).getLimit().isPresent()) {
            return new SQLCheckResult(false, "Not allow update without limit");
        }
        if (sqlStatementContext instanceof DeleteStatementContext &amp;amp;&amp;amp; !((MySQLDeleteStatement) sqlStatementContext.getSqlStatement()).getLimit().isPresent()) {
            return new SQLCheckResult(false, "Not allow delete without limit");
        }
        return new SQLCheckResult(true, "");
    }

    @Override
    public String getType() {
        return "LIMIT_REQUIRED";
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
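&lt;p&gt;Once such an algorithm is packaged and registered via SPI, it could be referenced in the audit configuration just like the built-in type (the auditor name below is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;auditors:
  limit_required_auditor:
    type: LIMIT_REQUIRED
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;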



&lt;h2&gt;
  
  
  4. Using audit for sharding
&lt;/h2&gt;

&lt;p&gt;Audit for sharding requires you to configure an audit strategy for logical tables. To help you get started quickly, its configuration is the same as that of the sharding algorithm and the sharding key value generator.&lt;/p&gt;

&lt;p&gt;There is an algorithm definition and a strategy definition, and a default audit strategy is also supported. If an audit strategy is configured in a logical table, it takes effect only for that logical table.&lt;/p&gt;

&lt;p&gt;If &lt;code&gt;defaultAuditStrategy&lt;/code&gt; is configured, it takes effect for all the logical tables under the sharding rule. &lt;code&gt;Auditors&lt;/code&gt; are similar to &lt;code&gt;ShardingAlgorithms&lt;/code&gt;, &lt;code&gt;auditStrategy&lt;/code&gt; to &lt;code&gt;databaseStrategy&lt;/code&gt;, and &lt;code&gt;defaultAuditStrategy&lt;/code&gt; to &lt;code&gt;defaultDatabaseStrategy&lt;/code&gt; or &lt;code&gt;defaultTableStrategy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Please refer to the following configuration. Only the configuration of audit for sharding is displayed. You need to configure the sharding algorithm and data source by yourself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rules:
  - !SHARDING
    tables:
      t_order:
        actualDataNodes: ds_${0..1}.t_order_${0..1}
        auditStrategy:
          auditorNames:
            - sharding_key_required_auditor
          allowHintDisable: true
    defaultAuditStrategy:
      auditorNames:
        - sharding_key_required_auditor
      allowHintDisable: true
    auditors:
      sharding_key_required_auditor:
        type: DML_SHARDING_CONDITIONS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Execute a query operation. An error is displayed as the audit strategy for intercepting the full-database route is configured.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; select * from t_order;
ERROR 13000 (44000): SQL check failed, error message: Not allow DML operation without sharding conditions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Add a &lt;code&gt;HINT&lt;/code&gt;. The hint format is &lt;code&gt;/* ShardingSphere hint: disableAuditNames */&lt;/code&gt;, and &lt;code&gt;disableAuditNames&lt;/code&gt; is followed by the &lt;code&gt;auditorNames&lt;/code&gt; configured in the preceding configuration.&lt;/p&gt;

&lt;p&gt;If there are multiple names, separate them with spaces, such as &lt;code&gt;/* ShardingSphere hint: disableAuditNames=auditName1 auditName2 */&lt;/code&gt;. After using the hint, we can see that the SQL operation is executed successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; /* ShardingSphere hint: disableAuditNames=sharding_key_required_auditor */ select * from t_order;
+----------+---------+------------+--------+
| order_id | user_id | address_id | status |
+----------+---------+------------+--------+
|       30 |      20 |         10 | 20     |
|       32 |      22 |         10 | 20     |
+----------+---------+------------+--------+
2 rows in set (0.01 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: using &lt;code&gt;HINT&lt;/code&gt; requires you to modify the &lt;code&gt;server.yaml&lt;/code&gt; configuration of the Proxy. In addition, if you are using a MySQL terminal to connect to the Proxy directly, you need to add the &lt;code&gt;-c&lt;/code&gt; option; otherwise, the &lt;code&gt;HINT&lt;/code&gt; comments will be filtered out by the MySQL client and will never be parsed by the Proxy on the backend.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rules:
  - !SQL_PARSER
    sqlCommentParseEnabled: true
    sqlStatementCache:
      initialCapacity: 2000
      maximumSize: 65535
    parseTreeCache:
      initialCapacity: 128
      maximumSize: 1024
props:
  proxy-hint-enabled: true

# Connect with the MySQL client's -c option so that comments are kept:
mysql -uroot -proot -h127.0.0.1 -P3307 -c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. DistSQL with audit for sharding
&lt;/h2&gt;

&lt;p&gt;Currently, as you can see from the &lt;a href="https://github.com/apache/shardingsphere/releases/tag/5.2.0"&gt;release notes&lt;/a&gt;, Apache ShardingSphere 5.2.0 supports the following &lt;a href="https://shardingsphere.apache.org/document/5.1.0/en/concepts/distsql/"&gt;DistSQL&lt;/a&gt; statements for the audit for sharding function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE SHARDING AUDITOR
ALTER SHARDING AUDITOR
SHOW SHARDING AUDIT ALGORITHMS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following DistSQL will be supported in future releases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DROP SHARDING AUDITOR
SHOW UNUSED SHARDING AUDIT ALGORITHMS
CREATE SHARDING TABLE RULE # including AUDIT_STRATEGY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This post introduced how audit for sharding works, with specific examples. You should now have a basic understanding of this function, and you can use it whenever you need to, or implement a custom algorithm.&lt;/p&gt;

&lt;p&gt;You are also welcome to submit general-purpose algorithms to the community. If you have any ideas you’d like to contribute, or if you encounter any issues with ShardingSphere, feel free to post them on &lt;a href="https://github.com/apache/shardingsphere"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;

&lt;p&gt;Huang Ting, a technology engineer at &lt;a href="https://www.tencent.com/en-us/"&gt;Tencent&lt;/a&gt; Financial Technology (FiT) &amp;amp; &lt;a href="https://shardingsphere.apache.org/community/en/team/"&gt;ShardingSphere Committer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;He is mainly responsible for the R&amp;amp;D of Proxy-related audit for sharding and transaction features.&lt;/p&gt;

</description>
      <category>database</category>
      <category>saas</category>
      <category>opensource</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How does ShardingSphere’s Show processlist &amp; Kill Work?</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Thu, 22 Sep 2022 10:30:31 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/how-does-shardingspheres-show-processlist-kill-work-3mi</link>
      <guid>https://dev.to/apache_shardingsphere/how-does-shardingspheres-show-processlist-kill-work-3mi</guid>
      <description>&lt;p&gt;For those of you who often use databases, you may wonder:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;How do I check what SQL is currently being executed by the database, and in what state?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do I terminate abnormal SQL? For instance, if a &lt;code&gt;SELECT&lt;/code&gt; statement used to query a table with massive data does not carry query conditions, it can drag down the performance of the entire database. This may push you to terminate the abnormal SQL.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In response to the above issues, &lt;a href="https://shardingsphere.apache.org/" rel="noopener noreferrer"&gt;Apache ShardingSphere&lt;/a&gt; introduced functions such as &lt;code&gt;Show processlist&lt;/code&gt; and &lt;code&gt;Kill &amp;lt;processID&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2ixy7mypvgdxpmvprt9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2ixy7mypvgdxpmvprt9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Show processlist&lt;/code&gt;: this command displays the list of SQL currently being executed by ShardingSphere and the execution progress of each SQL statement. If ShardingSphere is deployed in cluster mode, the &lt;code&gt;Show processlist&lt;/code&gt; function aggregates the SQL running on all Proxy instances in the cluster and then displays the result, so you can always see all the SQL running at that moment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; show processlist \G;
*************************** 1. row ***************************
     Id: 82a67f254959e0a0807a00f3cd695d87
   User: root
   Host: 10.200.79.156
     db: root
Command: Execute
   Time: 19
  State: Executing 0/1
   Info: update t_order set version = 456
1 row in set (0.24 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Kill &amp;lt;processID&amp;gt;&lt;/code&gt;: This command is implemented based on &lt;code&gt;Show processlist&lt;/code&gt; and can terminate the running SQL listed in the &lt;code&gt;Show processlist&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; kill 82a67f254959e0a0807a00f3cd695d87;
Query OK, 0 rows affected (0.17 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. How do they work?
&lt;/h2&gt;

&lt;p&gt;Now that you understand the functions of &lt;code&gt;Show processlist&lt;/code&gt; and &lt;code&gt;Kill &amp;lt;processID&amp;gt;&lt;/code&gt;, let's see how the two commands work. As the working principle behind &lt;code&gt;Kill &amp;lt;processID&amp;gt;&lt;/code&gt; is similar to that of &lt;code&gt;Show processlist&lt;/code&gt;, we'll focus on the interpretation of &lt;code&gt;Show processlist&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1 How is SQL saved and destroyed?
&lt;/h3&gt;

&lt;p&gt;Each SQL executed in ShardingSphere will generate an &lt;code&gt;ExecutionGroupContext&lt;/code&gt; object. The object contains all the information about this SQL, among which there is an &lt;code&gt;executionID&lt;/code&gt; field to ensure its uniqueness.&lt;/p&gt;
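&lt;p&gt;The uniqueness scheme can be illustrated with a short sketch. The helper below is hypothetical, but it mirrors the ID generation that appears later in &lt;code&gt;ProcessRegistrySubscriber&lt;/code&gt;: a UUID built from two random longs, with the dashes stripped.&lt;/p&gt;

```java
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

public class ExecutionIdDemo {

    // Hypothetical helper mirroring the ID scheme seen in ProcessRegistrySubscriber:
    // a UUID built from two random longs, dashes removed, yielding a
    // 32-character hexadecimal identifier.
    public static String newExecutionId() {
        return new UUID(ThreadLocalRandom.current().nextLong(), ThreadLocalRandom.current().nextLong())
                .toString().replace("-", "");
    }

    public static void main(String[] args) {
        String id = newExecutionId();
        System.out.println(id.length());  // 32
    }
}
```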

&lt;p&gt;When ShardingSphere receives a SQL command, &lt;code&gt;GovernanceExecuteProcessReporter#report&lt;/code&gt; is called to store the &lt;code&gt;ExecutionGroupContext&lt;/code&gt; information in a &lt;code&gt;ConcurrentHashMap&lt;/code&gt; cache (currently only DML and DDL statements of MySQL are supported; other types of databases will be supported in later versions. Query statements are also classified as DML).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class GovernanceExecuteProcessReporter implements ExecuteProcessReporter {

    @Override
    public void report(final QueryContext queryContext, final ExecutionGroupContext&amp;lt;? extends SQLExecutionUnit&amp;gt; executionGroupContext,
                       final ExecuteProcessConstants constants, final EventBusContext eventBusContext) {
        ExecuteProcessContext executeProcessContext = new ExecuteProcessContext(queryContext.getSql(), executionGroupContext, constants);
        ShowProcessListManager.getInstance().putProcessContext(executeProcessContext.getExecutionID(), executeProcessContext);
        ShowProcessListManager.getInstance().putProcessStatement(executeProcessContext.getExecutionID(), executeProcessContext.getProcessStatements());
    }
}
@NoArgsConstructor(access = AccessLevel.PRIVATE)
public final class ShowProcessListManager {

    private static final ShowProcessListManager INSTANCE = new ShowProcessListManager();

    @Getter
    private final Map&amp;lt;String, ExecuteProcessContext&amp;gt; processContexts = new ConcurrentHashMap&amp;lt;&amp;gt;();

    @Getter
    private final Map&amp;lt;String, Collection&amp;lt;Statement&amp;gt;&amp;gt; processStatements = new ConcurrentHashMap&amp;lt;&amp;gt;();

    public static ShowProcessListManager getInstance() {
        return INSTANCE;
    }

    public void putProcessContext(final String executionId, final ExecuteProcessContext processContext) {
        processContexts.put(executionId, processContext);
    }

    public void putProcessStatement(final String executionId, final Collection&amp;lt;Statement&amp;gt; statements) {
        if (statements.isEmpty()) {
            return;
        }
        processStatements.put(executionId, statements);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown above, the &lt;code&gt;ShowProcessListManager&lt;/code&gt; class has two cache Maps, namely &lt;code&gt;processContexts&lt;/code&gt; and &lt;code&gt;processStatements&lt;/code&gt;. The former stores the mapping between &lt;code&gt;executionID&lt;/code&gt; and &lt;code&gt;ExecuteProcessContext&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The latter contains the mapping between &lt;code&gt;executionID&lt;/code&gt; and the &lt;code&gt;Statement&lt;/code&gt; objects, since one SQL statement may be rewritten into multiple statements.&lt;/p&gt;

&lt;p&gt;Every time ShardingSphere receives a SQL statement, its information is cached into the two Maps. After the SQL is executed, the corresponding entries are removed from the cache.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@RequiredArgsConstructor
public final class ProxyJDBCExecutor {

    private final String type;

    private final ConnectionSession connectionSession;

    private final JDBCDatabaseCommunicationEngine databaseCommunicationEngine;

    private final JDBCExecutor jdbcExecutor;

    public List&amp;lt;ExecuteResult&amp;gt; execute(final QueryContext queryContext, final ExecutionGroupContext&amp;lt;JDBCExecutionUnit&amp;gt; executionGroupContext,
                                       final boolean isReturnGeneratedKeys, final boolean isExceptionThrown) throws SQLException {
        try {
            MetaDataContexts metaDataContexts = ProxyContext.getInstance().getContextManager().getMetaDataContexts();
            EventBusContext eventBusContext = ProxyContext.getInstance().getContextManager().getInstanceContext().getEventBusContext();
            ShardingSphereDatabase database = metaDataContexts.getMetaData().getDatabase(connectionSession.getDatabaseName());
            DatabaseType protocolType = database.getProtocolType();
            DatabaseType databaseType = database.getResource().getDatabaseType();
            ExecuteProcessEngine.initialize(queryContext, executionGroupContext, eventBusContext);
            SQLStatementContext&amp;lt;?&amp;gt; context = queryContext.getSqlStatementContext();
            List&amp;lt;ExecuteResult&amp;gt; result = jdbcExecutor.execute(executionGroupContext,
                    ProxyJDBCExecutorCallbackFactory.newInstance(type, protocolType, databaseType, context.getSqlStatement(), databaseCommunicationEngine, isReturnGeneratedKeys, isExceptionThrown,
                            true),
                    ProxyJDBCExecutorCallbackFactory.newInstance(type, protocolType, databaseType, context.getSqlStatement(), databaseCommunicationEngine, isReturnGeneratedKeys, isExceptionThrown,
                            false));
            ExecuteProcessEngine.finish(executionGroupContext.getExecutionID(), eventBusContext);
            return result;
        } finally {
            ExecuteProcessEngine.clean();
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown above, &lt;code&gt;ExecuteProcessEngine.initialize(queryContext, executionGroupContext, eventBusContext);&lt;/code&gt; stores the SQL information in the two cache Maps. Finally, &lt;code&gt;ExecuteProcessEngine.clean();&lt;/code&gt; in the &lt;code&gt;finally&lt;/code&gt; block clears the cached entries.&lt;/p&gt;

&lt;p&gt;The SQL shown in &lt;code&gt;Show processlist&lt;/code&gt; is obtained from &lt;code&gt;processContexts&lt;/code&gt;. But this Map is just a local cache. If ShardingSphere is deployed in cluster mode, how does &lt;code&gt;Show processlist&lt;/code&gt; obtain the SQL running on other machines in the cluster? Let's see how ShardingSphere handles it.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.2 How does &lt;code&gt;Show processlist&lt;/code&gt; work?
&lt;/h3&gt;

&lt;p&gt;When ShardingSphere receives the &lt;code&gt;Show processlist&lt;/code&gt; command, it is sent to the executor &lt;code&gt;ShowProcessListExecutor#execute&lt;/code&gt; for processing. The implementation of &lt;code&gt;getQueryResult()&lt;/code&gt; is the focus.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class ShowProcessListExecutor implements DatabaseAdminQueryExecutor {

    private Collection&amp;lt;String&amp;gt; batchProcessContexts;

    @Getter
    private QueryResultMetaData queryResultMetaData;

    @Getter
    private MergedResult mergedResult;

    public ShowProcessListExecutor() {
        ProxyContext.getInstance().getContextManager().getInstanceContext().getEventBusContext().register(this);
    }

    @Subscribe
    public void receiveProcessListData(final ShowProcessListResponseEvent event) {
        batchProcessContexts = event.getBatchProcessContexts();
    }

    @Override
    public void execute(final ConnectionSession connectionSession) {
        queryResultMetaData = createQueryResultMetaData();
        mergedResult = new TransparentMergedResult(getQueryResult());
    }

    private QueryResult getQueryResult() {
        ProxyContext.getInstance().getContextManager().getInstanceContext().getEventBusContext().post(new ShowProcessListRequestEvent());
        if (null == batchProcessContexts || batchProcessContexts.isEmpty()) {
            return new RawMemoryQueryResult(queryResultMetaData, Collections.emptyList());
        }
        Collection&amp;lt;YamlExecuteProcessContext&amp;gt; processContexts = new LinkedList&amp;lt;&amp;gt;();
        for (String each : batchProcessContexts) {
            processContexts.addAll(YamlEngine.unmarshal(each, BatchYamlExecuteProcessContext.class).getContexts());
        }
        List&amp;lt;MemoryQueryResultDataRow&amp;gt; rows = processContexts.stream().map(processContext -&amp;gt; {
            List&amp;lt;Object&amp;gt; rowValues = new ArrayList&amp;lt;&amp;gt;(8);
            rowValues.add(processContext.getExecutionID());
            rowValues.add(processContext.getUsername());
            rowValues.add(processContext.getHostname());
            rowValues.add(processContext.getDatabaseName());
            rowValues.add("Execute");
            rowValues.add(TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis() - processContext.getStartTimeMillis()));
            int processDoneCount = processContext.getUnitStatuses().stream().map(each -&amp;gt; ExecuteProcessConstants.EXECUTE_STATUS_DONE == each.getStatus() ? 1 : 0).reduce(0, Integer::sum);
            String statePrefix = "Executing ";
            rowValues.add(statePrefix + processDoneCount + "/" + processContext.getUnitStatuses().size());
            String sql = processContext.getSql();
            if (null != sql &amp;amp;&amp;amp; sql.length() &amp;gt; 100) {
                sql = sql.substring(0, 100);
            }
            rowValues.add(null != sql ? sql : "");
            return new MemoryQueryResultDataRow(rowValues);
        }).collect(Collectors.toList());
        return new RawMemoryQueryResult(queryResultMetaData, rows);
    }

    private QueryResultMetaData createQueryResultMetaData() {
        List&amp;lt;RawQueryResultColumnMetaData&amp;gt; columns = new ArrayList&amp;lt;&amp;gt;();
        columns.add(new RawQueryResultColumnMetaData("", "Id", "Id", Types.VARCHAR, "VARCHAR", 20, 0));
        columns.add(new RawQueryResultColumnMetaData("", "User", "User", Types.VARCHAR, "VARCHAR", 20, 0));
        columns.add(new RawQueryResultColumnMetaData("", "Host", "Host", Types.VARCHAR, "VARCHAR", 64, 0));
        columns.add(new RawQueryResultColumnMetaData("", "db", "db", Types.VARCHAR, "VARCHAR", 64, 0));
        columns.add(new RawQueryResultColumnMetaData("", "Command", "Command", Types.VARCHAR, "VARCHAR", 64, 0));
        columns.add(new RawQueryResultColumnMetaData("", "Time", "Time", Types.VARCHAR, "VARCHAR", 10, 0));
        columns.add(new RawQueryResultColumnMetaData("", "State", "State", Types.VARCHAR, "VARCHAR", 64, 0));
        columns.add(new RawQueryResultColumnMetaData("", "Info", "Info", Types.VARCHAR, "VARCHAR", 120, 0));
        return new RawQueryResultMetaData(columns);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ShardingSphere uses the &lt;code&gt;guava&lt;/code&gt; package's &lt;code&gt;EventBus&lt;/code&gt;, a publish/subscribe library that is an elegant implementation of the &lt;a href="https://en.wikipedia.org/wiki/Observer_pattern" rel="noopener noreferrer"&gt;Observer pattern&lt;/a&gt;. &lt;code&gt;EventBus&lt;/code&gt; decouples classes from each other, and you'll find out more about it below.&lt;/p&gt;
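&lt;p&gt;To make the publish/subscribe flow concrete, here is a minimal, self-contained sketch of the pattern. It is not Guava's actual API; the bus and event classes below are illustrative stand-ins only:&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal publish/subscribe bus in the spirit of Guava's EventBus
// (illustrative sketch only; Guava uses @Subscribe-annotated methods).
final class SimpleEventBus {

    private final Map<Class<?>, List<Consumer<?>>> subscribers = new HashMap<>();

    // Register a subscriber for one event type (analogous to @Subscribe).
    <T> void register(final Class<T> eventType, final Consumer<T> subscriber) {
        subscribers.computeIfAbsent(eventType, key -> new ArrayList<>()).add(subscriber);
    }

    // Deliver the event to every subscriber registered for its exact type.
    @SuppressWarnings("unchecked")
    void post(final Object event) {
        for (Consumer<?> each : subscribers.getOrDefault(event.getClass(), List.of())) {
            ((Consumer<Object>) each).accept(event);
        }
    }
}

public class EventBusDemo {

    // Stand-in for ShardingSphere's ShowProcessListRequestEvent.
    record ShowProcessListRequestEvent(String processListId) { }

    public static void main(String[] args) {
        SimpleEventBus bus = new SimpleEventBus();
        List<String> received = new ArrayList<>();
        bus.register(ShowProcessListRequestEvent.class, event -> received.add(event.processListId()));
        bus.post(new ShowProcessListRequestEvent("abc123"));
        System.out.println(received);  // [abc123]
    }
}
```

&lt;p&gt;Posting an event is a plain method call; the bus, not the poster, knows who is subscribed, which is what decouples the executor from the subscriber classes shown below.&lt;/p&gt;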

&lt;p&gt;The &lt;code&gt;getQueryResult()&lt;/code&gt; method posts a &lt;code&gt;ShowProcessListRequestEvent&lt;/code&gt;. &lt;code&gt;ProcessRegistrySubscriber#loadShowProcessListData&lt;/code&gt; uses the &lt;code&gt;@Subscribe&lt;/code&gt; annotation to subscribe to the event.&lt;/p&gt;

&lt;p&gt;This method is the core of the &lt;code&gt;Show processlist&lt;/code&gt; implementation. Next, we'll walk through its specific steps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class ProcessRegistrySubscriber {    
    @Subscribe
    public void loadShowProcessListData(final ShowProcessListRequestEvent event) {
        String processListId = new UUID(ThreadLocalRandom.current().nextLong(), ThreadLocalRandom.current().nextLong()).toString().replace("-", "");
        boolean triggerIsComplete = false;
        // 1. Obtain the Process List path of all existing proxy nodes in cluster mode
        Collection&amp;lt;String&amp;gt; triggerPaths = getTriggerPaths(processListId);
        try {
            // 2. Iterate through the path and write an empty string to the node, to trigger the node monitoring.
            triggerPaths.forEach(each -&amp;gt; repository.persist(each, ""));
            // 3. Lock and wait 5 seconds for each node to write the information of currently running SQL to the persistence layer. 
            triggerIsComplete = waitAllNodeDataReady(processListId, triggerPaths);
            // 4. Fetch and aggregate the data written by each proxy node from the persistence layer. Then EventBus will post a ShowProcessListResponseEvent command, which means the operation is completed.
            sendShowProcessList(processListId);
        } finally {
            // 5. Delete resources
            repository.delete(ProcessNode.getProcessListIdPath(processListId));
            if (!triggerIsComplete) {
                triggerPaths.forEach(repository::delete);
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It contains five steps and steps 2 &amp;amp; 3 are the focus.&lt;/p&gt;

&lt;h4&gt;
  
  
  2.2.1 Step 2: how the cluster obtains the data
&lt;/h4&gt;

&lt;p&gt;In this step, an empty string will be written to the node &lt;code&gt;/nodes/compute_nodes/process_trigger/&amp;lt;instanceId&amp;gt;:&amp;lt;processlistId&amp;gt;&lt;/code&gt;, which will trigger ShardingSphere's monitoring logic.&lt;/p&gt;
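&lt;p&gt;The trigger path encodes both identifiers. A simplified parser for this layout can be sketched as follows; the regular expression is an illustrative assumption, not ShardingSphere's actual pattern:&lt;/p&gt;

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ProcessTriggerPathParser {

    // Matches /nodes/compute_nodes/process_trigger/<instanceId>:<processlistId>
    // (illustrative pattern; ShardingSphere's real matcher may differ).
    private static final Pattern PATTERN = Pattern.compile("/nodes/compute_nodes/process_trigger/([^:]+):(\\S+)$");

    // Returns {instanceId, processlistId} if the path matches the trigger layout.
    public static Optional<String[]> parse(final String path) {
        Matcher matcher = PATTERN.matcher(path);
        return matcher.find() ? Optional.of(new String[]{matcher.group(1), matcher.group(2)}) : Optional.empty();
    }

    public static void main(String[] args) {
        parse("/nodes/compute_nodes/process_trigger/proxy-1:f3b0a9").ifPresent(groups ->
                System.out.println(groups[0] + " / " + groups[1]));  // proxy-1 / f3b0a9
    }
}
```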

&lt;p&gt;When ShardingSphere is started, the persistence layer will &lt;code&gt;watch&lt;/code&gt; to monitor a series of path changes, such as the addition, deletion, and modification operations of the path &lt;code&gt;/nodes/compute_nodes&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;However, monitoring is an asynchronous process and the main thread does not block, so step 3 is required to lock and wait for each ShardingSphere node to write its currently running SQL information into the persistence layer.&lt;/p&gt;

&lt;p&gt;Let's take a look at how the ShardingSphere handles the monitoring logic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class ComputeNodeStateChangedWatcher implements GovernanceWatcher&amp;lt;GovernanceEvent&amp;gt; {

    @Override
    public Collection&amp;lt;String&amp;gt; getWatchingKeys(final String databaseName) {
        return Collections.singleton(ComputeNode.getComputeNodePath());
    }

    @Override
    public Collection&amp;lt;Type&amp;gt; getWatchingTypes() {
        return Arrays.asList(Type.ADDED, Type.UPDATED, Type.DELETED);
    }

    @SuppressWarnings("unchecked")
    @Override
    public Optional&amp;lt;GovernanceEvent&amp;gt; createGovernanceEvent(final DataChangedEvent event) {
        String instanceId = ComputeNode.getInstanceIdByComputeNode(event.getKey());
        if (!Strings.isNullOrEmpty(instanceId)) {
            ...
        } else if (event.getKey().startsWith(ComputeNode.getOnlineInstanceNodePath())) {
            return createInstanceEvent(event);
            // show processlist
        } else if (event.getKey().startsWith(ComputeNode.getProcessTriggerNodePatch())) {
            return createShowProcessListTriggerEvent(event);
            // kill processlistId
        } else if (event.getKey().startsWith(ComputeNode.getProcessKillNodePatch())) {
            return createKillProcessListIdEvent(event);
        }
        return Optional.empty();
    }


    private Optional&amp;lt;GovernanceEvent&amp;gt; createShowProcessListTriggerEvent(final DataChangedEvent event) {
        Matcher matcher = getShowProcessTriggerMatcher(event);
        if (!matcher.find()) {
            return Optional.empty();
        }
        if (Type.ADDED == event.getType()) {
            return Optional.of(new ShowProcessListTriggerEvent(matcher.group(1), matcher.group(2)));
        }
        if (Type.DELETED == event.getType()) {
            return Optional.of(new ShowProcessListUnitCompleteEvent(matcher.group(2)));
        }
        return Optional.empty();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;ComputeNodeStateChangedWatcher#createGovernanceEvent&lt;/code&gt; receives a change notification, it determines which event to create according to the path.&lt;/p&gt;

&lt;p&gt;In the code above, a newly added node produces a &lt;code&gt;ShowProcessListTriggerEvent&lt;/code&gt;. As each ShardingSphere instance monitors &lt;code&gt;/nodes/compute_nodes&lt;/code&gt;, each instance will process the &lt;code&gt;ShowProcessListTriggerEvent&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In this case, single-machine processing is transformed into cluster processing. Let's look at how ShardingSphere handles it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class ClusterContextManagerCoordinator {
    @Subscribe
    public synchronized void triggerShowProcessList(final ShowProcessListTriggerEvent event) {
        if (!event.getInstanceId().equals(contextManager.getInstanceContext().getInstance().getMetaData().getId())) {
            return;
        }
        Collection&amp;lt;ExecuteProcessContext&amp;gt; processContexts = ShowProcessListManager.getInstance().getAllProcessContext();
        if (!processContexts.isEmpty()) {
            registryCenter.getRepository().persist(ProcessNode.getProcessListInstancePath(event.getProcessListId(), event.getInstanceId()),
                    YamlEngine.marshal(new BatchYamlExecuteProcessContext(processContexts)));
        }
        registryCenter.getRepository().delete(ComputeNode.getProcessTriggerInstanceIdNodePath(event.getInstanceId(), event.getProcessListId()));
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;ClusterContextManagerCoordinator#triggerShowProcessList&lt;/code&gt; subscribes to &lt;code&gt;ShowProcessListTriggerEvent&lt;/code&gt;, and each instance processes only its own &lt;code&gt;processContext&lt;/code&gt; data. &lt;code&gt;ShowProcessListManager.getInstance().getAllProcessContext()&lt;/code&gt; retrieves the currently running &lt;code&gt;processContext&lt;/code&gt;s (the SQL information that ShardingSphere stores in the Map before each SQL execution, as described at the beginning of the article) and writes them to the persistence layer. Deleting the &lt;code&gt;/nodes/compute_nodes/process_trigger/&amp;lt;instanceId&amp;gt;:&amp;lt;processlistId&amp;gt;&lt;/code&gt; node marks the processing as completed.&lt;/p&gt;

&lt;p&gt;When the node is deleted, the watch is triggered again and a &lt;code&gt;ShowProcessListUnitCompleteEvent&lt;/code&gt; is posted. This event finally awakens the pending lock.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class ClusterContextManagerCoordinator {

    @Subscribe
    public synchronized void completeUnitShowProcessList(final ShowProcessListUnitCompleteEvent event) {
        ShowProcessListSimpleLock simpleLock = ShowProcessListManager.getInstance().getLocks().get(event.getProcessListId());
        if (null != simpleLock) {
            simpleLock.doNotify();
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2.2.2 Step 3: lock and wait for the data
&lt;/h4&gt;

&lt;p&gt;ShardingSphere uses the &lt;code&gt;isReady(Paths)&lt;/code&gt; method to determine whether all instances have been processed. It returns &lt;code&gt;true&lt;/code&gt; only when all instances have been processed.&lt;/p&gt;

&lt;p&gt;There is a maximum waiting time of 5 seconds for data processing. If the processing is not completed in 5 seconds, then &lt;code&gt;false&lt;/code&gt; is returned.&lt;/p&gt;
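&lt;p&gt;The lock-and-wait step can be sketched with plain &lt;code&gt;wait&lt;/code&gt;/&lt;code&gt;notify&lt;/code&gt; and a deadline, in the spirit of &lt;code&gt;ShowProcessListSimpleLock&lt;/code&gt;. This is a simplified, hypothetical stand-in, not the actual class:&lt;/p&gt;

```java
// Simplified stand-in for ShowProcessListSimpleLock: the requesting thread
// waits up to a deadline, and each completing node calls doNotify() to wake it.
public class SimpleWaitLock {

    private final Object lock = new Object();

    private int remainingNodes;

    public SimpleWaitLock(final int nodeCount) {
        this.remainingNodes = nodeCount;
    }

    // Returns true only if every node reported completion within the timeout.
    public boolean awaitAllNodes(final long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (lock) {
            while (remainingNodes > 0) {
                long left = deadline - System.currentTimeMillis();
                if (left <= 0) {
                    return false;  // timed out: some node did not finish in time
                }
                lock.wait(left);
            }
            return true;
        }
    }

    public void doNotify() {
        synchronized (lock) {
            remainingNodes--;
            lock.notifyAll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleWaitLock lock = new SimpleWaitLock(2);
        // Two "nodes" report completion from background threads.
        new Thread(lock::doNotify).start();
        new Thread(lock::doNotify).start();
        System.out.println(lock.awaitAllNodes(5000));  // true
    }
}
```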

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class ClusterContextManagerCoordinator {

    @Subscribe
    public synchronized void completeUnitShowProcessList(final ShowProcessListUnitCompleteEvent event) {
        ShowProcessListSimpleLock simpleLock = ShowProcessListManager.getInstance().getLocks().get(event.getProcessListId());
        if (null != simpleLock) {
            simpleLock.doNotify();
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  2.2.3 Aggregate the &lt;code&gt;processList&lt;/code&gt; data and return it
&lt;/h4&gt;

&lt;p&gt;After each instance has processed the data, the instance that received the &lt;code&gt;Show processlist&lt;/code&gt; command needs to aggregate the data and then display the result.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public final class ProcessRegistrySubscriber {  

    private void sendShowProcessList(final String processListId) {
        List&amp;lt;String&amp;gt; childrenKeys = repository.getChildrenKeys(ProcessNode.getProcessListIdPath(processListId));
        Collection&amp;lt;String&amp;gt; batchProcessContexts = new LinkedList&amp;lt;&amp;gt;();
        for (String each : childrenKeys) {
            batchProcessContexts.add(repository.get(ProcessNode.getProcessListInstancePath(processListId, each)));
        }
        eventBusContext.post(new ShowProcessListResponseEvent(batchProcessContexts));
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;ProcessRegistrySubscriber#sendShowProcessList&lt;/code&gt; will aggregate the running SQL data into &lt;code&gt;batchProcessContexts&lt;/code&gt;, and then post &lt;code&gt;ShowProcessListResponseEvent&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This event will be consumed by &lt;code&gt;ShowProcessListExecutor#receiveProcessListData&lt;/code&gt;, and the &lt;code&gt;getQueryResult()&lt;/code&gt; method will proceed to show the &lt;code&gt;queryResult&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So far, we’ve completed the execution process of &lt;code&gt;Show processlist&lt;/code&gt; command.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.3 How does &lt;code&gt;Kill &amp;lt;processId&amp;gt;&lt;/code&gt; work?
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Kill &amp;lt;processId&amp;gt;&lt;/code&gt; shares a similar logic with &lt;code&gt;Show processlist&lt;/code&gt;, that is to combine &lt;code&gt;EventBus&lt;/code&gt; with the &lt;code&gt;watch&lt;/code&gt; mechanism.&lt;/p&gt;

&lt;p&gt;Since we do not know which SQL the &lt;code&gt;processId&lt;/code&gt; belongs to, it is also necessary to add empty nodes for each instance.&lt;/p&gt;

&lt;p&gt;Through the &lt;code&gt;watch&lt;/code&gt; mechanism, each ShardingSphere instance monitors the new node and checks whether the &lt;code&gt;processId&lt;/code&gt; key is in its cache Map. If it is, the instance fetches the value corresponding to the key.&lt;/p&gt;

&lt;p&gt;The value is a &lt;code&gt;Collection&amp;lt;Statement&amp;gt;&lt;/code&gt;. You only have to iterate through the &lt;code&gt;Statement&lt;/code&gt; collection and call &lt;code&gt;statement.cancel()&lt;/code&gt; on each one. Under the hood, the &lt;code&gt;java.sql.Statement#cancel()&lt;/code&gt; method is called to cancel the SQL execution.&lt;/p&gt;
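&lt;p&gt;Putting the lookup and cancellation together, the kill step can be sketched as below. The map name and the dynamic-proxy fake &lt;code&gt;Statement&lt;/code&gt; are demo assumptions; only the &lt;code&gt;java.sql.Statement#cancel()&lt;/code&gt; call comes from the description above:&lt;/p&gt;

```java
import java.lang.reflect.Proxy;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class KillSketch {

    // processId -> the Statements produced for that execution
    // (mirrors the processStatements cache described earlier).
    private static final Map<String, Collection<Statement>> PROCESS_STATEMENTS = new ConcurrentHashMap<>();

    // If this instance holds the processId, cancel every underlying Statement.
    public static boolean kill(final String processId) throws SQLException {
        Collection<Statement> statements = PROCESS_STATEMENTS.get(processId);
        if (null == statements) {
            return false;  // the SQL is running on another instance
        }
        for (Statement each : statements) {
            each.cancel();  // delegates to java.sql.Statement#cancel()
        }
        return true;
    }

    public static void main(String[] args) throws SQLException {
        AtomicInteger cancelled = new AtomicInteger();
        // Fake Statement that just counts cancel() calls (demo only).
        Statement fake = (Statement) Proxy.newProxyInstance(Statement.class.getClassLoader(),
                new Class<?>[]{Statement.class},
                (proxy, method, methodArgs) -> {
                    if ("cancel".equals(method.getName())) {
                        cancelled.incrementAndGet();
                    }
                    return null;
                });
        PROCESS_STATEMENTS.put("82a67f25", List.of(fake, fake));
        System.out.println(kill("82a67f25") + " " + cancelled.get());  // true 2
    }
}
```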

&lt;h2&gt;
  
  
  3. Conclusion
&lt;/h2&gt;

&lt;p&gt;Currently, Apache ShardingSphere can only implement the &lt;code&gt;Show processlist&lt;/code&gt; and &lt;code&gt;Kill &amp;lt;processId&amp;gt;&lt;/code&gt; functions for &lt;a href="https://www.mysql.com/" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt; dialects.&lt;/p&gt;

&lt;p&gt;Once you get to know how they work, and if you’re interested, you‘re welcome to participate in the development of related functions. Our community is very open and anyone who is interested in contributing to open source code is welcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Relevant Links:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/" rel="noopener noreferrer"&gt;Apache ShardingSphere Official Website&lt;br&gt;
&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/apache/shardingsphere" rel="noopener noreferrer"&gt;Apache ShardingSphere GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://apacheshardingsphere.slack.com/" rel="noopener noreferrer"&gt;Apache ShardingSphere Slack Channel&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;

&lt;p&gt;Xu Yang, a middleware R&amp;amp;D engineer at &lt;a href="https://www.crunchbase.com/organization/servyou-group" rel="noopener noreferrer"&gt;Servyou Group&lt;/a&gt;. Responsible for the table and database sharding with massive data. An open source enthusiast and ShardingSphere contributor. Currently, he’s interested in developing the kernel module of the ShardingSphere project.&lt;/p&gt;

</description>
      <category>database</category>
      <category>opensource</category>
      <category>sql</category>
      <category>programming</category>
    </item>
    <item>
      <title>Apache ShardingSphere 5.2.0 is Released!</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Tue, 13 Sep 2022 02:45:10 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/apache-shardingsphere-520-is-released-h16</link>
      <guid>https://dev.to/apache_shardingsphere/apache-shardingsphere-520-is-released-h16</guid>
      <description>&lt;p&gt;Our new 5.2.0 release enhances features such as SQL audit, elastic migration, SQL execution process management, and data governance on the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Since &lt;a href="https://shardingsphere.apache.org/"&gt;Apache ShardingSphere&lt;/a&gt; released version 5.1.2 at the end of June, our community has continued to optimize and enhance its product features. The community merged 1,728 PRs from teams and individuals around the world. The resulting 5.2.0 release has been optimized in terms of its features, performance, testing, documentation, examples, etc.&lt;/p&gt;

&lt;p&gt;The establishment of the &lt;a href="https://github.com/apache/shardingsphere-on-cloud"&gt;shardingsphere-on-cloud&lt;/a&gt; sub-project shows ShardingSphere’s commitment to being cloud native. We welcome anyone interested in Go, database, and cloud to join the shardingsphere-on-cloud community.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 5.2.0 release brings the following highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SQL audit for data sharding.&lt;/li&gt;
&lt;li&gt;Elastic data migration.&lt;/li&gt;
&lt;li&gt;SQL execution process management.&lt;/li&gt;
&lt;li&gt;Shardingsphere-on-cloud sub-project goes live.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Newly added features, including SQL audit for data sharding and MySQL &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; &amp;amp; &lt;code&gt;KILL&lt;/code&gt;, can enhance users’ capability to manage ShardingSphere.&lt;/p&gt;

&lt;p&gt;The SQL audit feature allows users to manage SQL audit to prevent the business system from being disrupted by inefficient SQL. The MySQL &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; &amp;amp; &lt;code&gt;KILL&lt;/code&gt; feature allows users to quickly view the SQL in execution through the &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; statement, and forcibly cancel slow SQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The new version also supports elastic data migration. It supports the migration of data from &lt;a href="https://www.oracle.com/index.html"&gt;Oracle&lt;/a&gt;, &lt;a href="https://www.mysql.com/"&gt;MySQL&lt;/a&gt;, and &lt;a href="https://www.postgresql.org/"&gt;PostgreSQL&lt;/a&gt; to the distributed database ecosystem composed of ShardingSphere + MySQL or PostgreSQL, completing the transformation from a single database to a distributed one.&lt;/strong&gt; The ShardingSphere community will support more features for heterogeneous database migration in future releases. Stay tuned for more updates.&lt;/p&gt;

&lt;p&gt;The new version also transferred Helm Charts from the ShardingSphere repository to the &lt;strong&gt;shardingsphere-on-cloud sub-project, which is designed to provide distributed database solutions of ShardingSphere + MySQL or PostgreSQL on the cloud&lt;/strong&gt;. This version significantly improves SQL parsing support for different databases and upgrades &lt;a href="https://shardingsphere.apache.org/document/5.1.0/en/concepts/distsql/"&gt;DistSQL&lt;/a&gt;’s parameter usage specifications. It removes the Memory operating mode and supports distributed transactions across multiple logical databases. This post will introduce the updates of ShardingSphere 5.2.0.&lt;/p&gt;
&lt;h2&gt;
  
  
  Highlights
&lt;/h2&gt;
&lt;h3&gt;
  
  
  SQL audit for data sharding
&lt;/h3&gt;

&lt;p&gt;In large-scale data sharding scenarios, if a user executes an SQL query that carries no sharding condition, the query will be fully routed to all the underlying databases for execution.&lt;/p&gt;

&lt;p&gt;As a result, a large number of database connections will be occupied and the business will be severely affected by timeouts or other issues. If the user performs &lt;code&gt;UPDATE&lt;/code&gt;/&lt;code&gt;DELETE&lt;/code&gt; operations, a large amount of data may be incorrectly updated or deleted.&lt;/p&gt;

&lt;p&gt;In response to the above problems, ShardingSphere 5.2.0 provides SQL audit for data sharding and allows users to configure audit strategies. A strategy specifies one or more audit algorithms and whether the audit may be disabled with a SQL hint. If the SQL fails any audit algorithm, its execution is prohibited. The configuration of SQL audit for data sharding is as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
      auditStrategy:
        auditorNames:
          - sharding_key_required_auditor
        allowHintDisable: true
  defaultAuditStrategy:
    auditorNames:
      - sharding_key_required_auditor
    allowHintDisable: true
  auditors:
    sharding_key_required_auditor:
      type: DML_SHARDING_CONDITIONS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
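
&lt;p&gt;With the strategy above in place, a &lt;code&gt;t_order&lt;/code&gt; query that carries no sharding condition fails the &lt;code&gt;DML_SHARDING_CONDITIONS&lt;/code&gt; audit and is rejected, while one that includes the sharding key passes. A minimal sketch (the exact error message may vary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Rejected: no condition on the sharding column order_id, so the audit prohibits execution
SELECT * FROM t_order;

-- Allowed: the sharding key is present, so the query is routed to a single shard
SELECT * FROM t_order WHERE order_id = 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;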



&lt;p&gt;In view of complex business scenarios, the new feature allows users to dynamically disable the audit algorithm with a SQL hint, so that business SQL that is allowable in particular scenarios can still be executed. ShardingSphere 5.2.0 currently ships the built-in &lt;code&gt;DML_SHARDING_CONDITIONS&lt;/code&gt; algorithm, which prohibits DML statements that would be fully routed. Users can also implement the &lt;code&gt;ShardingAuditAlgorithm&lt;/code&gt; interface themselves to realize more advanced SQL audit functions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* ShardingSphere hint: disableAuditNames=sharding_key_required_auditor */ SELECT * FROM t_order;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Elastic data migration
&lt;/h3&gt;

&lt;p&gt;Data migration has always been a focus of the ShardingSphere community. Before 5.2.0, users needed to configure the existing table as a single sharding table and then modify the sharding rules to trigger the migration, which was too complex and difficult for ordinary users.&lt;/p&gt;

&lt;p&gt;To improve the ease of data migration, ShardingSphere 5.2.0 provides a new data migration feature, coupled with DistSQL for elastic migration. Users can migrate data from the existing single database to the distributed database system composed of ShardingSphere + MySQL or PostgreSQL in an SQL-like manner, achieving the transformation from a single database to a distributed one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yj44lb_7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0fml4h4oae970zmw5hm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yj44lb_7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0fml4h4oae970zmw5hm.png" alt="Image description" width="700" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The new feature is also capable of migrating Oracle data to PostgreSQL. Users first create the sharding rules and sharding tables through DistSQL, that is, the new distributed databases and tables, and then run &lt;code&gt;MIGRATE TABLE ds.schema.table INTO table&lt;/code&gt; to trigger data migration.&lt;/p&gt;

&lt;p&gt;During the migration process, users can also use the dedicated data migration DistSQL to manage the migration job status and check data consistency. For more information about the new feature, please refer to the official document [&lt;a href="https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/migration/"&gt;Data Migration&lt;/a&gt;].&lt;/p&gt;
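
&lt;p&gt;The migration flow can be sketched with DistSQL as follows. The &lt;code&gt;MIGRATE TABLE&lt;/code&gt; form is the one shown above; the job-management statements follow the 5.2.0 migration documentation, and the job ID is a placeholder returned by &lt;code&gt;SHOW MIGRATION LIST&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Trigger migration of the source table into the prepared sharding table
MIGRATE TABLE ds.schema.table INTO table;

-- Manage the running job with the dedicated data migration DistSQL
SHOW MIGRATION LIST;            -- list migration jobs and their IDs
SHOW MIGRATION STATUS 'jobId';  -- check the progress of one job
COMMIT MIGRATION 'jobId';       -- complete the job after data consistency is verified
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;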

&lt;h3&gt;
  
  
  SQL Execution Process Management
&lt;/h3&gt;

&lt;p&gt;The native MySQL database provides the &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; statement, allowing users to view the currently running threads. For SQL that takes too long, users can terminate it with the &lt;code&gt;KILL&lt;/code&gt; statement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ipoLGiIS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/64mk1qgyxm81dgunyp3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ipoLGiIS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/64mk1qgyxm81dgunyp3q.png" alt="Image description" width="700" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; and &lt;code&gt;KILL&lt;/code&gt; statements are widely used in daily operation and maintenance. To enhance users’ ability to manage ShardingSphere, version 5.2.0 supports the MySQL &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; and &lt;code&gt;KILL&lt;/code&gt; statements. When a user executes a DDL/DML statement through ShardingSphere, ShardingSphere automatically generates a unique UUID as the execution ID and stores the SQL execution information in each instance.&lt;/p&gt;

&lt;p&gt;The following figure shows the results of executing the &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; and &lt;code&gt;KILL&lt;/code&gt; statements in ShardingSphere. When a user executes the &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; statement, ShardingSphere processes the SQL execution information based on the current operating mode.&lt;/p&gt;

&lt;p&gt;If the current mode is cluster mode, ShardingSphere collects and synchronizes the SQL execution information of each compute node through the governance center and returns the summary to the user. If the current mode is standalone mode, ShardingSphere only returns the SQL execution information of the current compute node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The user determines whether to execute the &lt;code&gt;KILL&lt;/code&gt; statement based on the result returned by the &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt;, and ShardingSphere cancels the SQL in execution based on the ID in the &lt;code&gt;KILL&lt;/code&gt; statement.&lt;/strong&gt;&lt;/p&gt;
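
&lt;p&gt;The round trip can be sketched as follows; the ID is the UUID that ShardingSphere generates for the statement, taken from the &lt;code&gt;SHOW PROCESSLIST&lt;/code&gt; result rather than a native MySQL thread ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- List the SQL currently executing on the compute nodes
SHOW PROCESSLIST;

-- Cancel a slow statement by the Id column returned above
KILL &lt;id from SHOW PROCESSLIST&gt;;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;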

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sx6SkbeV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4fp1jr1m9f4d0ma9sqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sx6SkbeV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4fp1jr1m9f4d0ma9sqt.png" alt="Image description" width="700" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://github.com/apache/shardingsphere-on-cloud"&gt;Shardingsphere-on-cloud&lt;/a&gt; sub-project goes live
&lt;/h3&gt;

&lt;p&gt;Shardingsphere-on-cloud is a project of Apache ShardingSphere providing cloud-oriented solutions. Version 0.1.0 has been released and it has been officially voted as a sub-project of Apache ShardingSphere.&lt;/p&gt;

&lt;p&gt;Shardingsphere-on-cloud will continue to release various configuration templates, deployment scripts, and other automation tools for ShardingSphere on the cloud.&lt;/p&gt;

&lt;p&gt;It will also polish the engineering practices in terms of high availability, data migration, observability, shadow DB, security, and audit, optimize the delivery mode of Helm Charts, and continue to enhance its cloud-native management capabilities through a Kubernetes Operator. There are already introductory issues in the project repository to help those interested in Go, databases, and the cloud get up and running quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enhancement
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Kernel
&lt;/h3&gt;

&lt;p&gt;In this release, the ShardingSphere community optimized the SQL parsing capabilities for different databases, greatly improving ShardingSphere’s SQL compatibility.&lt;/p&gt;

&lt;p&gt;Detailed SQL parsing optimization can be seen in the update log section below. It’s a long-term mission for the ShardingSphere community to improve SQL parsing support. Anyone who is interested is welcome to work with us.&lt;/p&gt;

&lt;p&gt;Version 5.2.0 also supports the column-visible feature for MySQL, Oracle, &lt;a href="https://www.microsoft.com/en-us/sql-server/sql-server-downloads"&gt;SQLServer&lt;/a&gt;, and H2 databases, in a bid to meet the requirements of business SQL compatibility during a system upgrade. The read/write splitting feature supports the Cartesian product configuration, which greatly simplifies user configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Access Port
&lt;/h3&gt;

&lt;p&gt;In version 5.2.0, &lt;a href="https://shardingsphere.apache.org/document/current/en/quick-start/shardingsphere-proxy-quick-start/"&gt;ShardingSphere-Proxy&lt;/a&gt; can listen on specified IP addresses and integrates the openGauss database driver by default. &lt;a href="https://shardingsphere.apache.org/document/current/en/overview/#shardingsphere-jdbc"&gt;ShardingSphere-JDBC&lt;/a&gt; supports c3p0 data sources, and &lt;code&gt;Connection.prepareStatement&lt;/code&gt; can specify columns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distributed Transaction
&lt;/h3&gt;

&lt;p&gt;In terms of distributed transactions, the original logical database-level transaction manager is adjusted to a global manager, supporting distributed transactions across multiple logical databases.&lt;/p&gt;

&lt;p&gt;At the same time, the ability to control distributed transactions through XA statements was removed, as XA transactions are now automatically managed by ShardingSphere, simplifying operations for users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Update logs
&lt;/h2&gt;

&lt;p&gt;Below are all the update logs of ShardingSphere 5.2.0. To deliver a better user experience, this release adjusted the APIs of some features, as detailed in the API Changes section below.&lt;/p&gt;

&lt;h3&gt;
  
  
  New Feature
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support SQL audit for sharding feature&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support MySQL show processlist and kill process list id feature&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Add dedicated DistSQL for data migration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Basic support for migration of data to heterogeneous database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: New syntax &lt;code&gt;CREATE/ALTER/SHOW MIGRATION PROCESS CONFIGURATION&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: New syntax &lt;code&gt;ALTER MIGRATION PROCESS CONFIGURATION&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: New syntax &lt;code&gt;SHOW MIGRATION PROCESS CONFIGURATION&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: New syntax &lt;code&gt;ADD MIGRATION SOURCE RESOURCE&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: New syntax &lt;code&gt;SHOW SQL_TRANSLATOR RULE&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: New syntax &lt;code&gt;CREATE SHARDING AUDITOR&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: New syntax &lt;code&gt;ALTER SHARDING AUDITOR&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: New syntax &lt;code&gt;SHOW SHARDING AUDIT ALGORITHMS&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Enhancement
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support column visible feature for MySQL, Oracle, SQLServer and H2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support cartesian product configuration for read/write splitting&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support spring namespace and spring boot usage for sql translator&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support JSR-310 Year and Month in IntervalShardingAlgorithm&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support broadcast table update/delete limit statement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support create index on table(column) statement rewrite when config encrypts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support openGauss cursor, fetch, move, close statement for sharding and read/write splitting&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support encrypt column rewrite when execute column is null in predicate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support encrypt show create table return logic columns&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support create table with index statement rewrite when config encrypt&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL create operator statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL create materialized view statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL nested comments parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL alter subscription statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL create group statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL alter statistics statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL create foreign table statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL alter server statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL create foreign data wrapper statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL create event trigger statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL security label statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL reindex statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL reassign owned statement and refresh materialized view statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL prepare transaction statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL create collation statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL lock statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL alter rule statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL notify statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support PostgreSQL unlisten statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle alter function and alter hierarchy statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle alter pluggable database statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle alter materialized view log statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle alter diskgroup statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle alter operator statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle alter cluster statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle alter audit policy statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle alter index type statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle lock table statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle alter java statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Oracle inline constraint statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support openGauss geometric operator statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Optimize MySQL visible/invisible parse of create/alter table statements&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support scope of variable prefixed with @@ in MySQL SET statement parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support MySQL create procedure with create view parse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support column segments parse in create index on table statement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing ALTER LOCKDOWN PROFILE in Oracle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing ALTER MATERIALIZED VIEW in Oracle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing ALTER MATERIALIZED ZONEMAP in Oracle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing ALTER LIBRARY in Oracle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing ALTER INMEMORY JOIN GROUP in Oracle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing DROP OPERATOR in Oracle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing DROP RESTORE POINT in Oracle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing CREATE RESTORE POINT in Oracle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing DROP INMEMORY JOIN GROUP in Oracle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing create_bit_xor_table in MySQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing MySQL DO statement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing DropServer in openGauss&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing CREATE AGGREGATE In openGauss&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing ALTER ROUTINE in PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Add PostgreSQL Create Cast Statement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Add PostgreSQL Create Aggregate Statement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support fetch/move/close cursor statement in PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Parsing ALTER PUBLICATION in PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Add PostgreSQL Create Access Method Statement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support Parsing ALTER POLICY in PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support parsing ALTER OPERATOR in PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Add PostgreSQL Copy Statement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Add PostgreSQL Comment Statement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support listen statement in PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Support DECLARE cursor statement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Add default serverConfig in helm charts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Assemble openGauss JDBC Driver into Proxy distribution&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: ShardingSphere-Proxy listen on specified IP addresses&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Support COM_STMT_SEND_LONG_DATA in MySQL Proxy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: SELECT VERSION() support alias in MySQL Proxy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Fix openGauss Proxy could not be connected if no resource defined&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Support using JRE defined in JAVA_HOME in ShardingSphere-Proxy’s startup script&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Avoid client blocked when OOM occurred in ShardingSphere-Proxy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Support using c3p0 in ShardingSphere-JDBC&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Support SET NAMES with value quoted by double-quote&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Connection.prepareStatement with columns arguments is available in ShardingSphere-JDBC&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Improve MySQL connect and reconnect&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Fix MySQL json column may cause leak at incremental task&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Add permission check for PostgreSQL data sources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Incremental migration support for MySQL MGR mode&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Improve job progress persistence&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Start job DistSQL execute and return synchronously&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Inventory migration support table has primary key and unique key&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Close underlying ElasticJob when stopping job&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Improve logical replication slot name generation for PostgreSQL and openGauss&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Make query DistSQL could be executed when no database selected&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Add worker_id to the result set of &lt;code&gt;SHOW INSTANCE LIST&lt;/code&gt; &amp;amp; &lt;code&gt;SHOW INSTANCE INFO&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Improve the result of &lt;code&gt;EXPORT DATABASE CONFIG&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Support more databases for &lt;code&gt;FORMAT SQL&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Optimize the execution logic of &lt;code&gt;CREATE TRAFFIC RULE&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Add parameter &lt;code&gt;writeDataSourceQueryEnabled&lt;/code&gt; for RDL READWRITE_SPLITTING RULE.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Support &lt;code&gt;assistEncryptor&lt;/code&gt; for Encrypt RDL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Add sharding algorithm type check when &lt;code&gt;CREATE SHARDING TABLE RULE&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Support database discovery to configure multiple groups of high availability under the same logic database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Support ShardingSphere-Proxy to start up under empty logic library&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Support for isolating EventBus events by instance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Support the database to detect changes in the master node and restart the detection heartbeat task&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Support ShardingSphere-Proxy to generate new worker-id when re-registering in cluster mode&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Thrown exception when inserting expression value in shadow column on executing insert&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Support distributed transactions across multiple logical databases&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Support executing truncate in XA &amp;amp; PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Support alter local transaction rule with DistSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Support global transaction manager&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Delete support for branch transaction on proxy&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Bug Fix
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kernel: Fix single table metadata refresh error caused by filtering DataSourceContainedRule&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Fix parsing exception caused by the null value of MySQL blob type&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Fix PostgreSQL/openGauss reset statement parse error&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Fix wrong parameter rewrite when use sharding and encrypt&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Fix the failed conversion of Month related classes on IntervalShardingAlgorithm&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Fix NullPointerException when execute select union statement contains subquery&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Fix wrong encrypt rewrite result due to incorrect order of metadata&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Fix MySQL trim function parse error&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Fix MySQL insert values with _binary parse error&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Fix MySQL syntax error cannot be thrown to client&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Avoid EventLoop blocked because of closing JDBC resources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Correct server status flags returned by MySQL Proxy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Fix a possible connection leak issue if Proxy client disconnected in transaction&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Fixed a possible consistency issue with the statement being executed when the Proxy client is disconnected&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Avoid pooled connection polluted by executing SET statements&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Make SHOW TABLES FROM work in ShardingSphere-Proxy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Fix PostgreSQL DDL could not be executed by Extended Query&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Fix SHOW VARIABLES could not be executed in PostgreSQL Proxy without resource&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Fix FileNotFoundException when use ShardingSphere Driver with SpringBoot fatjar&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Fix the problem that the table contains both primary key and unique index at inventory migration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Improve incremental migration, support the latest position in the middle of batch insert event&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Fix the error caused by null field value in openGauss incremental migration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Fix incorrect strategy name in result of &lt;code&gt;SHOW SHARDING TABLE RULES&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Fix current rule config is modified in advance when &lt;code&gt;ALTER SHARDING TABLE RULE&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Fix connection leak when &lt;code&gt;ALTER RESOURCE&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Fix &lt;code&gt;CREATE TRAFFIC RULE&lt;/code&gt; failed when load balance algorithm is null&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Fix that the monitoring heartbeat task was not stopped when the database was discovered and the logical library was deleted&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Fix cluster mode ShardingSphere-JDBC load all logic database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Fix worker-id generated by SnowflakeKeyGenerateAlgorithm in cluster mode may exceed the maximum value&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Shadow DB: Fix &lt;code&gt;DistSQL&lt;/code&gt; adding shadow algorithm exception without shadow data source&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed transaction: Fix cross-database data source confusion caused by same data source name in multiple logical databases&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed transaction: Fix RUL DistSQL execution failure in transaction&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed transaction: Fix begin for PostgreSQL &amp;amp; openGauss&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Agent: Fixed the error of null value in contextManager when collecting metric data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Refactor
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kernel: ShardingSphere metadata refactoring for splitting actual metadata and logical metadata&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Use ConnectionContext, QueryContext to remove ThreadLocal in FetchOrderByValueQueuesHolder, TrafficContextHolder, SQLStatementDatabaseHolder and TransactionHolder&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Modify the default value of the ShardingSphere-Proxy version in the helm chart&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Docker container will exit if ShardingSphere-Proxy failed to startup&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access port: Helm Charts in ShardingSphere repository are transferred to sub-project shardingsphere-on-cloud&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Plenty of refactor for better code reuse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Add a new category named RUL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Refactor the schedule module and split it into cluster schedule and standalone schedule&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Remove memory mode, keep standalone mode and cluster mode&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Refactoring metadata table loading logic and persistence logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Refactoring distributed locks to retain the most concise interface design&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Testing: Refactor: Unify The Creation for Proxy Containers in IT from ENV Modules&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Testing: Refactor: Unify The Configuration for container created by testcontainer&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  API Changes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kernel: Remove SQL passthrough to data source feature&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Add new assistedQueryEncryptorName and remove QueryAssistedEncryptAlgorithm interface&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Refactor readwrite-splitting api to improve user experience&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Remove check-duplicate-table-enabled configuration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kernel: Remove the useless show-process-list-enabled configuration item&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Change keyword for part of data migration DistSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Redesign part of data migration DistSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Unify parameter type specification&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Split &lt;code&gt;SHOW INSTANCE MODE&lt;/code&gt; to &lt;code&gt;SHOW MODE INFO&lt;/code&gt; and &lt;code&gt;SHOW INSTANCE INFO&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Change &lt;code&gt;DROP SCALING jobId&lt;/code&gt; to &lt;code&gt;CLEAN MIGRATION jobId&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DistSQL: Remove &lt;code&gt;COUNT INSTANCE RULES&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: Database discovery (high availability) now supports taking all replica databases offline, with the primary database taking over read traffic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed governance: SnowflakeKeyGenerateAlgorithm supports configuring worker-id in standalone mode&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Shadow DB: Replace &lt;code&gt;sourceDataSourceName&lt;/code&gt; with &lt;code&gt;productionDataSourceName&lt;/code&gt; in Shadow API Configuration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authority: Remove deprecated native authority provider&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Relevant Links
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;a href="https://shardingsphere.apache.org/document/current/en/downloads/"&gt;Download Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere/blob/master/RELEASE-NOTES.md"&gt;Update Logs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://shardingsphere.apache.org/"&gt;Project Address&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/apache/shardingsphere-on-cloud"&gt;Cloud Sub-project Address&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Community Contribution
&lt;/h2&gt;

&lt;p&gt;The Apache ShardingSphere 5.2.0 release is the result of 1,728 merged PRs, committed by 64 Contributors. Thank you for your efforts!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TnntE7CL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89g0tt8ugm428lwl8rdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TnntE7CL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89g0tt8ugm428lwl8rdu.png" alt="Image description" width="700" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;

&lt;p&gt;Duan Zhengqiang is a senior middleware development engineer at &lt;a href="https://www.sphere-ex.com/en/"&gt;SphereEx&lt;/a&gt; and an Apache ShardingSphere PMC member.&lt;/p&gt;

&lt;p&gt;He started contributing to Apache ShardingSphere in 2018 and played a leading role in sharding practices dealing with massive data. With rich practical experience, he loves open source and is willing to contribute. He now focuses on the development of the &lt;a href="https://shardingsphere.apache.org/"&gt;Apache ShardingSphere&lt;/a&gt; kernel module.&lt;/p&gt;

</description>
      <category>database</category>
      <category>opensource</category>
      <category>cloud</category>
      <category>algorithms</category>
    </item>
    <item>
      <title>Apache ShardingSphere Enterprise User Case — Energy Monster</title>
      <dc:creator>Apache ShardingSphere</dc:creator>
      <pubDate>Tue, 06 Sep 2022 09:39:41 +0000</pubDate>
      <link>https://dev.to/apache_shardingsphere/apache-shardingsphere-enterprise-user-case-energy-monster-29cb</link>
      <guid>https://dev.to/apache_shardingsphere/apache-shardingsphere-enterprise-user-case-energy-monster-29cb</guid>
      <description>&lt;p&gt;&lt;a href="https://ir.enmonster.com/"&gt;Energy Monster&lt;/a&gt;’s application of &lt;a href="https://shardingsphere.apache.org/document/current/en/overview/#shardingsphere-jdbc"&gt;ShardingSphere-JDBC&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ir.enmonster.com/"&gt;Energy Monster&lt;/a&gt; is a consumer tech company&lt;br&gt;
with the mission to energize everyday life. The company is the largest provider of mobile device charging services in Asia.&lt;/p&gt;

&lt;p&gt;As the company’s business concurrency grows, the amount of data generated (users, orders, activities, etc.) increases every day. A traditional relational database has proven inadequate for supporting millions or tens of millions of rows in a single database or table.&lt;/p&gt;

&lt;p&gt;Performance could no longer meet the benchmark requirements of business development. Under these circumstances, data sharding is an effective way to solve the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technology selection
&lt;/h2&gt;

&lt;p&gt;Under the &lt;a href="https://shardingsphere.apache.org/"&gt;Database Plus&lt;/a&gt; concept, ShardingSphere is designed to build an ecosystem on top of heterogeneous databases. The goal is to provide globally scalable and enhanced computing capabilities while maximizing the original database computing capabilities.&lt;/p&gt;

&lt;p&gt;The interaction between applications and databases becomes oriented towards the Database Plus standard, therefore minimizing the impact of database fragmentation on upper-layer services.&lt;/p&gt;

&lt;p&gt;Within the ShardingSphere ecosystem, &lt;a href="https://shardingsphere.apache.org/document/current/en/overview/#shardingsphere-jdbc"&gt;ShardingSphere-JDBC&lt;/a&gt; is positioned as a lightweight Java framework, providing additional services in Java’s JDBC layer.&lt;/p&gt;

&lt;p&gt;It uses the client to connect directly to the database and provides services in the form of a jar package, without additional deployment or dependencies. It can be understood as an enhanced JDBC driver, fully compatible with JDBC and various ORM frameworks.&lt;/p&gt;

&lt;p&gt;ShardingSphere-JDBC coordinates reads and writes across the shards, so developers can focus on work outside the data layer instead of using business code to manually select databases and tables.&lt;/p&gt;

&lt;h2&gt;
  
  
  Business case
&lt;/h2&gt;

&lt;p&gt;UCS is Energy Monster’s user-centric service providing basic functionality for users on the server side. In 2018, it was carved out of the &lt;a href="https://www.php.net/"&gt;PHP&lt;/a&gt; server and moved to the Java technology stack as part of a move to microservices.&lt;/p&gt;

&lt;p&gt;This involved the design of new databases and tables, plus data cleaning and migration. The whole switchover process was expected to ensure the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Stability: a smooth release in a short time, without downtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accuracy: ensure accurate cleaning of tens of millions of data volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: solve the performance problems caused by increasing data volume and ensure scalability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Solutions to data cleansing and migration&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Initial data synchronization.&lt;/li&gt;
&lt;li&gt;The application’s server cuts off the entry (users).&lt;/li&gt;
&lt;li&gt;Data synchronization (updates and new users since the last time point).&lt;/li&gt;
&lt;li&gt;Data cleaning.&lt;/li&gt;
&lt;li&gt;User center release.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F4DXpAEh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2t8jnqgh5pfimhxa07id.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F4DXpAEh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2t8jnqgh5pfimhxa07id.png" alt="Image description" width="517" height="691"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Data sharding strategy&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The database adopts a sharded design, divided into 16 databases. The default shard key is &lt;code&gt;user_id&lt;/code&gt; and the default sharding strategy is &lt;code&gt;user_id&lt;/code&gt; mod 16, e.g. &lt;code&gt;${user_id % 16}&lt;/code&gt; for the user table. For SQL that does not carry the shard key, broadcast routing is used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DIaK1cel--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6nihbk4du8d9h1sd5sf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DIaK1cel--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6nihbk4du8d9h1sd5sf.png" alt="Image description" width="601" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;user_id&lt;/code&gt; is used as the shard key because it covers most business scenarios, while other candidate fields may be empty. In a local test, queries using non-shard-key fields (openId, mobile) took 50 ms to 200 ms.&lt;/p&gt;
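&lt;p&gt;The mod-16 routing described above can be sketched in plain Java. This is a simplified illustration only, not the actual ShardingSphere configuration; the &lt;code&gt;ds_&lt;/code&gt; database naming is hypothetical.&lt;/p&gt;

```java
public class ModShardingDemo {
    static final int DB_COUNT = 16;

    // Route a user_id to its database shard, mirroring ${user_id % 16}.
    public static String databaseFor(long userId) {
        return "ds_" + (userId % DB_COUNT);
    }

    public static void main(String[] args) {
        // Users whose ids differ by a multiple of 16 land on the same shard.
        System.out.println(databaseFor(5));   // ds_5
        System.out.println(databaseFor(21));  // ds_5
        System.out.println(databaseFor(16));  // ds_0
    }
}
```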

&lt;blockquote&gt;
&lt;p&gt;Using the sharding algorithm&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are currently three sharding algorithms available.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standard sharding algorithm. It corresponds to &lt;code&gt;StandardShardingAlgorithm&lt;/code&gt;, used for scenarios where a single column is the shard key, with operators such as =, IN, BETWEEN AND, &amp;gt;, &amp;lt;, &amp;gt;=, &amp;lt;=.&lt;/li&gt;
&lt;li&gt;Complex sharding algorithm. It corresponds to &lt;code&gt;ComplexKeysShardingAlgorithm&lt;/code&gt;, used for scenarios that use multi-key as the shard key. The logic with multiple shard keys is complex and requires developers to handle it by themselves.&lt;/li&gt;
&lt;li&gt;Hint sharding algorithm. It corresponds to &lt;code&gt;HintShardingAlgorithm&lt;/code&gt;, used for scenarios where Hint-based sharding is required.&lt;/li&gt;
&lt;/ul&gt;
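&lt;p&gt;At its core, a standard (precise) sharding algorithm picks, from the available target names, the one whose suffix matches the computed shard value. A dependency-free sketch of that selection logic follows; the method and class names are illustrative and do not reproduce the ShardingSphere 4.x SPI signature.&lt;/p&gt;

```java
import java.util.Arrays;
import java.util.List;

public class PreciseShardingSketch {
    // Mimics precise routing for "=" and "IN": choose the target whose
    // numeric suffix equals user_id % number-of-targets.
    public static String doSharding(List<String> availableTargets, long userId) {
        String suffix = String.valueOf(userId % availableTargets.size());
        return availableTargets.stream()
                .filter(t -> t.endsWith("_" + suffix))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no target for suffix " + suffix));
    }

    public static void main(String[] args) {
        List<String> dbs = Arrays.asList("ds_0", "ds_1", "ds_2", "ds_3");
        System.out.println(doSharding(dbs, 6)); // 6 % 4 = 2 -> ds_2
    }
}
```

&lt;p&gt;The complex and Hint algorithms differ only in where the shard value comes from: multiple columns, or a value supplied out-of-band by the caller.&lt;/p&gt;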

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nncNdMqL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5uv7u0nrglequ83gn4ky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nncNdMqL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5uv7u0nrglequ83gn4ky.png" alt="Image description" width="559" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Upgrading ShardingSphere-JDBC
&lt;/h2&gt;

&lt;p&gt;ShardingSphere-JDBC is used in multiple business scenarios, such as orders, inventory, and finance. By 2021, different R&amp;amp;D teams were using different versions of ShardingSphere-JDBC, ranging from 1.X to 4.X, which made unified maintenance difficult.&lt;/p&gt;

&lt;p&gt;Additionally, the earlier versions contained potential bugs and lacked some features. Based on requirements for unified management and availability, in April 2021 we launched a project to unify the ShardingSphere-JDBC versions used across the company and upgrade them to the stable 4.1.1 release.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The following problems were encountered during the upgrade:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. It takes a long time to start the service after the upgrade.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ShardingSphere-JDBC checks the metadata consistency of sub-tables when the service starts. The configuration item &lt;code&gt;max.connections.size.per.query&lt;/code&gt; (maximum number of connections that can be opened per query) is 1 by default. With a large number of tables, the loading process is slow. Refer to the connection pool configuration and raise this value to improve the loading speed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The query hangs when the sub-table query carries no shard key.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Toa9eHRp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/641dysf5zoizgr8bmjuq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Toa9eHRp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/641dysf5zoizgr8bmjuq.png" alt="Image description" width="560" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The logical SQL query does not specify a shard key, so the full database-and-table router broadcasts it to all tables.&lt;/p&gt;

&lt;p&gt;The configuration defines 108 actual tables in one database. With &lt;code&gt;maxConnectionSizePerQuery=50&lt;/code&gt;, ShardingSphere-JDBC uses the connection-limit mode, divides the query into three groups, and merges the results in memory. As a result, 36 database connections are required for a single query. But &lt;code&gt;maxActive&lt;/code&gt; in the &lt;a href="https://github.com/alibaba/druid"&gt;druid&lt;/a&gt; connection pool is set to 20, resulting in a deadlock.&lt;/p&gt;
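&lt;p&gt;The arithmetic behind this deadlock can be checked directly. In connection-limit mode the tables are split into ceil(tables / maxConnectionSizePerQuery) groups, so one query still needs ceil(tables / groups) connections per database. This is a back-of-the-envelope model of the behaviour described above, not the exact executor code.&lt;/p&gt;

```java
public class ConnectionMathDemo {
    // Number of groups created in connection-limit mode.
    public static int groups(int tables, int maxConnectionSizePerQuery) {
        return (tables + maxConnectionSizePerQuery - 1) / maxConnectionSizePerQuery;
    }

    // Connections actually needed by one query in one database.
    public static int connectionsNeeded(int tables, int maxConnectionSizePerQuery) {
        int g = groups(tables, maxConnectionSizePerQuery);
        return (tables + g - 1) / g;
    }

    public static void main(String[] args) {
        int tables = 108, maxPerQuery = 50, druidMaxActive = 20;
        int needed = connectionsNeeded(tables, maxPerQuery);
        System.out.println("groups: " + groups(tables, maxPerQuery));       // 3
        System.out.println("connections per query: " + needed);             // 36
        System.out.println("exceeds maxActive: " + (needed > druidMaxActive)); // true -> deadlock risk
    }
}
```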

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hqb5-03s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ydv03pncno3wsd23y7r1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hqb5-03s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ydv03pncno3wsd23y7r1.png" alt="Image description" width="700" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UPZjf2PB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61blt38kcoukha8aguob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UPZjf2PB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61blt38kcoukha8aguob.png" alt="Image description" width="700" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xno96kcF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2c2w85mdryc53gh4ope5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xno96kcF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2c2w85mdryc53gh4ope5.png" alt="Image description" width="700" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--moeVCmnN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/568lubr5zg52ropjw374.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--moeVCmnN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/568lubr5zg52ropjw374.png" alt="Image description" width="700" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Combine &lt;code&gt;check.table.metadata.enabled=true&lt;/code&gt; (check the metadata consistency of sub-tables at startup) with a properly configured &lt;code&gt;maxConnectionSizePerQuery&lt;/code&gt; (maximum number of connections that can be opened per query).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;maxConnectionSizePerQuery&lt;/code&gt; should be less than the maximum number of active connections configured in the druid connection pool.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. After upgrading from 1.X, an error message “Cannot update Sharding key” is displayed in SQL execution, and the actual shard key value is not updated.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To avoid query failures caused by changing a shard key value, the 4.X version adds shard key detection to &lt;code&gt;update&lt;/code&gt; statements. The error can be fixed in the following ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove the shard key from the update statement.&lt;/li&gt;
&lt;li&gt;Add the shard key to the &lt;code&gt;where&lt;/code&gt; clause as well.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Startup fails when using &lt;code&gt;druid-spring-boot-starter&lt;/code&gt;, which is incompatible with the ShardingSphere data source.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The druid connection pool starter loads and creates a default data source, which conflicts with the data sources created by ShardingSphere-JDBC. A common workaround is to use the plain druid dependency instead of the starter, or to exclude the starter's auto-configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. &lt;code&gt;inline strategy&lt;/code&gt; reports an error in range query.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;inline strategy&lt;/code&gt; doesn't support range queries by default, so the &lt;code&gt;standard strategy&lt;/code&gt; is advised. Add the following configuration if the &lt;code&gt;inline strategy&lt;/code&gt; must handle range queries.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;spring.shardingsphere.props.allow.range.query.with.inline.sharding: true&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; with this setting, every &lt;code&gt;inline strategy&lt;/code&gt; range query is broadcast to each sub-table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. The “Cannot find owner from table” error is reported.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SQL (simplified):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;select id from (select id from x) as a group by a.id&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The 4.X version has limited sub-query support. This problem is caused by the alias of the intermediate table: remove the table alias from the &lt;code&gt;select&lt;/code&gt;, &lt;code&gt;group by&lt;/code&gt;, &lt;code&gt;order by&lt;/code&gt;, and other fields.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/apache/shardingsphere/issues/4810"&gt;https://github.com/apache/shardingsphere/issues/4810&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;7. The table’s primary key conflicts with the primary key generated by the &lt;a href="https://programming.vip/docs/overview-of-snowflake-algorithm.html"&gt;SNOWFLAKE&lt;/a&gt; algorithm.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ShardingSphere provides flexible ways to configure distributed primary key generation strategies. In the sharding rule configuration module, you can configure the primary key generation strategy for each table.&lt;/p&gt;

&lt;p&gt;By default, the &lt;a href="https://programming.vip/docs/overview-of-snowflake-algorithm.html"&gt;snowflake&lt;/a&gt; algorithm is used to generate 64-bit long integers. The snowflake generator needs to be configured with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;spring.shardingsphere.sharding.tables.x.key-generator.props.worker.id = ${dcc.node.id}&lt;/code&gt;&lt;/p&gt;
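&lt;p&gt;The 64-bit snowflake id is commonly laid out as 41 bits of timestamp, 10 bits of worker id, and 12 bits of sequence, which is why each instance needs a unique &lt;code&gt;worker.id&lt;/code&gt;. The sketch below shows the classic layout; treat the exact bit widths as illustrative rather than a guarantee about the generator's internals.&lt;/p&gt;

```java
public class SnowflakeLayoutDemo {
    static final int WORKER_ID_BITS = 10;
    static final int SEQUENCE_BITS = 12;

    // Compose a 64-bit id: | timestamp (41) | worker id (10) | sequence (12) |
    public static long compose(long timestampMillis, long workerId, long sequence) {
        return (timestampMillis << (WORKER_ID_BITS + SEQUENCE_BITS))
                | (workerId << SEQUENCE_BITS)
                | sequence;
    }

    // Extract the worker id back out of an id.
    public static long workerIdOf(long id) {
        return (id >> SEQUENCE_BITS) & ((1L << WORKER_ID_BITS) - 1);
    }

    public static void main(String[] args) {
        long id = compose(1_234_567L, 7, 42);
        // Two nodes with different worker ids can never produce the same id,
        // which is why worker.id must be unique per service instance.
        System.out.println(workerIdOf(id)); // 7
    }
}
```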

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tRDfutK9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbopmsazvtztkb0p2m0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tRDfutK9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbopmsazvtztkb0p2m0z.png" alt="Image description" width="560" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The company uses the &lt;a href="https://www.apolloconfig.com/#/"&gt;apollo&lt;/a&gt; configuration center to deliver each service instance's node id. The service uses multiple data sources; if the sharding configuration is loaded from a YAML file, the &lt;code&gt;workId&lt;/code&gt; cannot be loaded into the sharding configuration items automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SRM-0GJ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mfxy2xgsl6w0bwgi0hqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRM-0GJ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mfxy2xgsl6w0bwgi0hqv.png" alt="Image description" width="558" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use a custom key generator type based on the built-in &lt;code&gt;SnowflakeShardingKeyGenerator&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If the primary key is also used as a shard key, configure &lt;code&gt;max.vibration.offset&lt;/code&gt; based on the number of shards to widen the vibration range of the generated ids.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q56faD7u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7jo25gkyz57kyttllvsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q56faD7u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7jo25gkyz57kyttllvsd.png" alt="Image description" width="670" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. The 3.X version reports an error when &lt;code&gt;CASE WHEN&lt;/code&gt; statement is executed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, the 3.X and 4.X versions don’t support the &lt;code&gt;case when&lt;/code&gt; statement.&lt;/p&gt;

&lt;p&gt;The 3.X and 4.X versions use different logic when parsing the shard key of an &lt;code&gt;update&lt;/code&gt; statement containing &lt;code&gt;case when&lt;/code&gt;. The 4.X &lt;code&gt;parserEngine.parse&lt;/code&gt; method ignores the parameters inside &lt;code&gt;case when&lt;/code&gt;, so the parsed parameters become inconsistent with the external parameter list, and SQL that executed normally on 3.X now fails.&lt;/p&gt;

&lt;p&gt;The 3.X version works correctly because, when the SQL was written, the first parameter of &lt;code&gt;case when&lt;/code&gt; was intentionally set to the shard key and the &lt;code&gt;case when&lt;/code&gt; clause was placed first.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/apache/shardingsphere/issues/13233"&gt;https://github.com/apache/shardingsphere/issues/13233&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since &lt;code&gt;case when&lt;/code&gt; is not supported, rewriting the SQL is recommended.&lt;/li&gt;
&lt;li&gt;To match the shard key parsing logic of version 4.1.1, place the &lt;code&gt;case when&lt;/code&gt; clause at the end and keep the shard key as its first parameter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;9. The logical table's &lt;code&gt;actualDataNodes&lt;/code&gt; is configured, but a "no default value" error is reported for the primary key.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HQY7jUaa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7tpr0fitx420itauw5f8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HQY7jUaa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7tpr0fitx420itauw5f8.png" alt="Image description" width="560" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TJgLU0ua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pprbsvn5n8tdrm2mezs7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TJgLU0ua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pprbsvn5n8tdrm2mezs7.png" alt="Image description" width="560" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;check.table.metadata.enabled=true&lt;/code&gt; is not configured for the service, so the metadata consistency of sub-tables is not checked by default.&lt;/p&gt;

&lt;p&gt;The first table of the &lt;code&gt;actualDataNodes&lt;/code&gt; configured by the service does not exist, resulting in an empty &lt;code&gt;GenerateKeyContext&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6QPyzJCr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a75kpppwhniaubgw3iti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6QPyzJCr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a75kpppwhniaubgw3iti.png" alt="Image description" width="560" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bydfluKQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cw16ejpxmwvcva9bd9t4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bydfluKQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cw16ejpxmwvcva9bd9t4.png" alt="Image description" width="560" height="56"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure &lt;code&gt;check.table.metadata.enabled=true&lt;/code&gt; so that a non-existent table is detected at startup and an error is reported.&lt;/li&gt;
&lt;li&gt;Rewrite the &lt;code&gt;actualDataNodes&lt;/code&gt; inline expression to make sure that the first table exists.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;10. In version 3.0, a deadlock occurs under high concurrency with the full database-and-table router.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ShardingSphere-JDBC uses local transactions by default. In a local transaction, database connections are acquired asynchronously; under high concurrency, some queries may be unable to acquire all the connections they need, resulting in a deadlock.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sWO5OVZr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5eo47dqy991nmvu048k8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sWO5OVZr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5eo47dqy991nmvu048k8.png" alt="Image description" width="560" height="41"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d3UGzvmp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hdnnfskjri9un1wj6n3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d3UGzvmp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hdnnfskjri9un1wj6n3h.png" alt="Image description" width="560" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xUae_hal--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y8fpd887zvkg5oovt4hm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xUae_hal--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y8fpd887zvkg5oovt4hm.png" alt="Image description" width="560" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FsG-WL2h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzqw4c89j1mplfmwfxz0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FsG-WL2h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzqw4c89j1mplfmwfxz0.png" alt="Image description" width="560" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As a &lt;a href="https://shardingsphere.apache.org/"&gt;ShardingSphere&lt;/a&gt; core user, &lt;a href="https://ir.enmonster.com/"&gt;Energy Monster&lt;/a&gt;’s upgrade process also reflects some problems that community users may encounter in the application of ShardingSphere.&lt;/p&gt;

&lt;p&gt;Currently, Apache ShardingSphere’s stable version has been updated to 5.1.2 and has been optimized in terms of its functions, performance, testing, documentation, and examples.&lt;/p&gt;

&lt;p&gt;You can refer to &lt;a href="https://shardingsphere.apache.org/"&gt;Apache ShardingSphere’s official website&lt;/a&gt; for more information. If you have any questions or suggestions, you are also welcome to give feedback on &lt;a href="https://github.com/apache/shardingsphere"&gt;Github&lt;/a&gt;. The community will actively respond and discuss.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Links:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/apache/shardingsphere/issues?page=1&amp;amp;q=is%3Aopen+is%3Aissue+label%3A%22project%3A+OpenForce+2022%22"&gt;ShardingSphere Github&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/ShardingSphere"&gt;ShardingSphere Twitter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://join.slack.com/t/apacheshardingsphere/shared_invite/zt-sbdde7ie-SjDqo9~I4rYcR18bq0SYTg"&gt;ShardingSphere Slack&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/community/cn/contribute/"&gt;Contributor Guide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/apache/shardingsphere/issues"&gt;GitHub Issues&lt;br&gt;
&lt;/a&gt;&lt;br&gt;
&lt;a href="https://shardingsphere.apache.org/community/en/contribute/"&gt;Contributor Guide&lt;/a&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>java</category>
      <category>opensource</category>
      <category>saas</category>
    </item>
  </channel>
</rss>
