In this series I'm testing the most basic SQL features using the EMP/DEPT schema, because you would be surprised how many NewSQL databases do not support them. Here is my first test with MariaDB Xpand, which does better on referential integrity.
I have created a 3-node cluster.
I connect from a MySQL client:
mysql --host xpand-db00008262.mdb0002418.db1.skysql.net --port 5001 --user DB00008262 --default-character-set=utf8 -A -D test -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 291
Server version: 5.0.45-Xpand-6.0.3.1
Copyright (c) 2000, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
I start by changing the password:
mysql> ALTER USER 'DB00008262'@'%' IDENTIFIED BY "newpass";
MariaDB is based on MySQL, so I will run the same DDL as in the first post of this series, but with auto_increment, as Xpand doesn't support generated always as identity:
CREATE TABLE dept (
deptno integer NOT NULL,
dname text,
loc text,
description text,
CONSTRAINT pk_dept PRIMARY KEY (deptno asc)
);
CREATE TABLE emp (
empno integer NOT NULL auto_increment,
ename text NOT NULL,
job text,
mgr integer,
hiredate date,
sal integer,
comm integer,
deptno integer NOT NULL,
email text,
other_info json,
CONSTRAINT pk_emp PRIMARY KEY (empno),
CONSTRAINT emp_email_uk UNIQUE (email),
CONSTRAINT fk_deptno FOREIGN KEY (deptno) REFERENCES dept(deptno),
CONSTRAINT fk_mgr FOREIGN KEY (mgr) REFERENCES emp(empno)
);
I got the following error for email text:
ERROR 1170 (HY000): [12299] Invalid blob/text index specification: BLOB/TEXT column "email" used in key specification without a key length
That's already good news compared to the previous MySQL-compatible database I tested: the error message is clear. Let's replace text with varchar(90):
CREATE TABLE emp (
empno integer NOT NULL auto_increment,
ename text NOT NULL,
job text,
mgr integer,
hiredate date,
sal integer,
comm integer,
deptno integer NOT NULL,
email varchar(90),
other_info json,
CONSTRAINT pk_emp PRIMARY KEY (empno),
CONSTRAINT emp_email_uk UNIQUE (email),
CONSTRAINT fk_deptno FOREIGN KEY (deptno) REFERENCES dept(deptno),
CONSTRAINT fk_mgr FOREIGN KEY (mgr) REFERENCES emp(empno)
);
This works and I'm now inserting the 4 departments and 14 employees:
INSERT INTO dept (deptno, dname, loc, description)
VALUES (10, 'ACCOUNTING', 'NEW YORK','preparation of financial statements, maintenance of general ledger, payment of bills, preparation of customer bills, payroll, and more.'),
(20, 'RESEARCH', 'DALLAS','responsible for preparing the substance of a research report or security recommendation.'),
(30, 'SALES', 'CHICAGO','division of a business that is responsible for selling products or services'),
(40, 'OPERATIONS', 'BOSTON','administration of business practices to create the highest level of efficiency possible within an organization');
INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno, email, other_info)
VALUES (7369, 'SMITH', 'CLERK', 7902, '1980-12-17', 800, NULL, 20,'SMITH@acme.com', '{"skills":["accounting"]}'),
(7499, 'ALLEN', 'SALESMAN', 7698, '1981-02-20', 1600, 300, 30,'ALLEN@acme.com', null),
(7521, 'WARD', 'SALESMAN', 7698, '1981-02-22', 1250, 500, 30,'WARD@compuserve.com', null),
(7566, 'JONES', 'MANAGER', 7839, '1981-04-02', 2975, NULL, 20,'JONES@gmail.com', null),
(7654, 'MARTIN', 'SALESMAN', 7698, '1981-09-28', 1250, 1400, 30,'MARTIN@acme.com', null),
(7698, 'BLAKE', 'MANAGER', 7839, '1981-05-01', 2850, NULL, 30,'BLAKE@hotmail.com', null),
(7782, 'CLARK', 'MANAGER', 7839, '1981-06-09', 2450, NULL, 10,'CLARK@acme.com', '{"skills":["C","C++","SQL"]}'),
(7788, 'SCOTT', 'ANALYST', 7566, '1982-12-09', 3000, NULL, 20,'SCOTT@acme.com', '{"cat":"tiger"}'),
(7839, 'KING', 'PRESIDENT', NULL, '1981-11-17', 5000, NULL, 10,'KING@aol.com', null),
(7844, 'TURNER', 'SALESMAN', 7698, '1981-09-08', 1500, 0, 30,'TURNER@acme.com', null),
(7876, 'ADAMS', 'CLERK', 7788, '1983-01-12', 1100, NULL, 20,'ADAMS@acme.org', null),
(7900, 'JAMES', 'CLERK', 7698, '1981-12-03', 950, NULL, 30,'JAMES@acme.org', null),
(7902, 'FORD', 'ANALYST', 7566, '1981-12-03', 3000, NULL, 20,'FORD@acme.com', '{"skills":["SQL","CQL"]}'),
(7934, 'MILLER', 'CLERK', 7782, '1982-01-23', 1300, NULL, 10,'MILLER@acme.com', null);
ERROR 1452 (HY000): [7168] Foreign key constraint violation on insert: Cannot add or update a child row: foreign key "fk_mgr" violated by: (mgr)=(7902)
This error is already good news, given that the previous databases I tested simply ignored the foreign key. However, it seems that the constraint is checked per row rather than per statement: the first row, SMITH, references FORD (empno 7902) as manager, and FORD's row comes later in the VALUES list. The whole set is consistent, but only when taken atomically.
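Note that the per-row check also means that inserting the rows in dependency order should pass: managers before their reports. A sketch of that alternative, which I didn't run (my assumption about the per-row behavior):
-- KING has no manager and can go first:
INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno, email, other_info)
VALUES (7839, 'KING', 'PRESIDENT', NULL, '1981-11-17', 5000, NULL, 10,'KING@aol.com', null);
-- then the managers reporting to KING (7566, 7698, 7782),
-- then the analysts (7788, 7902), and finally the clerks and salesmen.
Rather than re-ordering the whole insert, I'll take a simpler route.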
As a workaround, I'll drop the Foreign Key:
mysql> alter table emp drop constraint fk_mgr;
ERROR 1 (HY000): [12291] Invalid foreign key specification encountered in DDL statement: Cannot drop index 'fk_mgr' ('mgr'): needed in a foreign key constraint 'fk_mgr' on 'emp'
Ok, I didn't know that drop constraint is not the right way to drop a constraint here, but I can drop foreign key:
mysql> alter table emp drop foreign key fk_mgr;
Query OK, 0 rows affected (0.06 sec)
Then, the insert works:
Query OK, 14 rows affected (0.04 sec)
and I can enable the constraint again:
mysql> alter table emp add CONSTRAINT fk_mgr FOREIGN KEY (mgr) REFERENCES emp(empno);
Query OK, 0 rows affected (0.27 sec)
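To verify that the constraint is enforced again, a quick negative test (my addition, not from the original session) is to reference a manager that doesn't exist:
insert into emp(ename, deptno, mgr) values ('NOBODY', 40, 9999);
-- expected: ERROR 1452 ... foreign key "fk_mgr" violated by: (mgr)=(9999)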
Testing Foreign Key
If I delete a department which has child rows, an error is raised:
mysql> delete from dept where deptno=10;
ERROR 1451 (HY000): [7169] Foreign key constraint violation on delete: Cannot delete or update a parent row: foreign key "fk_deptno" violated
mysql>
This works as expected. And the error message is clear.
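If cascading deletes were wanted instead, the foreign key could be declared with a referential action. This is the standard syntax; I have not tested whether Xpand accepts it:
CONSTRAINT fk_deptno FOREIGN KEY (deptno)
  REFERENCES dept(deptno) ON DELETE CASCADE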
Now testing concurrent transactions.
On session 1:
mysql> start transaction;
Query OK, 0 rows affected (0.02 sec)
mysql> insert into emp(deptno, ename) values (40, 'Franck');
Query OK, 1 row affected (0.03 sec)
On session 2:
mysql> delete from dept where deptno=40;
This waits, which is normal: whether the delete can proceed depends on whether session 1 commits its new employee in department 40.
Session 1:
mysql> rollback;
Query OK, 0 rows affected (0.02 sec)
Session 2:
mysql> delete from dept where deptno=40;
Query OK, 1 row affected (45.43 sec)
Ok, this works as expected: once session 1 rolled back its insert, the delete could proceed. Pessimistic locking with two-phase commit, I guess.
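For completeness, here is the other interleaving, with my expectation of the outcome (I did not run this one): if session 1 commits its insert instead of rolling back, session 2 should be unblocked and then fail on the foreign key, like the delete of department 10 above.
-- Session 1:
commit;
-- Session 2, once unblocked, should then get something like:
-- ERROR 1451 (HY000): ... foreign key "fk_deptno" violated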
Testing serializable
First session:
mysql> set transaction isolation level serializable;
Query OK, 0 rows affected (0.01 sec)
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> insert into emp(ename, deptno) values ('Franck',40);
Query OK, 1 row affected (0.03 sec)
While the transaction is running, I open a concurrent session:
mysql> set transaction isolation level serializable;
Query OK, 0 rows affected (0.01 sec)
mysql> start transaction;
Query OK, 0 rows affected (0.01 sec)
mysql> SELECT @@TX_ISOLATION;
+----------------+
| @@TX_ISOLATION |
+----------------+
| SERIALIZABLE |
+----------------+
1 row in set (0.00 sec)
mysql> select count(*) from emp where deptno=40;
The second session waits. Of course, Serializable can be implemented with heavy locks, but that doesn't scale. Here I'm just reading and expect MVCC (Multi-Version Concurrency Control) non-blocking reads.
But the documentation is clear that "The SERIALIZABLE transaction isolation level is not currently available to end user transactions."
After a while, my SELECT returned:
mysql> select count(*) from emp where deptno=40;
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row in set (1 min 42.92 sec)
mysql>
but the reason is that my first session was terminated:
mysql> select * from temps;
ERROR 1927 (HY000): Lost connection to backend server: connection closed by peer (@@Xpand-Monitor:node-2)
mysql>
Actually, I'm not even sure the wait is a lock wait, because with a single session I get:
mysql> set transaction isolation level serializable;
Query OK, 0 rows affected (0.00 sec)
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from dept where deptno=40;
ERROR 1205 (40000): [30720] MVCC serializable conflict: This transaction conflicted with a serializable transaction: `test`.`dept` Primary key: (40); try restarting transaction
mysql>
mysql> drop table emp;
Query OK, 0 rows affected (0.06 sec)
mysql> drop table dept;
Query OK, 0 rows affected (0.04 sec)
mysql>
I dropped the tables to be sure that they were not locked, and re-created them with the same DDL and rows before continuing.
Distribution
On my 3-node cluster, the table has been created with 3 shards, visible here as Slices:
mysql> show create table dept;
CREATE TABLE `dept` (
`deptno` int(11) not null,
`dname` text CHARACTER SET utf8,
`loc` text CHARACTER SET utf8,
`description` text CHARACTER SET utf8,
PRIMARY KEY (`deptno`) /*$ DISTRIBUTE=1 */
) CHARACTER SET utf8 /*$ SLICES=3 */;
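Those attributes can also be set explicitly. For example, from my reading of the Xpand documentation (not verified on this trial cluster), the number of slices can be changed per table:
alter table emp slices=6;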
I create an index on hiredate, analyze the tables, and look at the execution plan:
mysql> create index emp_hiredate on emp(hiredate);
Query OK, 0 rows affected (0.25 sec)
mysql> analyze table dept;
+-----------+---------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+-----------+---------+----------+----------+
| test.dept | analyze | status | OK |
+-----------+---------+----------+----------+
1 row in set (0.01 sec)
mysql> analyze table emp;
+----------+---------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+----------+---------+----------+----------+
| test.emp | analyze | status | OK |
+----------+---------+----------+----------+
1 row in set (0.01 sec)
mysql> explain select * from dept natural join emp where emp.hiredate>now() order by hiredate;
+----------------------------------------------------------------------------------+-----------+-----------+
| Operation | Est. Cost | Est. Rows |
+----------------------------------------------------------------------------------+-----------+-----------+
| sigma_sort KEYS=[(2 . "hiredate") ASC] | 92.23 | 14.00 |
| nljoin | 45.63 | 14.00 |
| filter (2.hiredate > cast(datetime_trunc(current_timestamp(), 0), datetime)) | 13.43 | 14.00 |
| stream_combine | 13.15 | 14.00 |
| index_scan 2 := emp.__idx_emp__PRIMARY | 10.75 | 14.00 |
| index_scan 1 := dept.__idx_dept__PRIMARY, deptno = 2.deptno | 2.30 | 1.00 |
+----------------------------------------------------------------------------------+-----------+-----------+
6 rows in set (0.03 sec)
That's interesting. From what I've read in the documentation, stream_combine is where the operation (the index scan here) is distributed. Here the whole emp table is read from all slices, then filtered on hiredate. This is not optimal, probably because the index itself is hash-sharded. Let's re-create it as ASCending:
mysql> alter table emp drop index emp_hiredate;
Query OK, 0 rows affected (0.07 sec)
mysql> create index emp_hiredate on emp(hiredate asc);
Query OK, 0 rows affected (0.23 sec)
mysql> explain select * from dept natural join emp where emp.hiredate>now() order by hiredate;
+-----------------------------------------------------------------------------------------------------------+-----------+-----------+
| Operation | Est. Cost | Est. Rows |
+-----------------------------------------------------------------------------------------------------------+-----------+-----------+
| nljoin | 89.41 | 14.00 |
| msjoin KEYS=[(2 . "hiredate") ASC] | 57.21 | 14.00 |
| stream_merge KEYS=[(2 . "hiredate") ASC] | 17.21 | 14.00 |
| index_scan 2 := emp.emp_hiredate, hiredate > cast(datetime_trunc(current_timestamp(), 0), datetime) | 9.41 | 14.00 |
| index_scan 2 := emp.__idx_emp__PRIMARY, empno = 2.empno | 2.30 | 1.00 |
| index_scan 1 := dept.__idx_dept__PRIMARY, deptno = 2.deptno | 2.30 | 1.00 |
+-----------------------------------------------------------------------------------------------------------+-----------+-----------+
6 rows in set (0.01 sec)
This looks better, with the filter pushed down: the index is read and filtered, then joined to the emp table with a sort-merge join (msjoin) on the primary key, and then to dept with nested loops.
As dept is quite static, I can decide to broadcast it to all nodes:
mysql> alter table dept replicas=allnodes;
Query OK, 0 rows affected (0.22 sec)
mysql> explain select * from dept natural join emp where emp.hiredate>now() order by hiredate;
+--------------------------------------------------------------------------------------+-----------+-----------+
| Operation | Est. Cost | Est. Rows |
+--------------------------------------------------------------------------------------+-----------+-----------+
| sigma_sort KEYS=[(2 . "hiredate") ASC] | 44.16 | 4.71 |
| nljoin | 15.21 | 4.71 |
| index_scan 1 := dept.__idx_dept__PRIMARY | 2.30 | 1.01 |
| stream_combine | 12.78 | 4.67 |
| filter (1.deptno = 2.deptno) | 11.31 | 4.67 |
| filter (2.hiredate > cast(datetime_trunc(current_timestamp(), 0), datetime)) | 11.03 | 14.00 |
| index_scan 2 := emp.__idx_emp__PRIMARY | 10.75 | 14.00 |
+--------------------------------------------------------------------------------------+-----------+-----------+
7 rows in set (0.02 sec)
Here dept is local and is the driving table, with a nested loop to the distributed emp. I would expect a hash join here, but according to the documentation, hash_join doesn't fit the streaming model of Xpand. I would love to test the performance of those nested loops to remote nodes, the stream_combine and stream_merge operations, but Xpand is proprietary software and the free trial is not sufficient for that.