Planet MySQL

Planet MySQL - https://planet.mysql.com
  1. Codership is proud to announce the first Generally Available (GA) release of Galera Cluster 4 for MySQL 8, which improves MySQL high availability a great deal. The release ships with MySQL 8.0.19 and includes the Galera Replication Library 4.5 with wsrep API version 26. You can download it now (packages are available for various Linux distributions).

Galera 4 and MySQL 8.0.19 bring many new features; here are some of the highlights:

- Streaming replication to support large transactions: a large transaction is split into smaller fragments that are replicated and applied as the transaction progresses. You can use this feature to load data faster, as data is written to all nodes simultaneously (or not at all if any single node fails).
- Improved foreign key support: the write set certification rules have been optimised, reducing the number of foreign-key-related false conflicts during certification.
- Group commit is supported and integrated with the native MySQL 8 binary log group commit code. The commit-time concurrency controls were reworked so that the commit monitor is released as soon as the commit has been queued for a group commit. This allows transactions to be committed in groups while still respecting the sequential commit order.
- New system tables for Galera Cluster have been added to the mysql database: wsrep_cluster, wsrep_cluster_members and wsrep_streaming_log. You can now view cluster membership via system tables.
- New synchronization functions help applications implement read-your-writes and monotonic-reads consistency guarantees: WSREP_LAST_SEEN_GTID(), WSREP_LAST_WRITTEN_GTID() and WSREP_SYNC_WAIT_UPTO_GTID(). (A short usage sketch follows this announcement.)
- The resiliency of Galera Cluster against poor network conditions has been improved, as has the handling of irrecoverable errors: a node will always attempt to leave the cluster gracefully if it cannot recover from an error without sacrificing data consistency. This will help your geo-distributed multi-master MySQL clusters tremendously.
- The system variables wsrep_preordered and wsrep_mysql_replication_bundle are deprecated.

We are pleased to offer packages for CentOS 7, CentOS 8, Ubuntu 18.04, Ubuntu 20.04, Debian 10, OpenSUSE 15, and SUSE Linux Enterprise (SLES) 15 SP1. Installation instructions are similar to previous releases of Galera Cluster.

In addition to the release, we are running a webinar to introduce it. Join us for the Galera Cluster 4 for MySQL 8 Release Webinar on Thursday, June 4, at 9-10 AM PDT or 2-3 PM EEST (yes, we are running two separate webinars for the Americas and European timezones). EMEA webinar: 4th of June, 2-3 PM EEST (Eastern European Summer Time) – JOIN THE EMEA WEBINAR. USA webinar: 4th of June, 9-10 AM PDT – JOIN THE USA WEBINAR.
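A minimal usage sketch (not part of the original announcement) of the new system tables and synchronization functions; the GTID literal below is a placeholder, not a value from the release:

-- Inspect cluster membership through the new Galera 4 system tables:
SELECT * FROM mysql.wsrep_cluster;
SELECT * FROM mysql.wsrep_cluster_members;

-- Read-your-writes across nodes: capture the GTID of the last write on the
-- node that performed it...
SELECT WSREP_LAST_WRITTEN_GTID();

-- ...then, on the node you read from, wait until that write has been applied
-- before issuing the read (the GTID shown is a placeholder):
SELECT WSREP_SYNC_WAIT_UPTO_GTID('dc9077b0-9d3a-11ea-a2ca-3e34e44c33fc:1234');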
  2. Today, 23 May 2020, it is 25 years since the first release of MySQL, so I would like to take the opportunity to wish MySQL – and Sakila – a happy birthday.

My own MySQL journey started in 2006 when, at a job interview, I was told that if I got the job, I would need to learn MySQL before starting. Since the job involved PHP coding, I got the books Web Database Applications with PHP and MySQL as well as Managing and Using MySQL – the first two MySQL books I studied. Around a week later, I started in my first job involving MySQL at Noggin Pty Ltd. This was in the days of MySQL 5.0, when stored functions, procedures, and triggers were new, and statement-based replication was the only binary log format around. The job evolved to include database administration, and over the next four years and a bit, I got the chance to work with a large part of MySQL's many corners. One of the most interesting tasks was to develop the database backend for a messaging system that had to support active-active replication (to allow the customers to keep using the service even if one of the two data centers could not be reached). To avoid the data diverging, conflict resolution was implemented through triggers (this was possible because MySQL 5.0 exclusively used statement-based replication). Advice: I do not recommend going down the road of active-active replication unless you use Group Replication (and even then, you are probably better off staying with single-primary mode) or MySQL NDB Cluster.

This was also the period when I decided to pursue the MySQL 5.0 Developer and DBA certifications. Back then, each certification consisted of two exams, and there was an official study guide, the MySQL 5.0 Certification Study Guide. The next step in my MySQL journey took me to MySQL itself, where I worked as a MySQL technical support engineer. I have many fond memories from my years with Oracle, and it was an invaluable experience. You learn a lot by helping customers, as you encounter many cases you would never see working only on your own database. It was also great participating in the discussions with the developers and product management. Part of the support job included writing items for the MySQL 5.6, 5.7, and 8 certification exams, so in that way I went from being a candidate to becoming the “examiner”. (Note: writing the exam items was a team effort, including long calls to review each other's items.) The latest step in my journey is as a database reliability engineer at Okta, so yet another way of working with MySQL.

All in all, it has been an interesting journey, and I hope both my personal MySQL journey and that of MySQL itself will continue for many more years. Happy birthday.
  3. MySQL does not limit the number of slaves that you can connect to the master server in a replication topology. However, as the number of slaves increases, they take a toll on the master's resources, because the binary logs need to be served to different slaves working at different speeds. If the data churn on the master is high, serving the binary logs alone could saturate the master's network interface.

A classic solution to this problem is to deploy a binlog server – an intermediate proxy server that sits between the master and its slaves. The binlog server is set up as a slave to the master and, in turn, acts as a master to the original set of slaves. It receives binary log events from the master, does not apply them, and serves them to all the other slaves. This way, the load on the master is tremendously reduced, and at the same time the binlog server serves the binlogs more efficiently to the slaves since it does not have to do any other database server processing. Ripple is an open source binlog server developed by Pavel Ivanov. A blog post from Percona, titled MySQL Ripple: The First Impression of a MySQL Binlog Server, gives a very good introduction to deploying and using Ripple. I had an opportunity to explore Ripple in some more detail and wanted to share my observations through this post.

1. Support for GTID-based replication

Ripple supports only GTID mode, not file-and-position-based replication. If your master is running in non-GTID mode, you will get this error from Ripple: Failed to read packet: Got error reading packet from server: The replication sender thread cannot start in AUTO_POSITION mode: this server has GTID_MODE = OFF instead of ON. (A sketch of enabling GTID mode on the master appears at the end of this post.) You can specify the server_id and UUID for the Ripple server using the command line options -ripple_server_id and -ripple_server_uuid. Both are optional; if not specified, Ripple uses the default server_id=112211 and the UUID is auto-generated.

2. Connecting to the master using replication user and password

While connecting to the master, you can specify the replication user and password using the command line options -ripple_master_user and -ripple_master_password.

3. Connection endpoint for the Ripple server

You can use the command line options -ripple_server_ports and -ripple_server_address to specify the connection endpoints for the Ripple server. Be sure to specify the network-accessible hostname or IP address of your Ripple server as the -ripple_server_address. Otherwise, Ripple will bind to localhost by default and you will not be able to connect to it remotely.

4. Setting up slaves to the Ripple server

You can use the CHANGE MASTER TO command to connect your slaves to replicate from the Ripple server.
To ensure that Ripple can authenticate the password that you use to connect to it, you need to start Ripple with the option -ripple_server_password_hash. For example, if you start the Ripple server with the command:

rippled -ripple_datadir=./binlog_server -ripple_master_address=<master ip> -ripple_master_port=3306 -ripple_master_user=repl -ripple_master_password='password' -ripple_server_ports=15000 -ripple_server_address='172.31.23.201' -ripple_server_password_hash='EF8C75CB6E99A0732D2DE207DAEF65D555BDFB8E'

you can use the following CHANGE MASTER TO command to connect from the slave:

CHANGE MASTER TO master_host='172.31.23.201', master_port=15000, master_password='XpKWeZRNH5#satCI', master_user='rep'

Note that the password hash specified for the Ripple server corresponds to the text password used in the CHANGE MASTER TO command. Currently, Ripple does not authenticate based on usernames and accepts any non-empty username as long as the password matches.

5. Ripple server management

It is possible to monitor and manage the Ripple server using the MySQL protocol from any standard MySQL client. A limited set of commands is supported, which you can see directly in the source code on the mysql-ripple GitHub page. Some of the useful commands are:

SELECT @@global.gtid_executed; – to see the GTID set of the Ripple server based on the binary logs it has downloaded.
STOP SLAVE; – to disconnect the Ripple server from the master.
START SLAVE; – to connect the Ripple server to the master.

Known Issues & Suggestions for Improvement

1. I did not see an option to set up an SSL replication channel from the Ripple server to the master

As a result, the Ripple server will not be able to connect to a master that mandates encrypted connections. Attempting to connect will result in the error: 0322 09:01:36.555124 14942 mysql_master_session.cc:164] Failed to connected to host: <Hostname>, port: 3306, err: Failed to connect: Connections using insecure transport are prohibited while --require_secure_transport=ON.

2. I was not able to get the Ripple server working with the semi-sync option

I started the Ripple server with the option -ripple_semi_sync_slave_enabled=true. On connecting it, the master detected the Ripple server as a semi-sync enabled slave:
mysql> show status like 'rpl%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 1     |
| Rpl_semi_sync_master_status                | ON    |
| Rpl_semi_sync_slave_status                 | OFF   |
+--------------------------------------------+-------+

However, a transaction executed in semi-sync mode waited for rpl_semi_sync_master_timeout, which was set to 180000 ms (3 minutes):

mysql> create database d12;
Query OK, 1 row affected (3 min 0.01 sec)

I could see that semi-sync got turned off at the master:

mysql> show status like 'rpl%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 1     |
| Rpl_semi_sync_master_status                | OFF   |
| Rpl_semi_sync_slave_status                 | OFF   |
+--------------------------------------------+-------+

Corresponding snippet from the MySQL error log:

2020-03-21T10:05:56.000612Z 52 [Note] Start binlog_dump to master_thread_id(52) slave_server(112211), pos(, 4)
2020-03-21T10:05:56.000627Z 52 [Note] Start semi-sync binlog_dump to slave (server_id: 112211), pos(, 4)
2020-03-21T10:08:55.873990Z 2 [Warning] Timeout waiting for reply of binlog (file: mysql-bin.000010, pos: 350), semi-sync up to file , position 4.
2020-03-21T10:08:55.874044Z 2 [Note] Semi-sync replication switched OFF.

There is an issue reported along similar lines on the MySQL Ripple GitHub page.

3. Issue when using parallel replication for the slaves of the Ripple server

I saw that the SQL thread on the slave would often stop with the error: Last_SQL_Error: Cannot execute the current event group in the parallel mode. Encountered event Gtid, relay-log name /mysql_data/relaylogs/relay-log.000005, position 27023962 which prevents execution of this event group in parallel mode. Reason: The master event is logically timestamped incorrectly. Analyzing the relay log at the position above revealed that the 'sequence number' of the transaction at this point was reset to 1. I tracked down the cause to a binlog rotation happening on the original master. Typically, for direct slaves, there is a rotate event, due to which the relay logs also rotate when the master's binary log rotates; my assessment is that such conditions can be detected so that the sequence number reset is handled by the parallel threads. But when the sequence number changes without a rotation of the relay logs, the parallel threads fail. This observation is reported as the issue: slave parallel thread failure while syncing from binlog server #26. (A possible mitigation is sketched at the end of this post.)

4. The mysqlbinlog utility does not work on the binary logs produced by the Ripple server

Trying to run the mysqlbinlog utility on a Ripple binary log resulted in the error: ERROR: Error in Log_event::read_log_event(): 'Found invalid event in binary log', data_len: 43, event_type: -106. This issue is raised here: Not able to open the binary log files using mysqlbinlog utility. #25. It is acknowledged by the author as a known issue. I feel it would be useful to support this utility for debugging purposes.

That's the report for now from my quick testing. I plan to update this blog post as and when I come across more findings on Ripple. Overall, I found Ripple simple and straightforward to use, and it has the potential to become a standard for binlog servers in MySQL environments.
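As noted in item 1 above, Ripple requires GTID mode on the master. For reference, here is a minimal sketch – not from the original post, just the standard MySQL online procedure – for switching a master that still uses anonymous transactions over to GTID mode:

-- Run on the master (and repeat on its existing slaves); persist the final
-- values in my.cnf as well so they survive a restart.
SET GLOBAL enforce_gtid_consistency = WARN;  -- watch the error log for violations first
SET GLOBAL enforce_gtid_consistency = ON;
SET GLOBAL gtid_mode = OFF_PERMISSIVE;
SET GLOBAL gtid_mode = ON_PERMISSIVE;
-- Wait until no anonymous transactions remain anywhere in the topology:
SHOW STATUS LIKE 'Ongoing_anonymous_transaction_count';
SET GLOBAL gtid_mode = ON;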
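Regarding known issue 3 above, a possible mitigation – my suggestion, not something proposed in the original post – is to fall back to single-threaded apply on the slaves that replicate from the Ripple server until the upstream issue is fixed:

-- On each slave replicating from the Ripple server:
STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_workers = 0;  -- disable the parallel applier
START SLAVE SQL_THREAD;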
  4. Database backups are a fundamental requirement in almost every implementation, no matter the size of the company or the nature of the application. Taking a backup should be a simple task that can be automated to ensure it's done consistently and on schedule. Percona has an enterprise-grade backup tool, Percona XtraBackup, that can be used to accomplish these tasks, and it supports both automated and on-demand backups. Percona also has a Percona Kubernetes Operator for Percona XtraDB Cluster (PXC Operator), which has Percona XtraBackup built into it. Today we will explore taking backups and restoring them using the PXC Operator deployed on Google Cloud Platform's Google Kubernetes Engine.

Backup Types

There are two different storage types we can use to store backups via the PXC Operator. The first is a Persistent Volume Claim (PVC), which is essentially a 'ticket' that requests to use a storage class defined as a Persistent Volume (PV). A PV can be a variety of storage types such as local disk, NFS, or, most commonly, block storage (GCP Persistent Disk, AWS EBS, etc.). The second is S3 or S3-protocol-compatible object storage.

There are also two types of backups with regard to timing: on-demand and scheduled. The PXC Operator's deploy/cr.yaml can be edited to schedule automated backups, with the schedule expressed in Unix cron string format. We can also take an on-demand backup by running a single command, as demonstrated below.

Persistent Volume Backups

When a backup job is launched using a PVC, the PXC Operator launches a Pod that connects to the cluster as a 4th member and leverages PXC's native auto-join capability by starting a State Snapshot Transfer (SST). An SST is a full data copy from one node (the donor) to the joining node (the joiner). The SST is received as a single xbstream file and stored in the volume along with an MD5 checksum of the backup. Xbstream is a custom streaming format that supports simultaneous compression and streaming.

Object Storage Backups (S3)

Like the PVC method, when an S3-based backup job is launched, the PXC Operator launches a new Pod that connects to the cluster as a 4th member and starts an SST. The SST is streamed from a donor node as an xbstream to the configured S3 endpoint using xbcloud, which uploads and downloads full or partial xbstream archives to and from cloud storage.

S3 Backup Configuration

deploy/backup-s3.yaml: The first step is to add our access key and secret access key. Each cloud service provider has a different method of distributing these keys.

deploy/cr.yaml: Next, we edit the bucket information in deploy/cr.yaml. If we wanted to edit the automated backup schedule, we would simply go down a few more lines in deploy/cr.yaml and edit the schedule.

deploy/backup/backup.yaml: Finally, we need to ensure that the storageName in deploy/backup/backup.yaml matches the name in our deploy/cr.yaml. We are now ready to take backups!

On-Demand S3 Backup and Restore

We will explore taking an on-demand S3 backup. If you want to use a PVC instead, this can also be configured in deploy/cr.yaml. Keep in mind that the PVC for filesystem-type backup storage should request enough space to fit all of the backup data.
To take an on-demand backup, we run the command below; we can also display the backup Pod's name:

kubectl apply -f deploy/backup.yaml
kubectl get pods | grep backup1

To list all of the backups, run:

kubectl get pxc-backups

Now that we have the name of the backup, we edit the backupName in deploy/backup/restore.yaml. To restore the backup, we run the following; we can also get the name of the restoring Pod:

kubectl apply -f deploy/restore.yaml
kubectl get pods | grep restore

As we saw, backups can be accomplished very quickly and easily thanks to the efforts of Percona's engineering team. If you'd like to learn more about this process, take a look at the documentation.
  5. MinervaDB Webinar – Building MySQL Database Infrastructure for Performance and Reliability. Recently I did a webinar on "Building MySQL Database Infrastructure for Performance and Reliability". It was a big success, so I thought I would share the slides of the webinar in this blog. I get a lot of emails daily from database architects, DBAs, database engineers, technical managers and developers worldwide asking for best practices and checklists for building MySQL for performance, scalability, high availability and database SRE. The objective of this webinar is to share with them a cockpit view of MySQL infrastructure operations from the MinervaDB perspective. Database systems are growing faster than ever; modern data-economy businesses like Facebook, Uber, Airbnb, LinkedIn etc. are powered by database systems. This makes database infrastructure operationally complex, and we can't technically scale such systems with eyeballs alone. Building MySQL operations for web-scale means delivering highly responsive, fault-tolerant and self-healing database infrastructure for the business. In this webinar we discuss the following topics:

- Configuring MySQL for performance and reliability
- Troubleshooting MySQL with Linux tools
- Troubleshooting MySQL with the slow query log
- The most common tools used in MinervaDB to troubleshoot MySQL performance
- Monitoring MySQL performance
- Building MySQL infrastructure operations for performance, scalability and reliability
- MySQL Replication

You can download a PDF of the webinar here. The post MinervaDB Webinar – Building MySQL Database Infrastructure for Performance and Reliability appeared first on The WebScale Database Infrastructure Operations Experts.