Planet MySQL

1. In the previous blog we showed you how to set up an environment in Amazon AWS EC2 that consists of a Percona Server 8.0 replication cluster (in master - slave topology). We deployed ProxySQL and we configured our application (Sysbench). We also used ClusterControl to make the deployment easier, faster and more stable. This is the environment we ended up with, and this is how it looks in ClusterControl. In this blog post we are going to review the requirements and show you how, in this setup, you can seamlessly perform master switches.

Seamless Master Switch with ProxySQL 2.0

We are going to benefit from ProxySQL's ability to queue connections if there are no nodes available in a hostgroup. ProxySQL uses hostgroups to differentiate between backend nodes with different roles. You can see the configuration on the screenshot below. In our case we have two hostgroups: hostgroup 10 contains writers (the master) and hostgroup 20 contains slaves (and it may also contain the master, depending on the configuration).

As you may know, ProxySQL uses a SQL interface for configuration. ClusterControl exposes most of the configuration options in the UI, but some settings cannot be set up via ClusterControl (or they are configured automatically by ClusterControl). One such setting is how ProxySQL should detect and configure backend nodes in a replication environment:

mysql> SELECT * FROM mysql_replication_hostgroups;
+------------------+------------------+------------+-------------+
| writer_hostgroup | reader_hostgroup | check_type | comment     |
+------------------+------------------+------------+-------------+
| 10               | 20               | read_only  | host groups |
+------------------+------------------+------------+-------------+
1 row in set (0.00 sec)

The configuration stored in the mysql_replication_hostgroups table defines if and how ProxySQL will automatically assign the master and slaves to the correct hostgroups. In short, the configuration above tells ProxySQL to assign writers to HG10 and readers to HG20.
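ClusterControl creates this entry automatically. Done by hand on the ProxySQL admin interface, the equivalent configuration would look roughly like the following sketch (not the exact statements ClusterControl runs):

```sql
-- Sketch: define which hostgroups hold writers and readers,
-- using read_only as the check. ClusterControl does the
-- equivalent of this automatically.
REPLACE INTO mysql_replication_hostgroups
        (writer_hostgroup, reader_hostgroup, check_type, comment)
VALUES (10, 20, 'read_only', 'host groups');
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```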
Whether a node is a writer or a reader is determined by the state of the read_only variable. If read_only is enabled, the node is marked as a reader and assigned to HG20; if not, it is marked as a writer and assigned to HG10. On top of that we have the mysql-monitor_writer_is_also_reader variable, which determines whether the writer should also show up in the readers' hostgroup. In our case it is set to 'true', thus our writer (master) is also a part of HG20.

ProxySQL does not manage backend nodes, but it does access them and check their state, including the state of the read_only variable. This is done by the monitoring user, which was configured by ClusterControl according to your input at ProxySQL deployment time. If the state of the variable changes, ProxySQL will reassign the node to the proper hostgroup, based on the value of the read_only variable and on the setting of the mysql-monitor_writer_is_also_reader variable in ProxySQL.

Here enters ClusterControl. ClusterControl monitors the state of the cluster. Should the master become unavailable, failover will occur. It is more complex than that, and we explained this process in detail in one of our earlier blogs. What is important for us is that, as long as it is safe, ClusterControl will execute the failover and in the process reconfigure the read_only variables on the old and new master. ProxySQL will see the change and modify its hostgroups accordingly. This will also happen in case of a regular slave promotion, which can easily be executed from ClusterControl by starting this job. The final outcome will be that the new master is promoted and assigned to HG10 in ProxySQL, while the old master is reconfigured as a slave (and becomes a part of HG20 in ProxySQL).

The process of the master change may take a while depending on the environment, application and traffic (it is even possible to failover in 11 seconds, as my colleague has tested). During this time the database (master) will not be reachable through ProxySQL. This leads to some problems.
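Both knobs that drive this reassignment are visible from the ProxySQL admin interface. A quick way to inspect (and, if needed, change) the monitor variable would be something along these lines, a sketch using the global_variables table:

```sql
-- Check the current value of the variable.
SELECT variable_name, variable_value
  FROM global_variables
 WHERE variable_name = 'mysql-monitor_writer_is_also_reader';

-- Change it, then make the change effective and persistent.
UPDATE global_variables
   SET variable_value = 'true'
 WHERE variable_name = 'mysql-monitor_writer_is_also_reader';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```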
For starters, the application will receive errors from the database and the user experience will suffer - no one likes to see errors. Luckily, under some circumstances, we can reduce the impact. The requirement is that the application does not use (at all, or at that particular time) multi-statement transactions. This is quite expected - if you have a multi-statement transaction (so, BEGIN; ... ; COMMIT;) you cannot move it from server to server, because it would no longer be a transaction. In such cases the only safe way is to roll back the transaction and start once more on the new master. Prepared statements are also a no-no: they are prepared on a particular host (the master) and do not exist on the slaves, so once a slave is promoted to the new master, it cannot execute prepared statements that were prepared on the old master. On the other hand, if you run only auto-committed, single-statement transactions, you can benefit from the feature we are going to describe below.

One of the great features ProxySQL has is the ability to queue incoming transactions if they are directed to a hostgroup that does not have any nodes available. This is defined by the following two variables:

mysql-connect_timeout_server
mysql-connect_timeout_server_max

ClusterControl increases them to 20 seconds, allowing even quite long failovers to complete without any error being sent to the application.

Testing the Seamless Master Switch

We are going to run the test in our environment. As the application we are going to use SysBench, started as:

while true ; do sysbench /root/sysbench/src/lua/oltp_read_write.lua --threads=4 --events=0 --time=3600 --reconnect=1 --mysql-socket=/tmp/proxysql.sock --mysql-user=sbtest --mysql-password=sbtest --tables=32 --report-interval=1 --skip-trx=on --table-size=100000 --db-ps-mode=disable --rate=5 run ; done

Basically, we will run sysbench in a loop (in case an error shows up). We will run it in 4 threads.
Threads will reconnect after every transaction. There will be no multi-statement transactions and we will not use prepared statements. Then we will trigger the master switch by promoting a slave in the ClusterControl UI. This is how the master switch looks from the application standpoint:

[ 560s ] thds: 4 tps: 5.00 qps: 90.00 (r/w/o: 70.00/20.00/0.00) lat (ms,95%): 18.95 err/s: 0.00 reconn/s: 5.00
[ 560s ] queue length: 0, concurrency: 0
[ 561s ] thds: 4 tps: 5.00 qps: 90.00 (r/w/o: 70.00/20.00/0.00) lat (ms,95%): 17.01 err/s: 0.00 reconn/s: 5.00
[ 561s ] queue length: 0, concurrency: 0
[ 562s ] thds: 4 tps: 7.00 qps: 126.00 (r/w/o: 98.00/28.00/0.00) lat (ms,95%): 28.67 err/s: 0.00 reconn/s: 7.00
[ 562s ] queue length: 0, concurrency: 0
[ 563s ] thds: 4 tps: 3.00 qps: 68.00 (r/w/o: 56.00/12.00/0.00) lat (ms,95%): 17.95 err/s: 0.00 reconn/s: 3.00
[ 563s ] queue length: 0, concurrency: 1

We can see that the queries are being executed with low latency.

[ 564s ] thds: 4 tps: 0.00 qps: 42.00 (r/w/o: 42.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 564s ] queue length: 1, concurrency: 4

Then the queries paused - you can see this by the latency and the transactions per second both dropping to zero.

[ 565s ] thds: 4 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 565s ] queue length: 5, concurrency: 4
[ 566s ] thds: 4 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 566s ] queue length: 15, concurrency: 4

Two seconds in, the queue is growing and there is still no response coming from the database.

[ 567s ] thds: 4 tps: 20.00 qps: 367.93 (r/w/o: 279.95/87.98/0.00) lat (ms,95%): 3639.94 err/s: 0.00 reconn/s: 20.00
[ 567s ] queue length: 1, concurrency: 4

After three seconds the application was finally able to reach the database again. You can see that the traffic is now non-zero and the queue length has been reduced.
The latency of around 3.6 seconds shows how long the queries were paused.

[ 568s ] thds: 4 tps: 10.00 qps: 116.04 (r/w/o: 84.03/32.01/0.00) lat (ms,95%): 539.71 err/s: 0.00 reconn/s: 10.00
[ 568s ] queue length: 0, concurrency: 0
[ 569s ] thds: 4 tps: 4.00 qps: 72.00 (r/w/o: 56.00/16.00/0.00) lat (ms,95%): 16.12 err/s: 0.00 reconn/s: 4.00
[ 569s ] queue length: 0, concurrency: 0
[ 570s ] thds: 4 tps: 8.00 qps: 144.01 (r/w/o: 112.00/32.00/0.00) lat (ms,95%): 24.83 err/s: 0.00 reconn/s: 8.00
[ 570s ] queue length: 0, concurrency: 0
[ 571s ] thds: 4 tps: 5.00 qps: 98.99 (r/w/o: 78.99/20.00/0.00) lat (ms,95%): 21.50 err/s: 0.00 reconn/s: 5.00
[ 571s ] queue length: 0, concurrency: 1
[ 572s ] thds: 4 tps: 5.00 qps: 80.98 (r/w/o: 60.99/20.00/0.00) lat (ms,95%): 17.95 err/s: 0.00 reconn/s: 5.00
[ 572s ] queue length: 0, concurrency: 0
[ 573s ] thds: 4 tps: 2.00 qps: 36.01 (r/w/o: 28.01/8.00/0.00) lat (ms,95%): 14.46 err/s: 0.00 reconn/s: 2.00
[ 573s ] queue length: 0, concurrency: 0

Everything is stable again. The total impact of the master switch was a 3.6 second increase in latency and no traffic hitting the database for 3.6 seconds. Other than that, the master switch was transparent to the application. Of course, whether it takes 3.6 seconds or more depends on the environment, traffic and so on, but as long as the master switch completes in under 20 seconds, no error will be returned to the application.

Conclusion

As you can see, with ClusterControl and ProxySQL 2.0 you are just a couple of clicks away from achieving a seamless failover and master switch for your MySQL replication clusters.

Tags: MySQL percona load balancing cloud proxysql failover
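As a footnote to the setup above: the 20-second queueing window is controlled by the two ProxySQL global variables mentioned earlier, whose values are expressed in milliseconds. ClusterControl sets them for you, but set by hand on the ProxySQL admin interface the change would look roughly like this sketch:

```sql
-- Sketch: allow queries to queue for up to 20 seconds (20000 ms)
-- while the target hostgroup has no available node.
UPDATE global_variables
   SET variable_value = '20000'
 WHERE variable_name IN ('mysql-connect_timeout_server',
                         'mysql-connect_timeout_server_max');
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```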
2. In this post, we will explore one approach to MySQL high availability with ProxySQL, Consul and Orchestrator. This is a follow-up to my previous post about a similar architecture, but using HAProxy instead. I’ve re-used some of the content from that post so that you don’t have to go read through that one, and have everything you need in here.

Let’s briefly go over each piece of the puzzle:

– ProxySQL is in charge of connecting the application to the appropriate backend (reader or writer). It can be installed on each application server directly, or we can have an intermediate connection layer with one or more ProxySQL servers. The former probably makes sense if you have a small number of application servers; as the number grows, the latter option becomes more attractive. Another scenario for the latter would be a “shared” ProxySQL layer that connects applications to different database clusters.
– Orchestrator’s role is to monitor the servers and perform automatic (or manual) topology changes as needed.
– Consul is used to decouple Orchestrator from ProxySQL, and serves as the source of truth for the topology. Orchestrator is in charge of updating the identity of the master in Consul when there are topology changes. Why not have Orchestrator update ProxySQL directly? For one, Orchestrator hooks are fired only once - what happens if there is any kind of error or network partition? Also, Orchestrator would need to know the ProxySQL admin credentials, which might introduce a security issue.
– Consul-template runs locally on the ProxySQL server(s) and is subscribed to the Consul K/V store; when it detects a change in any value, it triggers an action. In this case, the action is to propagate the information to ProxySQL by rendering and executing a template (more on this later).

Proof of concept

With the goal of minimizing the number of servers required for the POC, I installed three servers which run MySQL and Consul servers: mysql1, mysql2 and mysql3.
On mysql3, I also installed ProxySQL, Orchestrator and Consul-template. In a real production environment, you’d have the roles separated more like this:

– ProxySQL + consul-template + Consul (client mode)
– MySQL
– Orchestrator + Consul (client mode)
– Consul (server mode)

If you have Consul infrastructure already in use in your organization, it is possible to leverage it; a few new K/V pairs are all that’s required. If you are not using Consul already, and don’t plan to use it for anything else, it is often installed on the same servers where Orchestrator will run. This helps reduce the number of servers required for this architecture.

Installing Consul

Install Consul on mysql1, mysql2 and mysql3:

$ sudo yum -y install unzip
$ sudo useradd consul
$ sudo mkdir -p /opt/consul
$ sudo touch /var/log/consul.log
$ cd /opt/consul
$ sudo wget
$ sudo unzip
$ sudo ln -s /opt/consul/consul /usr/local/bin/consul
$ sudo chown consul:consul -R /opt/consul* /var/log/consul.log

Bootstrap the Consul cluster from one node.
I’ve picked mysql3 here:

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql3",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [
    "",
    "",
    ""
  ],
  "client_addr": "",
  "advertise_addr": ""
}

$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'

Start Consul on mysql1 and have it join the cluster:

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql1",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [
    "",
    "",
    ""
  ],
  "client_addr": "",
  "advertise_addr": ""
}

$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join

Start Consul on mysql2 and have it join the cluster:

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql2",
  "server": true,
  "ui": true,
  "bootstrap_expect": 3,
  "retry_join": [
    "",
    "",
    ""
  ],
  "client_addr": "",
  "advertise_addr": ""
}

$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join

At this point we have a working three-node Consul cluster. We can test writing K/V pairs to it and retrieving them back:

$ consul kv put foo bar
Success! Data written to: foo
$ consul kv get foo
bar

Configuring Orchestrator to write to Consul

Orchestrator has built-in support for Consul. If Consul lives on separate servers, you should still install the Consul agent/client on the Orchestrator machine. This allows for local communication between Orchestrator and the Consul cluster (via the API) to prevent issues during network partitioning scenarios.
In our lab example, this is not required as the Consul servers are already present on the local machine.

Configure Orchestrator to write to Consul on each master change. Add the following lines to the Orchestrator configuration:

$ vi /etc/orchestrator.conf.json
  "KVClusterMasterPrefix": "mysql/master",
  "ConsulAddress": "",

Restart Orchestrator:

$ service orchestrator restart

Populate the current master value manually. We need to tell Orchestrator to populate the values in Consul while bootstrapping the first time. This is accomplished by calling orchestrator-client; Orchestrator will update the values automatically whenever there is a master change.

$ orchestrator-client -c submit-masters-to-kv-stores

Check the stored values from the command line:

$ consul kv get mysql/master/testcluster
mysql1:3306

Slave servers can also be stored in Consul; however, they will not be maintained automatically by Orchestrator. We’d need to create an external script that makes use of the Orchestrator API and put it in cron, but this is out of scope for this post. The template I am using below assumes they are kept under the mysql/slave/testcluster prefix.

Using Consul template to manage ProxySQL

We have ProxySQL running on mysql3. The idea is to have consul-template dynamically update the ProxySQL configuration when there are changes to the topology. Install consul-template on mysql3:

$ mkdir /opt/consul-template
$ cd /opt/consul-template
$ sudo wget
$ sudo unzip
$ sudo ln -s /opt/consul-template/consul-template /usr/local/bin/consul-template

Create a template for the ProxySQL config file. Note this template also deals with slave servers.
$ vi /opt/consul-template/templates/proxysql.ctmpl
{{ if keyExists "mysql/master/testcluster/hostname" }}
DELETE FROM mysql_servers where hostgroup_id=0;
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 0, "{{ key "mysql/master/testcluster/hostname" }}" );
{{ end }}
{{ range tree "mysql/slave/testcluster" }}
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 1, "{{ .Key }}{{ .Value }}" );
{{ end }}
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

Create the consul-template config file. Note that we need the Consul agent, which provides a Consul API endpoint at port 8500, installed locally in order for consul-template to subscribe to the K/V store.

$ vi /opt/consul-template/config/consul-template.cfg
consul {
  auth {
    enabled = false
  }
  address = ""
  retry {
    enabled = true
    attempts = 12
    backoff = "250ms"
    max_backoff = "1m"
  }
  ssl {
    enabled = false
  }
}

reload_signal = "SIGHUP"
kill_signal = "SIGINT"
max_stale = "10m"
log_level = "info"

wait {
  min = "5s"
  max = "10s"
}

template {
  source = "/opt/consul-template/templates/proxysql.ctmpl"
  destination = "/opt/consul-template/templates/proxysql.sql"
  # log in to proxysql and execute the template file containing sql statements to set the new topology
  command = "/bin/bash -c 'mysql --defaults-file=/etc/ < /opt/consul-template/templates/proxysql.sql'"
  command_timeout = "60s"
  perms = 0644
  backup = true
  wait = "2s:6s"
}

Start consul-template:

$ nohup /usr/local/bin/consul-template -config=/opt/consul-template/config/consul-template.cfg > /var/log/consul-template/consul-template.log 2>&1 &

The next step is doing a master change (e.g. via the Orchestrator GUI) and seeing the effects.
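To illustrate what the template produces, assume the master key holds mysql2:3306 and the slave subtree lists mysql1:3306 and mysql3:3306 (these hostnames are just the lab machines; the exact rendering depends on how the slave K/V pairs were written). The rendered proxysql.sql would then look roughly like:

```sql
-- Hypothetical rendering of proxysql.ctmpl after a failover to mysql2.
DELETE FROM mysql_servers where hostgroup_id=0;
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 0, "mysql2:3306" );
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 1, "mysql1:3306" );
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 1, "mysql3:3306" );
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```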
Something like this should be present in the logs:

[root@mysql3 config]$ tail -f /var/log/consul-template/consul-template.log
[INFO] (runner) rendered "/opt/consul-template/templates/proxysql.ctmpl" => "/opt/consul-template/templates/proxysql.sql"
[INFO] (runner) executing command "/bin/bash -c 'mysql --defaults-file=/etc/ < /opt/consul-template/templates/proxysql.sql'" from "/opt/consul-template/templates/proxysql.ctmpl" => "/opt/consul-template/templates/proxysql.sql"

What happened? Orchestrator updated the K/V in Consul; consul-template detected the change, generated a .sql file with the commands to update ProxySQL, and then executed them.

Conclusion

ProxySQL, Orchestrator and Consul are a great solution to put together for highly available MySQL clusters. Some assembly is required, but the results will definitely pay off in the long term. If you want to read more about the benefits of a setup like this, make sure to check out my post about graceful switchover without returning any errors to the application, and also Matthias’ post about autoscaling ProxySQL in the cloud.
3. This is a hypothesis; perhaps it is true. I am biased, given that I spent 15 years acknowledging tech debt on a vendor-based project (MySQL) and not much time on community-based projects.

My theory on tech debt and OSS is that there is more acknowledgement of tech debt in vendor-based projects than in community-based ones. This is an advantage for vendor-based projects, assuming you are more likely to fix acknowledged problems. Of course there are other advantages for community-based projects.

I think there is more acknowledgement of tech debt in vendor-based projects because the community is criticizing someone else's effort rather than their own. This is human nature, even if the effect and behavior aren't always kind. I spent many years marketing bugs that needed to be fixed -- along with many years leading teams working to fix those bugs.
4. MySQL Cluster 8.0.18 RC2 was released a few weeks back, packed with a set of new interesting things.

One major change we have made is to increase the maximum number of data nodes from 48 to 144. There is also ongoing work to fully support 3 and 4 replicas in MySQL Cluster 8.0. NDB has actually always been designed to handle up to 4 replicas, but the test focus has previously been entirely on 2 replicas. Now we have expanded our test focus to also verify that 3 and 4 replicas work well, which means that with NDB 8.0 we will be able to confidently support 3 and 4 replicas. It also means that with NDB 8.0 it will be possible to have 48 node groups with 3 replicas in each node group in one cluster.

The higher number of nodes in a cluster gives the possibility to store even more data in the cluster. With 48 node groups it is possible to store 48 TBytes of in-memory data in one NDB Cluster, and on top of that one can also have about 10x more data in disk data columns. We have actually successfully managed to load 5 TBytes of data into a single node using the DBT2 benchmark, so with 8.0 we will be able to store a few hundred TBytes of replicated in-memory data and petabytes of data in disk data columns for key-value stores with high demands on storage space.

Given that we now support so much bigger data sets, it is natural that we focus on the ability to load data at high rates, both into in-memory data and into disk data columns. For in-memory data this was solved already in 7.6, and there is even more work in this area ongoing in 8.0.

We also raised one limitation in NDB: 7.6 limits row sizes to 14000 bytes, while NDB 8.0 can handle 30000-byte row sizes.

Another obvious fact is that with so much data in the cluster it is important to be able to analyze the data as well. Already in MySQL Cluster 7.2 we implemented a parallel join operator inside NDB Cluster, available from the MySQL Server for NDB tables.
We made several important improvements to this in 7.6, and even more has happened in NDB 8.0. This graph shows the improvements made to TPC-H queries in 7.6 and in 8.0 up until 8.0.18. So chances are good that you will find that some of your queries execute substantially faster in NDB 8.0 compared to earlier versions.

NDB is by design a parallel database machine, so what we are doing here is ensuring that this parallel database machine can now also be applied to more real-time analytics. We currently have parallel filtering, parallel projection and parallel join in the data nodes. With NDB 8.0 we also get all the new features of MySQL 8.0, which provides a set of new features in the query processing area.

The main feature added in 8.0.18 for query processing is the ability to push down join execution of queries where we have conditions of the type t1.a = t2.b. Previously this was only possible for the columns handled by the chosen index in the join; now it can be handled for any condition in the query.

8.0.18 also introduces a first step of improved memory management, where the goal is to make more efficient use of the memory available to NDB data nodes and also to make configuration a lot simpler.

In NDB 8.0 we have also introduced a parallel backup feature. This means that taking a backup will be much faster than before, and the load will be shared across all LDM threads.
5. I am wary of papers with performance results for too many products. Too many means including results from systems for which you lack expertise. Wary means I have less faith in the comparison, even when the ideas in the paper are awesome. I have expertise in MySQL, MongoDB, RocksDB, WiredTiger and InnoDB, but even for them I have made and acknowledged ridiculous mistakes.

Database configuration is too hard. There are too many options, most of them aren't significant, and the approach is bottom-up. I am an expert on this -- in addition to years of tuning I have added more than a few options to RocksDB and MySQL.

This post was motivated by PostgreSQL. I want to run the insert benchmark for it and need a good configuration. I have nothing against PG, with the exception of a few too many "why not Postgres" comments. The community is strong, the docs are great and the product is still improving. But I think PostgreSQL configuration has room to improve -- just like RocksDB (here, here) and MySQL/InnoDB.

Too many options

A non-expert user lacks both the ability to choose good values for options and the ability to understand which options might be useful to set. My solution to too many options, most of which aren't significant, is to use good defaults and split the option namespace into two parts -- regular and expert. Regular options are set by most users because they matter for performance and don't have good default values. The amount of memory the DBMS can use is one such option - the default will be small.

Everything else is an expert option. These include options for which the default is great and options that rarely impact performance. There is a reason for expert options -- some workloads benefit from their existence, and being able to set such an option at runtime might avoid downtime. Options are also added early in the lifecycle of new features to allow developers to evaluate the new feature and choose good default values.
But such options don't need to be exposed to all users. The benefit of doing this is avoiding presenting a new user with tens or hundreds of options to consider. That is a lousy experience. And while "X is too hard" isn't always a valid complaint -- languages (human and database query) are complex because they let us express complex ideas -- I don't think we gain much from the current approach.

RocksDB has added functions that simplify configuration and has even split the option namespace into two parts -- regular and advanced. This is a step in the right direction, but I hope for more. I confirmed that most RocksDB options either have good defaults or aren't significant for my workloads, and then published advice on tuning RocksDB. The performance configurations I use for MongoDB/WiredTiger and MySQL/InnoDB are similar to my experience with RocksDB: I don't have to set too many options to get great performance. Alas, it took a long time to figure that out.

Top-down configuration

Top-down configuration is another approach that can help. The idea is simple: tell the DBMS about the hardware it can use and optionally state a few constraints. The basic hardware configuration is empty, which implies the DBMS gets everything it can find -- all memory, all CPU cores, all IO capacity. When a host does more than run a DBMS, it should be easy to enforce a limit with one option for memory consumption, one for CPU, etc. The user shouldn't have to set ten options for ten different memory consumers. It is even worse when these limits are per instance -- limiting how much memory each sort buffer gets is a lousy way to manage total memory usage. IO capacity is interesting. AFAIK there was a tool included in RethinkDB that characterized IO capacity, PostgreSQL has a tool for fsync performance, and we can't forget fio. But it is easy to be misled about SSD performance.

The constraints cover things that are subjective. What is the max recovery time objective?
How do you rank read, write, space and memory efficiency? A great example of this is SQL Memory Management in Oracle 9i -- tell the DBMS how much memory it can use and let it figure out the best way to use it.

What about ML

I hope that ML makes it easier to discover the options that aren't significant and can be moved into the expert options namespace. But I prefer a solution with fewer tuning knobs, or at least fewer visible tuning knobs. I hope to avoid too many knobs (the status quo) combined with ML. Let's make smarter database algorithms. If nothing else this should be a source of research funding, interesting PhDs and many papers worth reading.

Update

While I appreciate that someone made the MySQL memory calculator available, I wish it weren't needed. Setting memory limits based on peak concurrency means you will under-allocate memory in the normal case, or you can instead over-allocate at peak concurrency and get OOM.