
Wednesday, November 22, 2023

mysql top slowest queries by performance schema

SELECT
    schema_name,
    digest_text AS query,
    COUNT_STAR AS executions,
    SUM_TIMER_WAIT / 1000000000000 AS total_seconds,
    AVG_TIMER_WAIT / 1000000000000 AS average_seconds,
    MAX_TIMER_WAIT / 1000000000000 AS max_seconds
FROM
    performance_schema.events_statements_summary_by_digest
WHERE
    schema_name IS NOT NULL
ORDER BY
    total_seconds DESC
LIMIT 10;
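
To start a fresh measurement window before re-running the query above, the digest summary can be reset (a small follow-up sketch; truncating performance_schema summary tables just zeroes the counters):

# resets COUNT_STAR and the *_TIMER_WAIT counters for all digests
mysql -e "TRUNCATE TABLE performance_schema.events_statements_summary_by_digest;"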

Sunday, November 19, 2023

mysql memory usage analysis

SELECT SUBSTRING_INDEX(event_name, '/', 2) AS code_area,
       FORMAT_BYTES(SUM(current_alloc)) AS current_alloc
FROM sys.x$memory_global_by_current_bytes
GROUP BY SUBSTRING_INDEX(event_name, '/', 2)
ORDER BY SUM(current_alloc) DESC;
code_area                    current_alloc
memory/innodb                4.65 GiB
memory/performance_schema    232.48 MiB
memory/sql                   80.08 MiB
memory/mysys                 33.07 MiB
memory/temptable             2.00 MiB
memory/myisam                1.41 MiB
memory/mysqld_openssl        152.04 KiB
memory/csv                   49.84 KiB
memory/blackhole             88 bytes
memory/vio                   8 bytes
SELECT CONCAT(FORMAT(A.num * 100.0 / B.num, 2), '%') AS `BufferPool %` FROM
    (SELECT variable_value num FROM performance_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_pages_data') A,
    (SELECT variable_value num FROM performance_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_pages_total') B;

BufferPool %
91.72%

select format_bytes(sum(current_alloc)) from sys.x$memory_global_by_current_bytes;

Thursday, October 26, 2023

mysql: block remote connections except localhost

sudo iptables -A INPUT -i lo -p tcp --dport 3306 -j ACCEPT

sudo iptables -A INPUT -p tcp --dport 3306 -j DROP 
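
A quick sanity-check sketch: list the INPUT chain to confirm the loopback ACCEPT rule sits above the DROP rule, since iptables stops at the first matching rule.

sudo iptables -L INPUT -n --line-numbers | grep 3306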

Monday, March 28, 2022

xtrabackup - mysql file level backup

 https://www.digitalocean.com/community/tutorials/how-to-create-hot-backups-of-mysql-databases-with-percona-xtrabackup-on-centos-7 

https://www.percona.com/blog/2020/04/10/percona-xtrabackup-backup-and-restore-of-a-single-table-or-database/


Wednesday, February 16, 2022

mysql connections

show variables like "%max_connections%";
show global status like "%Max_used%";
show status like "%thread%";
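
A small sketch of the same check in one shot (assumes MySQL 5.7+ where performance_schema.global_status exists), comparing the connection peak against the configured limit:

# how close Max_used_connections has come to max_connections
mysql -e "SELECT @@max_connections AS max_conn, VARIABLE_VALUE AS max_used
          FROM performance_schema.global_status
          WHERE VARIABLE_NAME = 'Max_used_connections';"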

Tuesday, February 15, 2022

mysql memory tuning

 https://www.percona.com/blog/2016/05/03/best-practices-for-configuring-optimal-mysql-memory-usage/

mysql memory calculator

#!/bin/sh
# you might want to add some user authentication here

mysql -e "show variables; show status" | awk '  
{
VAR[$1]=$2  
}
END {  
MAX_CONN = VAR["max_connections"]  
MAX_USED_CONN = VAR["Max_used_connections"]  
BASE_MEM=VAR["key_buffer_size"] + VAR["query_cache_size"] + VAR["innodb_buffer_pool_size"] + VAR["innodb_additional_mem_pool_size"] + VAR["innodb_log_buffer_size"]  
MEM_PER_CONN=VAR["read_buffer_size"] + VAR["read_rnd_buffer_size"] + VAR["sort_buffer_size"] + VAR["join_buffer_size"] + VAR["binlog_cache_size"] + VAR["thread_stack"] + VAR["tmp_table_size"]  
MEM_TOTAL_MIN=BASE_MEM + MEM_PER_CONN*MAX_USED_CONN  
MEM_TOTAL_MAX=BASE_MEM + MEM_PER_CONN*MAX_CONN
printf "+------------------------------------------+--------------------+\n"  
printf "| %40s | %15.3f MB |\n", "key_buffer_size", VAR["key_buffer_size"]/1048576  
printf "| %40s | %15.3f MB |\n", "query_cache_size", VAR["query_cache_size"]/1048576  
printf "| %40s | %15.3f MB |\n", "innodb_buffer_pool_size", VAR["innodb_buffer_pool_size"]/1048576  
printf "| %40s | %15.3f MB |\n", "innodb_additional_mem_pool_size", VAR["innodb_additional_mem_pool_size"]/1048576  
printf "| %40s | %15.3f MB |\n", "innodb_log_buffer_size", VAR["innodb_log_buffer_size"]/1048576  
printf "+------------------------------------------+--------------------+\n"  
printf "| %40s | %15.3f MB |\n", "BASE MEMORY", BASE_MEM/1048576  
printf "+------------------------------------------+--------------------+\n"  
printf "| %40s | %15.3f MB |\n", "sort_buffer_size", VAR["sort_buffer_size"]/1048576  
printf "| %40s | %15.3f MB |\n", "read_buffer_size", VAR["read_buffer_size"]/1048576  
printf "| %40s | %15.3f MB |\n", "read_rnd_buffer_size", VAR["read_rnd_buffer_size"]/1048576  
printf "| %40s | %15.3f MB |\n", "join_buffer_size", VAR["join_buffer_size"]/1048576  
printf "| %40s | %15.3f MB |\n", "thread_stack", VAR["thread_stack"]/1048576  
printf "| %40s | %15.3f MB |\n", "binlog_cache_size", VAR["binlog_cache_size"]/1048576  
printf "| %40s | %15.3f MB |\n", "tmp_table_size", VAR["tmp_table_size"]/1048576  
printf "+------------------------------------------+--------------------+\n"  
printf "| %40s | %15.3f MB |\n", "MEMORY PER CONNECTION", MEM_PER_CONN/1048576  
printf "+------------------------------------------+--------------------+\n"  
printf "| %40s | %18d |\n", "Max_used_connections", MAX_USED_CONN  
printf "| %40s | %18d |\n", "max_connections", MAX_CONN  
printf "+------------------------------------------+--------------------+\n"  
printf "| %40s | %15.3f MB |\n", "TOTAL (MIN)", MEM_TOTAL_MIN/1048576  
printf "| %40s | %15.3f MB |\n", "TOTAL (MAX)", MEM_TOTAL_MAX/1048576  
printf "+------------------------------------------+--------------------+\n"  
}'

Wednesday, December 22, 2021

mysql dump import accelerator

#!/bin/bash

DATA="/db.sql.gz"
SIZE="$(du $DATA | awk '{ print $1 }')K"
DBNAME="test"
DBPASS="Apasdasdasdfnti"
DBUSER="root"

{
    #echo "Dropping/creating database ${DBNAME}" >&2
    #echo "DROP DATABASE IF EXISTS ${DBNAME};"
    #echo "CREATE DATABASE ${DBNAME};"
    echo "use ${DBNAME};"

    echo "Setting speed settings" >&2
    echo "SET FOREIGN_KEY_CHECKS = 0;"
    echo "SET UNIQUE_CHECKS = 0;"
    echo "SET AUTOCOMMIT = 0;"

    echo "Loading data (this will take a while)" >&2
    cat $DATA | gzip -d

    echo "Resetting speed settings" >&2
    echo "SET FOREIGN_KEY_CHECKS = 1;"
    echo "SET UNIQUE_CHECKS = 1;"
    echo "COMMIT;"

    echo "Done." >&2
} | pv -ps $SIZE | mysql --password=$DBPASS --user=$DBUSER

# simple one-line examples:
# pv /home/dnsb/dbtest.sql.gz | gunzip | mysql --password=Apdasdadti --user=root dbname
# pv dump.sql.gz | zcat | mysql -u user -ppasswd database

# this one is also a decent example: https://github.com/servercollective/import-large-mysql-dump/blob/master/import.sh

#ddl="$ddl set global net_buffer_length=1000000; "
#ddl="$ddl set global max_allowed_packet=1000000000; "

Tuesday, February 27, 2018

mysql backup: mysqldump for large data

mysqldump is slowed down by the fact that it reads and writes at the same time.
Writing is reduced somewhat if we use gzip.
Additionally you can use the pv command, which applies a larger write buffer and also shows a progress bar.
For moving the dump to another server use rsync, which can resume the transfer after the connection drops.
mysqldump -T backs up tables into separate files.

You can also dump directly to another server (a combined sketch follows below).
To listen for an incoming dump on one host run:
nc -l 7878 > mysql-dump.sql
Then on your DB host, run
mysqldump $OPTS | nc myhost.mydomain.com 7878
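
A hedged sketch combining the notes above; the database name, paths and host below are made-up examples:

# dump consistently (InnoDB), compress, and watch progress with pv
mysqldump --single-transaction --quick mydb | gzip | pv > /backup/mydb.sql.gz
# move it to the other server; rsync --partial can resume an interrupted copy
rsync --partial --progress /backup/mydb.sql.gz user@otherhost:/backup/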

Thursday, March 30, 2017

mysql similarity measurement with levenshtein

The Levenshtein distance between two strings is the minimum number of operations needed to transform one string into the other, where an operation may be insertion, deletion or substitution of one character. Jason Rust published this MySQL algorithm for it at http://www.codejanitor.com/wp/.

CREATE FUNCTION levenshtein( s1 VARCHAR(255), s2 VARCHAR(255) )
  RETURNS INT
  DETERMINISTIC
  BEGIN
    DECLARE s1_len, s2_len, i, j, c, c_temp, cost INT;
    DECLARE s1_char CHAR;
    -- max strlen=255
    DECLARE cv0, cv1 VARBINARY(256);
    SET s1_len = CHAR_LENGTH(s1), s2_len = CHAR_LENGTH(s2), cv1 = 0x00, j = 1, i = 1, c = 0;
    IF s1 = s2 THEN
      RETURN 0;
    ELSEIF s1_len = 0 THEN
      RETURN s2_len;
    ELSEIF s2_len = 0 THEN
      RETURN s1_len;
    ELSE
      WHILE j <= s2_len DO
        SET cv1 = CONCAT(cv1, UNHEX(HEX(j))), j = j + 1;
      END WHILE;
      WHILE i <= s1_len DO
        SET s1_char = SUBSTRING(s1, i, 1), c = i, cv0 = UNHEX(HEX(i)), j = 1;
        WHILE j <= s2_len DO
          SET c = c + 1;
          IF s1_char = SUBSTRING(s2, j, 1) THEN 
            SET cost = 0; ELSE SET cost = 1;
          END IF;
          SET c_temp = CONV(HEX(SUBSTRING(cv1, j, 1)), 16, 10) + cost;
          IF c > c_temp THEN SET c = c_temp; END IF;
            SET c_temp = CONV(HEX(SUBSTRING(cv1, j+1, 1)), 16, 10) + 1;
            IF c > c_temp THEN 
              SET c = c_temp; 
            END IF;
            SET cv0 = CONCAT(cv0, UNHEX(HEX(c))), j = j + 1;
        END WHILE;
        SET cv1 = cv0, i = i + 1;
      END WHILE;
    END IF;
    RETURN c;
  END; 

Helper function:
 
CREATE FUNCTION levenshtein_ratio( s1 VARCHAR(255), s2 VARCHAR(255) )
  RETURNS INT
  DETERMINISTIC
  BEGIN
    DECLARE s1_len, s2_len, max_len INT;
    SET s1_len = LENGTH(s1), s2_len = LENGTH(s2);
    IF s1_len > s2_len THEN 
      SET max_len = s1_len; 
    ELSE 
      SET max_len = s2_len; 
    END IF;
    RETURN ROUND((1 - LEVENSHTEIN(s1, s2) / max_len) * 100);
  END; 
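
A quick usage example once both functions are created (the mydb schema name is just a placeholder for wherever the functions live):

# expected output: distance = 3, similarity = 57 (%)
mysql mydb -e "SELECT levenshtein('kitten', 'sitting') AS distance,
               levenshtein_ratio('kitten', 'sitting') AS similarity;"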





When creating these functions you have to override your ; delimiter with something like $$, otherwise the client ends the statement at the first ; inside the function body.
After your function definition, you can set the delimiter back to ;.
This should work:

DELIMITER $$
CREATE FUNCTION F_Dist3D (x1 decimal, y1 decimal) 
RETURNS decimal
DETERMINISTIC
BEGIN 
  DECLARE dist decimal;
  SET dist = SQRT(x1 - y1);
  RETURN dist;
END$$
DELIMITER ;

Friday, February 10, 2017

mysql config tuning

https://github.com/major/MySQLTuner-perl

http://www.mysqlcalculator.com/

http://www.tecmint.com/mysql-mariadb-performance-tuning-and-optimization/3/

http://stackoverflow.com/questions/13259275/mysql-tmp-table-size-max-heap-table-size

mysql replication

MHA tool - managing master/slave
https://code.google.com/p/mysql-master-ha/
https://mysqlstepbystep.com/2015/06/01/mysql-high-available-with-mha-2/

master/slave topology
https://severalnines.com/resources/tutorials/mysql-replication-high-availability-tutorial

tutorials
http://www.servermom.org/master-slave-mysql-replication-tutorial/
https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-in-mysql
http://dba.stackexchange.com/questions/8680/what-is-the-best-way-to-create-mysql-master-slave-replication-setup-and-troubles
http://www.tecmint.com/how-to-setup-mysql-master-slave-replication-in-rhel-centos-fedora/
http://www.techrepublic.com/blog/tr-dojo/set-up-mysql-database-replication-to-ensure-up-to-date-backups/

On the master, lock the database read-only and flush everything.
On the slave, stop the mysqld daemon (very important!!!).
rsync the data store directory from the master to the slave.
Start the slave.
Unlock the master.
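
A rough sketch of those steps (hostnames, datadir path and service name are assumptions; FLUSH TABLES WITH READ LOCK must be issued in an interactive session that stays open while copying, because the lock is released when that session ends):

# on the master, in a mysql session that stays open:
#   FLUSH TABLES WITH READ LOCK;
# on the slave:
sudo systemctl stop mysqld
rsync -a master.example.com:/var/lib/mysql/ /var/lib/mysql/
sudo systemctl start mysqld
# back on the master, in the same session:
#   UNLOCK TABLES;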

Monday, January 30, 2017

UUID

mysql uuid() or PHP https://gist.github.com/dahnielson/508447/b271e314c3c3190b0e31bc7f2f4f35da3c04f91c

Version 1 (date-time and MAC address)
Version 2 (date-time and MAC Address, DCE security version)
Versions 3 and 5 (namespace name-based)
Version 4 (random)
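
For reference, MySQL's built-in UUID() returns a version-1 value; a quick example (UUID_TO_BIN with the swap flag needs MySQL 8.0+):

# version-1 UUID, plus a binary form with the time parts swapped for better index locality
mysql -e "SELECT UUID() AS v1_uuid, HEX(UUID_TO_BIN(UUID(), 1)) AS ordered_bin;"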

Thursday, October 20, 2016

MySQL partitioning by column

ALTER TABLE `products` DROP PRIMARY KEY , ADD PRIMARY KEY ( `id` , `category` );
ALTER TABLE `products` PARTITION BY KEY(category) PARTITIONS 6;
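
A follow-up sketch to verify how rows ended up spread across the new partitions (standard information_schema view):

mysql -e "SELECT PARTITION_NAME, TABLE_ROWS
          FROM information_schema.PARTITIONS
          WHERE TABLE_NAME = 'products';"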

Wednesday, January 27, 2016