
Getting Started with MySQL: Performance Optimization and Tuning

Posted on 2025-05-11 | Category: Databases

MySQL Performance Optimization and Tuning

Database performance is the foundation of overall system performance. A well-tuned MySQL database can comfortably handle highly concurrent workloads, while a poorly configured one can become the bottleneck of the entire system. This article digs into the main areas of MySQL performance optimization, from query tuning to server configuration, to help you build a high-performance database system.

1. Query Performance Optimization

Reading EXPLAIN Execution Plans in Depth

EXPLAIN output explained:

-- Basic EXPLAIN usage
EXPLAIN 
SELECT e.emp_name, d.dept_name, p.project_name
FROM employees e
JOIN departments d ON e.dept_id = d.dept_id
JOIN projects p ON e.emp_id = p.manager_id
WHERE e.salary > 5000
  AND d.location = '北京'
ORDER BY e.hire_date DESC
LIMIT 100;

-- Field-by-field breakdown of EXPLAIN output
CREATE TABLE explain_analysis (
    id INT PRIMARY KEY,
    select_type VARCHAR(50) COMMENT 'Query type',
    table_name VARCHAR(64) COMMENT 'Table name',
    partitions VARCHAR(255) COMMENT 'Matching partitions',
    type VARCHAR(30) COMMENT 'Join/access type',
    possible_keys VARCHAR(255) COMMENT 'Indexes that could be used',
    `key` VARCHAR(255) COMMENT 'Index actually used',
    key_len INT COMMENT 'Length of the index actually used',
    ref VARCHAR(255) COMMENT 'Columns compared against the index',
    `rows` INT COMMENT 'Estimated number of rows to examine',
    filtered DECIMAL(5,2) COMMENT 'Percentage of rows filtered by the condition',
    extra VARCHAR(255) COMMENT 'Additional information'
);

A closer look at access types:

-- Examples of the different access types

-- 1. system & const (best)
EXPLAIN SELECT * FROM departments WHERE dept_id = 1;
-- type: const, lookup via the primary key or a unique index

-- 2. eq_ref (excellent)
EXPLAIN SELECT * FROM employees e 
JOIN departments d ON e.dept_id = d.dept_id;
-- type: eq_ref, at most one matching row in the joined table for each row of the driving table

-- 3. ref (good)
EXPLAIN SELECT * FROM employees WHERE dept_id = 1;
-- type: ref, lookup via a non-unique index

-- 4. range (good)
EXPLAIN SELECT * FROM employees 
WHERE hire_date BETWEEN '2020-01-01' AND '2023-01-01';
-- type: range, index range scan

-- 5. index (moderate)
EXPLAIN SELECT dept_id FROM employees;
-- type: index, full index scan

-- 6. ALL (worst)
EXPLAIN SELECT * FROM employees WHERE salary > 5000;
-- type: ALL, full table scan (when no suitable index exists)

Deeper analysis with EXPLAIN FORMAT=JSON:

-- Use the JSON format to get a more detailed execution plan
EXPLAIN FORMAT=JSON
SELECT 
    e.emp_name,
    d.dept_name,
    COUNT(p.project_id) as project_count
FROM employees e
JOIN departments d ON e.dept_id = d.dept_id
LEFT JOIN projects p ON e.emp_id = p.manager_id
WHERE e.hire_date >= '2020-01-01'
  AND d.budget > 100000
GROUP BY e.emp_id, e.emp_name, d.dept_name
HAVING project_count >= 2
ORDER BY e.salary DESC;

-- Key information in the JSON plan
-- (EXPLAIN cannot be used as a subquery; run it directly and read the estimated cost
--  from the JSON output at the path $.query_block.cost_info.query_cost)
EXPLAIN FORMAT=JSON SELECT * FROM employees WHERE dept_id = 1;
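
Starting with MySQL 8.0.18, EXPLAIN ANALYZE goes one step further: it actually executes the statement and reports measured row counts and timings alongside the plan, which is often the quickest way to spot a misestimated step.

EXPLAIN ANALYZE
SELECT e.emp_name, d.dept_name
FROM employees e
JOIN departments d ON e.dept_id = d.dept_id
WHERE e.salary > 5000;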

Slow Query Log Analysis and Optimization

Slow query log configuration:

-- Check the current slow query log settings
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'min_examined_row_limit';
SHOW VARIABLES LIKE 'log_queries_not_using_indexes';

-- Configure the slow query log dynamically (no restart required)
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1.0;  -- 1 second
SET GLOBAL min_examined_row_limit = 100;
SET GLOBAL log_queries_not_using_indexes = 1;

-- Persistent configuration (in my.cnf)
/*
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1.0
log_queries_not_using_indexes = 1
min_examined_row_limit = 100
log_slow_admin_statements = 1
*/

Analyzing the slow query log:

-- Analyze the slow query log with pt-query-digest (an external Percona Toolkit tool)
-- pt-query-digest /var/log/mysql/slow.log > slow_report.txt
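-- The mysqldumpslow utility bundled with MySQL also gives a quick summary
-- (illustrative invocation; adjust the log path to your environment):
-- mysqldumpslow -s t -t 10 /var/log/mysql/slow.log   (top 10 statements by total query time)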

-- Analyze slow queries with MySQL itself
CREATE TABLE slow_log_analysis (
    query_time DECIMAL(10,6),
    lock_time DECIMAL(10,6),
    rows_sent INT,
    rows_examined INT,
    db VARCHAR(512),
    query TEXT,
    timestamp TIMESTAMP
);

-- Load the slow log into the analysis table
-- (the raw slow log file is not tab-delimited, so it must be pre-processed into this
--  format first; alternatively, set log_output='TABLE' and query mysql.slow_log directly)
LOAD DATA INFILE '/var/log/mysql/slow.log'
INTO TABLE slow_log_analysis
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';

-- Analyze slow query patterns
SELECT 
    LEFT(query, 100) as query_sample,
    COUNT(*) as query_count,
    AVG(query_time) as avg_time,
    AVG(rows_examined) as avg_rows_examined,
    AVG(rows_sent) as avg_rows_sent
FROM slow_log_analysis
GROUP BY LEFT(query, 100)
ORDER BY avg_time DESC
LIMIT 10;

Common slow query patterns and how to fix them:

-- 1. Eliminating full table scans
-- Slow query
SELECT * FROM employees WHERE YEAR(hire_date) = 2023;

-- Optimized
SELECT * FROM employees 
WHERE hire_date BETWEEN '2023-01-01' AND '2023-12-31';
-- Add an index: ALTER TABLE employees ADD INDEX idx_hire_date (hire_date);

-- 2. Rewriting OR conditions
-- Slow query
SELECT * FROM employees 
WHERE dept_id = 1 OR dept_id = 2 OR salary > 10000;

-- Optimized
SELECT * FROM employees WHERE dept_id IN (1, 2)
UNION
SELECT * FROM employees WHERE salary > 10000;

-- 3. Pagination
-- Slow query (with a large offset)
SELECT * FROM employees ORDER BY emp_id LIMIT 10000, 20;

-- Optimized
SELECT * FROM employees 
WHERE emp_id > 10000 
ORDER BY emp_id 
LIMIT 20;

-- 4. LIKE with a leading wildcard
-- Slow query
SELECT * FROM employees WHERE emp_name LIKE '%张%';

-- Possible fixes
-- Use a full-text index (for Chinese text, build it with the ngram parser) or store a reversed copy of the column
ALTER TABLE employees ADD FULLTEXT idx_name_ft (emp_name) WITH PARSER ngram;
SELECT * FROM employees 
WHERE MATCH(emp_name) AGAINST('张' IN BOOLEAN MODE);
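
The reversed-storage idea mentioned above only helps suffix searches (LIKE '%X'), not arbitrary substring searches. A minimal sketch, assuming MySQL 5.7+ generated-column support; the column name emp_name_rev and its length are illustrative:

-- Maintain a reversed copy of the name and index it
ALTER TABLE employees
    ADD COLUMN emp_name_rev VARCHAR(100)
        GENERATED ALWAYS AS (REVERSE(emp_name)) STORED,
    ADD INDEX idx_name_rev (emp_name_rev);

-- A suffix search becomes an index-friendly prefix search on the reversed column
SELECT * FROM employees
WHERE emp_name_rev LIKE CONCAT(REVERSE('张'), '%');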

Query Rewriting Techniques and Patterns

Common rewrite patterns:

-- 1. Use EXISTS instead of IN (when the subquery returns many rows)
-- Original query
SELECT * FROM employees 
WHERE dept_id IN (SELECT dept_id FROM departments WHERE budget > 100000);

-- Optimized
SELECT e.* FROM employees e
WHERE EXISTS (
    SELECT 1 FROM departments d 
    WHERE d.dept_id = e.dept_id AND d.budget > 100000
);

-- 2. Use a JOIN instead of a correlated subquery
-- Original query
SELECT emp_name, 
       (SELECT dept_name FROM departments WHERE dept_id = employees.dept_id) as dept_name
FROM employees;

-- Optimized
SELECT e.emp_name, d.dept_name
FROM employees e
LEFT JOIN departments d ON e.dept_id = d.dept_id;

-- 3. Avoid SELECT *
-- Original query
SELECT * FROM employees WHERE dept_id = 1;

-- Optimized
SELECT emp_id, emp_name, email, salary 
FROM employees WHERE dept_id = 1;

-- 4. Use batch statements instead of row-by-row loops
-- Bad practice (looping in the application)
-- FOR each id IN ids: 
--   UPDATE employees SET status = 1 WHERE emp_id = id

-- Good practice
UPDATE employees SET status = 1 WHERE emp_id IN (1, 2, 3, 4, 5);
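
When each row needs a different value, the loop can usually still be collapsed into a single statement; a sketch using a CASE expression (the id/value pairs are illustrative):

UPDATE employees
SET status = CASE emp_id
        WHEN 1 THEN 1
        WHEN 2 THEN 0
        WHEN 3 THEN 1
    END
WHERE emp_id IN (1, 2, 3);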

Advanced query rewrites:

-- 5. Use CTEs/derived tables to simplify complex queries
-- Original complex query
SELECT 
    e.emp_name,
    d.dept_name,
    (SELECT COUNT(*) FROM projects p WHERE p.manager_id = e.emp_id) as project_count
FROM employees e
JOIN departments d ON e.dept_id = d.dept_id
WHERE e.salary > (SELECT AVG(salary) FROM employees WHERE dept_id = e.dept_id);

-- Rewritten with CTEs (common table expressions, MySQL 8.0+)
WITH dept_stats AS (
    SELECT 
        dept_id,
        AVG(salary) as avg_salary
    FROM employees
    GROUP BY dept_id
),
emp_projects AS (
    SELECT 
        manager_id,
        COUNT(*) as project_count
    FROM projects
    GROUP BY manager_id
)
SELECT 
    e.emp_name,
    d.dept_name,
    COALESCE(ep.project_count, 0) as project_count
FROM employees e
JOIN departments d ON e.dept_id = d.dept_id
JOIN dept_stats ds ON e.dept_id = ds.dept_id
LEFT JOIN emp_projects ep ON e.emp_id = ep.manager_id
WHERE e.salary > ds.avg_salary;

-- 6. Condition order and selectivity
-- Original query (low-selectivity condition listed first)
SELECT * FROM employees 
WHERE status = 1  -- perhaps 80% of rows have status=1
  AND hire_date > '2023-01-01'  -- only 5% of rows match
  AND dept_id = 2;  -- only 10% of rows match

-- Rewritten with the most selective conditions first
SELECT * FROM employees 
WHERE dept_id = 2  -- filters out ~90% of rows first
  AND hire_date > '2023-01-01'  -- filters further within the remaining rows
  AND status = 1;  -- applied last
-- Note: the optimizer does not evaluate AND conditions strictly in written order;
-- the bigger win is making sure an index exists on the most selective columns (here dept_id, hire_date)

Pagination Query Optimization

The problem with traditional pagination:

-- Problem: performance degrades badly with large offsets
SELECT * FROM employees 
ORDER BY emp_id 
LIMIT 100000, 20;  -- has to scan 100000+20 rows

-- Optimize with a covering index
SELECT emp_id, emp_name, email  -- select only the columns you need
FROM employees 
ORDER BY emp_id 
LIMIT 100000, 20;

-- Add the covering index
ALTER TABLE employees ADD INDEX idx_cover (emp_id, emp_name, email);

Efficient pagination approaches:

-- Option 1: keyset (cursor) pagination (recommended)
-- First page
SELECT * FROM employees 
ORDER BY emp_id 
LIMIT 20;

-- Take the emp_id of the last row returned, e.g. 20
-- Second page
SELECT * FROM employees 
WHERE emp_id > 20  -- use it as the cursor
ORDER BY emp_id 
LIMIT 20;

-- Option 2: subquery wrapped in a derived table
-- (MySQL does not allow LIMIT directly inside an IN subquery, so wrap it in a derived table)
SELECT * FROM employees 
WHERE emp_id IN (
    SELECT emp_id FROM (
        SELECT emp_id FROM employees 
        ORDER BY emp_id 
        LIMIT 100000, 20
    ) AS page_ids
)
ORDER BY emp_id;

-- Option 3: deferred join
SELECT e.* 
FROM employees e
JOIN (
    SELECT emp_id 
    FROM employees 
    ORDER BY emp_id 
    LIMIT 100000, 20
) tmp ON e.emp_id = tmp.emp_id;

-- Option 4: business-level pagination (by time range)
SELECT * FROM employees 
WHERE hire_date BETWEEN '2023-01-01' AND '2023-12-31'
ORDER BY emp_id 
LIMIT 20;

Complex pagination scenarios:

-- Pagination with a multi-column sort
CREATE INDEX idx_dept_hire_salary ON employees (dept_id, hire_date, salary);

-- Keyset pagination implementation
-- First page
SELECT * FROM employees 
WHERE dept_id = 1
ORDER BY hire_date DESC, salary DESC, emp_id
LIMIT 20;

-- Suppose the last row returned was: hire_date='2023-05-20', salary=8000, emp_id=150
-- Second page
SELECT * FROM employees 
WHERE dept_id = 1
  AND (
      hire_date < '2023-05-20' 
      OR (hire_date = '2023-05-20' AND salary < 8000)
      OR (hire_date = '2023-05-20' AND salary = 8000 AND emp_id > 150)
  )
ORDER BY hire_date DESC, salary DESC, emp_id
LIMIT 20;

Strategies for Very Large Data Sets

Batch processing:

-- Update a large table in batches
DELIMITER //

CREATE PROCEDURE BatchUpdateEmployees()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE batch_size INT DEFAULT 1000;
    DECLARE current_id INT DEFAULT 0;
    DECLARE max_id INT;
    
    -- Get the maximum ID
    SELECT MAX(emp_id) INTO max_id FROM employees;
    
    WHILE current_id < max_id DO
        -- Update one batch
        UPDATE employees 
        SET last_updated = NOW()
        WHERE emp_id > current_id 
          AND emp_id <= current_id + batch_size;
        
        -- Record progress
        INSERT INTO batch_process_log (process_name, processed_id, processed_at)
        VALUES ('employee_update', current_id + batch_size, NOW());
        
        SET current_id = current_id + batch_size;
        
        -- Sleep briefly to reduce the impact on the system
        DO SLEEP(0.1);
    END WHILE;
    
END //

DELIMITER ;
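
The procedure logs its progress to a batch_process_log table that is not defined above; a minimal sketch of that table and of invoking the procedure (column types are assumptions):

CREATE TABLE IF NOT EXISTS batch_process_log (
    log_id INT AUTO_INCREMENT PRIMARY KEY,
    process_name VARCHAR(100),
    processed_id INT,
    processed_at DATETIME
);

CALL BatchUpdateEmployees();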

Data archiving:

-- Archive historical data
CREATE TABLE employees_archive LIKE employees;

-- Archiving procedure
DELIMITER //

CREATE PROCEDURE ArchiveOldEmployees(IN cutoff_date DATE)
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        RESIGNAL;
    END;
    
    START TRANSACTION;
    
    -- Copy rows into the archive table
    INSERT INTO employees_archive 
    SELECT * FROM employees 
    WHERE hire_date < cutoff_date;
    
    -- Delete the archived rows from the source table
    DELETE FROM employees 
    WHERE hire_date < cutoff_date;
    
    COMMIT;
    
END //

DELIMITER ;
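
An illustrative invocation, archiving everyone hired before a cutoff date (the date is an example value):

CALL ArchiveOldEmployees('2015-01-01');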

2. Index Optimization in Practice

Index Creation Strategy and Principles

Choosing what to index:

-- Information used when deciding what to index
SELECT 
    TABLE_NAME,
    COLUMN_NAME,
    DATA_TYPE,
    IS_NULLABLE,
    COLUMN_DEFAULT,
    CHARACTER_MAXIMUM_LENGTH,
    NUMERIC_PRECISION,
    NUMERIC_SCALE
FROM INFORMATION_SCHEMA.COLUMNS 
WHERE TABLE_SCHEMA = 'your_database'
  AND TABLE_NAME = 'your_table';

-- Calculate index selectivity
SELECT 
    COUNT(DISTINCT dept_id) as distinct_values,
    COUNT(*) as total_rows,
    ROUND(COUNT(DISTINCT dept_id) / COUNT(*) * 100, 2) as selectivity_percent
FROM employees;

-- Selectivity guidelines:
-- > 20%: good candidate for an index
-- 5%-20%: decide based on query frequency
-- < 5%: usually not worth a standalone index

Composite index design:

-- Example of well-designed composite indexes
CREATE TABLE orders (
    order_id BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL,
    order_date DATETIME NOT NULL,
    status ENUM('pending', 'paid', 'shipped', 'delivered') NOT NULL,
    total_amount DECIMAL(12,2) NOT NULL,
    -- other columns...
    
    -- Composite indexes
    INDEX idx_customer_date (customer_id, order_date),           -- customer order lookups
    INDEX idx_date_status (order_date, status),                  -- orders by date and status
    INDEX idx_status_date (status, order_date),                  -- status + date queries
    INDEX idx_customer_status_date (customer_id, status, order_date) -- covers several query shapes
);

-- Check which index the optimizer picks
EXPLAIN SELECT * FROM orders 
WHERE customer_id = 123 
  AND order_date >= '2023-01-01'
  AND status = 'shipped';
-- Likely uses idx_customer_date or idx_customer_status_date

When Indexes Stop Being Used

Common ways to defeat an index:

-- 1. Applying a function to the indexed column
-- Index cannot be used
SELECT * FROM employees WHERE YEAR(hire_date) = 2023;
SELECT * FROM employees WHERE UPPER(emp_name) = 'ZHANGSAN';

-- Optimized
SELECT * FROM employees WHERE hire_date BETWEEN '2023-01-01' AND '2023-12-31';
SELECT * FROM employees WHERE emp_name = 'zhangsan';  -- handle case-insensitivity in the application, or use a case-insensitive collation

-- 2. Implicit type conversion
-- Index cannot be used (when phone is a VARCHAR column)
SELECT * FROM users WHERE phone = 13800138000;

-- Optimized
SELECT * FROM users WHERE phone = '13800138000';

-- 3. Misusing OR conditions
-- The index may not be used
SELECT * FROM employees 
WHERE dept_id = 1 OR emp_name LIKE '张%';

-- Optimized
SELECT * FROM employees WHERE dept_id = 1
UNION
SELECT * FROM employees WHERE emp_name LIKE '张%';

-- 4. Using NOT, !=, <>
-- The index is usually not used
SELECT * FROM employees WHERE dept_id != 1;

-- Possible fix: reconsider whether the exclusion is really needed, or express it as ranges
SELECT * FROM employees WHERE dept_id > 1 OR dept_id < 1;

-- 5. LIKE with a leading wildcard
-- Index cannot be used
SELECT * FROM employees WHERE emp_name LIKE '%张%';

-- Possible fixes: use a full-text index or reversed storage (see the sketch earlier)

Composite index pitfalls:

-- Create a test index
CREATE INDEX idx_dept_hire_salary ON employees (dept_id, hire_date, salary);

-- 1. Violating the leftmost prefix rule
-- The composite index cannot be used
SELECT * FROM employees WHERE hire_date > '2023-01-01';
-- Without a condition on dept_id (the leftmost column), this composite index cannot drive the lookup;
-- a separate index on hire_date would be needed

-- 2. Skipping a middle column
-- The index is only partially used (only the dept_id part)
SELECT * FROM employees 
WHERE dept_id = 1 AND salary > 5000;

-- 3. Columns after a range condition cannot be used for the index seek
-- Only dept_id and hire_date drive the index lookup; salary is applied as a filter on the index
SELECT * FROM employees 
WHERE dept_id = 1 
  AND hire_date > '2023-01-01'
  AND salary > 5000;

-- 4. Column order matters
-- Good order: equality-filtered columns first, range-filtered columns last
CREATE INDEX idx_good_order ON employees (dept_id, salary, hire_date);

Prefix Indexes and Functional Indexes

Prefix indexes:

-- Prefix index on a text column
-- Work out a suitable prefix length
SELECT 
    COUNT(DISTINCT LEFT(emp_name, 5)) as distinct_5,
    COUNT(DISTINCT LEFT(emp_name, 10)) as distinct_10,
    COUNT(DISTINCT emp_name) as distinct_full,
    COUNT(*) as total_rows
FROM employees;

-- Create the prefix index
CREATE INDEX idx_emp_name_prefix ON employees (emp_name(10));

-- Limitations of prefix indexes:
-- they cannot be used to ORDER BY or GROUP BY the full column
-- they cannot act as covering indexes

-- Refresh and inspect index statistics
ANALYZE TABLE employees;
SHOW INDEX FROM employees;
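
To see the ORDER BY limitation in practice, compare the plan of a sort on the full column; with only the prefix index available, the Extra column of EXPLAIN is expected to show "Using filesort" (actual output depends on your data and other indexes):

EXPLAIN SELECT emp_id, emp_name
FROM employees
ORDER BY emp_name
LIMIT 10;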

Functional indexes (MySQL 8.0.13+):

-- Table used for the functional index examples
CREATE TABLE products (
    product_id INT PRIMARY KEY,
    product_name VARCHAR(200),
    product_data JSON,
    created_at TIMESTAMP
);

-- Functional index examples
CREATE INDEX idx_product_name_upper 
ON products ((UPPER(product_name)));

CREATE INDEX idx_product_price 
ON products ((CAST(JSON_EXTRACT(product_data, '$.price') AS DECIMAL(10,2))));

CREATE INDEX idx_created_date 
ON products ((DATE(created_at)));

-- Queries that can use the functional indexes
SELECT * FROM products 
WHERE UPPER(product_name) = UPPER('iPhone 14');

SELECT * FROM products 
WHERE CAST(JSON_EXTRACT(product_data, '$.price') AS DECIMAL(10,2)) > 1000;

SELECT * FROM products 
WHERE DATE(created_at) = '2023-01-01';

Index Maintenance and Rebuild Strategy

Index monitoring:

-- Monitor index usage
SELECT 
    OBJECT_SCHEMA,
    OBJECT_NAME,
    INDEX_NAME,
    COUNT_FETCH,
    COUNT_INSERT,
    COUNT_UPDATE,
    COUNT_DELETE
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE OBJECT_SCHEMA = 'your_database'
ORDER BY COUNT_FETCH DESC;

-- Find unused indexes
SELECT 
    OBJECT_SCHEMA,
    OBJECT_NAME,
    INDEX_NAME
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE INDEX_NAME IS NOT NULL
  AND COUNT_FETCH = 0
  AND COUNT_INSERT = 0
  AND COUNT_UPDATE = 0
  AND COUNT_DELETE = 0;

Index maintenance operations:

-- Index rebuilds
ALTER TABLE employees ENGINE=InnoDB;  -- rebuild the table, including all indexes
ALTER TABLE employees DROP INDEX idx_old, ADD INDEX idx_new (columns);
OPTIMIZE TABLE employees;  -- rebuild the table and defragment it

-- Online index operations (MySQL 5.6+)
ALTER TABLE employees 
ADD INDEX idx_new_column (new_column),
ALGORITHM=INPLACE, 
LOCK=NONE;

-- Refresh index statistics
ANALYZE TABLE employees;  -- update statistics

-- Monitor index size (per table; INFORMATION_SCHEMA.TABLES has no per-index breakdown)
SELECT 
    TABLE_NAME,
    ROUND(INDEX_LENGTH / 1024 / 1024, 2) as index_size_mb
FROM INFORMATION_SCHEMA.TABLES 
WHERE TABLE_SCHEMA = 'your_database'
ORDER BY index_size_mb DESC;
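
For a per-index breakdown, the InnoDB persistent statistics table can be queried instead; stat_value for stat_name = 'size' is measured in pages, so multiply by the page size:

SELECT
    database_name,
    table_name,
    index_name,
    ROUND(stat_value * @@innodb_page_size / 1024 / 1024, 2) AS index_size_mb
FROM mysql.innodb_index_stats
WHERE stat_name = 'size'
  AND database_name = 'your_database'
ORDER BY index_size_mb DESC;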

Full-Text and Spatial Indexes

Full-text indexes:

-- Create a full-text index (for Chinese text, consider adding WITH PARSER ngram)
ALTER TABLE articles ADD FULLTEXT idx_content_ft (title, content);

-- Full-text queries
SELECT 
    article_id,
    title,
    MATCH(title, content) AGAINST('数据库 优化') as relevance_score
FROM articles
WHERE MATCH(title, content) AGAINST('+数据库 +优化' IN BOOLEAN MODE)
ORDER BY relevance_score DESC;

-- Full-text index configuration
-- Check the minimum/maximum token sizes
SHOW VARIABLES LIKE 'innodb_ft_min_token_size';
SHOW VARIABLES LIKE 'innodb_ft_max_token_size';

-- Stopword configuration
SHOW VARIABLES LIKE 'innodb_ft_enable_stopword';
SHOW VARIABLES LIKE 'innodb_ft_server_stopword_table';

-- Rebuild the full-text index
ALTER TABLE articles DROP INDEX idx_content_ft;
ALTER TABLE articles ADD FULLTEXT idx_content_ft (title, content);

Spatial indexes:

-- Create a table with spatial data
CREATE TABLE locations (
    location_id INT PRIMARY KEY AUTO_INCREMENT,
    location_name VARCHAR(100),
    coordinates POINT NOT NULL,
    area POLYGON NOT NULL,  -- SPATIAL index columns must be NOT NULL
    SPATIAL INDEX idx_coordinates (coordinates),
    SPATIAL INDEX idx_area (area)
);

-- Spatial index queries
SELECT 
    location_name,
    ST_AsText(coordinates) as coords
FROM locations
WHERE ST_Contains(
    ST_GeomFromText('POLYGON((116.3 39.8, 116.5 39.8, 116.5 40.0, 116.3 40.0, 116.3 39.8))'),
    coordinates
);

-- Distance queries
SELECT 
    l1.location_name as place1,
    l2.location_name as place2,
    ST_Distance_Sphere(l1.coordinates, l2.coordinates) as distance
FROM locations l1
JOIN locations l2 ON l1.location_id < l2.location_id
WHERE ST_Distance_Sphere(l1.coordinates, l2.coordinates) < 5000;  -- within 5 km

3. Server Parameter Tuning

Memory Parameters

InnoDB buffer pool:

-- Check the current buffer pool configuration
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_buffer_pool_instances';
SHOW VARIABLES LIKE 'innodb_buffer_pool_chunk_size';

-- Review the current buffer pool size and instance count
SELECT 
    @@innodb_buffer_pool_size / 1024 / 1024 / 1024 as current_buffer_pool_gb,
    @@innodb_buffer_pool_instances as buffer_pool_instances;

-- Monitor buffer pool usage
SHOW ENGINE INNODB STATUS\G
-- look at the BUFFER POOL AND MEMORY section

-- Resize the buffer pool online (MySQL 5.7+)
SET GLOBAL innodb_buffer_pool_size = 8589934592;  -- 8 GB

-- Monitor the buffer pool hit rate
-- (on MySQL 8.0, these status tables live in performance_schema.global_status rather than information_schema)
SELECT 
    (1 - (variable_value / (
        SELECT variable_value 
        FROM information_schema.global_status 
        WHERE variable_name = 'innodb_buffer_pool_read_requests'
    ))) * 100 as buffer_pool_hit_rate
FROM information_schema.global_status 
WHERE variable_name = 'innodb_buffer_pool_reads';
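
A rough input for sizing the buffer pool is the total data-plus-index volume on the instance (a starting point only; the hot working set matters more than raw size):

SELECT
    ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_data_index_gb
FROM information_schema.TABLES
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');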

Suggested memory configuration:

# my.cnf memory configuration example
[mysqld]
# Buffer pool (typically 70-80% of system memory on a dedicated server)
innodb_buffer_pool_size = 16G
innodb_buffer_pool_instances = 8
innodb_buffer_pool_chunk_size = 128M

# Log buffer
innodb_log_buffer_size = 256M

# Per-session sort/read/join buffers
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 2M
join_buffer_size = 2M

# In-memory temporary tables
tmp_table_size = 64M
max_heap_table_size = 64M

# Thread and table caches
thread_cache_size = 100
table_open_cache = 4000
table_definition_cache = 2000

I/O Parameters

InnoDB I/O settings:

-- Check I/O-related settings
SHOW VARIABLES LIKE 'innodb_io_capacity%';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW VARIABLES LIKE 'innodb_flush_method';
SHOW VARIABLES LIKE 'innodb_read_io_threads';
SHOW VARIABLES LIKE 'innodb_write_io_threads';

-- Monitor I/O performance
SHOW STATUS LIKE 'innodb%io%';

Suggested I/O configuration:

# my.cnf I/O configuration example
[mysqld]
# I/O capacity (tune to the storage device)
# SSD: 2000-10000, HDD: 200-500
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000

# Redo log flush policy
# 1: safest (flush to disk on every commit)
# 2: compromise (write on commit, flush roughly once per second)
# 0: fastest (write and flush roughly once per second; a crash can lose about 1 second of transactions)
innodb_flush_log_at_trx_commit = 1

# Flush method (O_DIRECT is the usual recommendation on Linux)
innodb_flush_method = O_DIRECT

# I/O threads
innodb_read_io_threads = 8
innodb_write_io_threads = 8

# Read-ahead
innodb_random_read_ahead = ON

# Doublewrite buffer (disabling can be considered on some SSD setups)
innodb_doublewrite = ON

Connections and Session Management

Connection settings:

-- Check connection-related settings
SHOW VARIABLES LIKE 'max_connections';
SHOW VARIABLES LIKE 'max_user_connections';
SHOW VARIABLES LIKE 'thread_cache_size';
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'interactive_timeout';

-- Monitor connection status
SHOW STATUS LIKE 'Threads_%';
SHOW PROCESSLIST;

-- Analyze current connections
SELECT 
    USER,
    HOST,
    DB,
    COMMAND,
    TIME,
    STATE,
    LEFT(INFO, 100) as query_sample
FROM INFORMATION_SCHEMA.PROCESSLIST 
WHERE COMMAND != 'Sleep'
ORDER BY TIME DESC;
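
When idle sessions pile up and hold connections, KILL statements can be generated from the process list; review the output before executing it, and treat the 600-second threshold as an example value:

SELECT CONCAT('KILL ', ID, ';') AS kill_statement
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Sleep'
  AND TIME > 600;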

Suggested connection configuration:

# my.cnf connection configuration example
[mysqld]
# Maximum connections
max_connections = 1000

# Maximum connections per user
max_user_connections = 500

# Thread cache
thread_cache_size = 100

# Timeouts
wait_timeout = 600
interactive_timeout = 600

# Connection error limit
max_connect_errors = 100000

# Skip reverse DNS lookups (recommended, improves connect performance)
skip_name_resolve = 1

Replication Parameters

Primary-replica replication:

-- Check replication settings
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'sync_binlog';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- Monitor replication status
SHOW SLAVE STATUS\G

-- Parallel replication settings
SHOW VARIABLES LIKE 'slave_parallel_workers';
SHOW VARIABLES LIKE 'slave_parallel_type';
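
With parallel replication enabled, per-worker state is also visible in performance_schema (MySQL 5.7+; the exact column set varies by version):

SELECT CHANNEL_NAME, WORKER_ID, SERVICE_STATE, LAST_ERROR_MESSAGE
FROM performance_schema.replication_applier_status_by_worker;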

Suggested replication configuration:

# my.cnf replication configuration example
[mysqld]
# Binary log format
binlog_format = ROW

# Binary log sync
sync_binlog = 1

# Parallel replication on the replica
slave_parallel_workers = 8
slave_parallel_type = LOGICAL_CLOCK

# Preserve commit order to keep the replica consistent with the source
slave_preserve_commit_order = 1

# Binary log retention (use one or the other; expire_logs_days is deprecated in MySQL 8.0)
expire_logs_days = 7
binlog_expire_logs_seconds = 604800

Monitoring Metrics as the Basis for Tuning

Key performance indicators:

-- Create a table to store performance metrics
CREATE TABLE performance_metrics (
    metric_id INT AUTO_INCREMENT PRIMARY KEY,
    metric_name VARCHAR(100) NOT NULL,
    metric_value DECIMAL(20,4),
    collected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    notes TEXT
);

-- Collect key metrics (on MySQL 8.0, read these counters from performance_schema.global_status)
INSERT INTO performance_metrics (metric_name, metric_value)
SELECT 'qps', VARIABLE_VALUE
FROM information_schema.GLOBAL_STATUS 
WHERE VARIABLE_NAME = 'Queries';

INSERT INTO performance_metrics (metric_name, metric_value)
SELECT 'tps', VARIABLE_VALUE
FROM information_schema.GLOBAL_STATUS 
WHERE VARIABLE_NAME = 'Com_commit';

-- Buffer pool hit rate
INSERT INTO performance_metrics (metric_name, metric_value)
SELECT 'buffer_pool_hit_rate', 
    (1 - ( 
        SELECT VARIABLE_VALUE 
        FROM information_schema.GLOBAL_STATUS 
        WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads'
    ) / (
        SELECT VARIABLE_VALUE 
        FROM information_schema.GLOBAL_STATUS 
        WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests'
    )) * 100;

-- Connection usage rate
INSERT INTO performance_metrics (metric_name, metric_value)
SELECT 'connection_usage_rate',
    ( 
        SELECT VARIABLE_VALUE 
        FROM information_schema.GLOBAL_STATUS 
        WHERE VARIABLE_NAME = 'Threads_connected'
    ) / (
        SELECT VARIABLE_VALUE 
        FROM information_schema.GLOBAL_VARIABLES 
        WHERE VARIABLE_NAME = 'max_connections'
    ) * 100;
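
Collection can be scheduled inside MySQL itself with the event scheduler; a minimal sketch that samples the connection count every minute, using the same status tables as above (requires the event scheduler to be enabled):

SET GLOBAL event_scheduler = ON;

CREATE EVENT IF NOT EXISTS ev_collect_metrics
ON SCHEDULE EVERY 1 MINUTE
DO
    INSERT INTO performance_metrics (metric_name, metric_value)
    SELECT 'threads_connected', VARIABLE_VALUE
    FROM information_schema.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Threads_connected';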

Tuning health-check checklist:

-- System health check query
SELECT 
    'Connections' as metric,
    CONCAT(
        (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = 'Threads_connected'),
        '/',
        (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_VARIABLES WHERE VARIABLE_NAME = 'max_connections')
    ) as value
UNION ALL
SELECT 
    'Buffer pool hit rate',
    CONCAT(
        ROUND((1 - ( 
            SELECT VARIABLE_VALUE 
            FROM information_schema.GLOBAL_STATUS 
            WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads'
        ) / (
            SELECT VARIABLE_VALUE 
            FROM information_schema.GLOBAL_STATUS 
            WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests'
        )) * 100, 2),
        '%'
    )
UNION ALL
SELECT 
    'Temp tables spilled to disk (%)',
    CONCAT(
        ROUND((
            SELECT VARIABLE_VALUE 
            FROM information_schema.GLOBAL_STATUS 
            WHERE VARIABLE_NAME = 'Created_tmp_disk_tables'
        ) / NULLIF((
            SELECT VARIABLE_VALUE 
            FROM information_schema.GLOBAL_STATUS 
            WHERE VARIABLE_NAME = 'Created_tmp_tables'
        ), 0) * 100, 2),
        '%'
    )
UNION ALL
SELECT 
    'Slow query ratio',
    CONCAT(
        ROUND((
            SELECT VARIABLE_VALUE 
            FROM information_schema.GLOBAL_STATUS 
            WHERE VARIABLE_NAME = 'Slow_queries'
        ) / NULLIF((
            SELECT VARIABLE_VALUE 
            FROM information_schema.GLOBAL_STATUS 
            WHERE VARIABLE_NAME = 'Questions'
        ), 0) * 100, 4),
        '%'
    );

Summary

In this article we covered the core skills of MySQL performance optimization:

  1. Query optimization: reading execution plans, analyzing the slow query log, and rewriting queries
  2. Index optimization: designing indexes well, avoiding the patterns that defeat them, and maintaining them over time
  3. Server tuning: memory, I/O, connection, and replication parameters
  4. Monitoring: building a solid set of performance metrics and alerts

Key principles:

  • Measure first: base every change on real monitoring data
  • Change gradually: adjust one parameter at a time and observe the effect
  • Balance trade-offs: weigh performance against safety and reliability
  • Prevent rather than cure: put routine maintenance and monitoring in place

Layers of performance optimization:

  1. SQL and index optimization: the biggest gains at the lowest cost
  2. Database configuration: requires a deeper understanding of MySQL internals
  3. Architecture: read/write splitting, sharding, and so on
  4. Hardware: SSDs, more memory, faster CPUs

Hands-on exercises:

  1. Analyze the slow queries in your current system and optimize them
  2. Review index usage, drop unused indexes, and add the ones that are missing
  3. Adjust MySQL parameters to match your server's resources
  4. Set up a monitoring pipeline and collect key metrics regularly
  5. Schedule routine database maintenance

Feel free to share your own optimization experience and questions in the comments!
