<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
    <channel>
            <title>忘忧的小站</title>
            <link>https://wangyou233.wang</link>
                <description>君子藏器于身，待时而动，何不利之有</description>
        <generator>Halo 1.6.0</generator>
        <lastBuildDate>Fri, 14 Nov 2025 05:47:22 EST</lastBuildDate>
                <item>
                    <title>
                        <![CDATA[SqlServer高频面试题(持续更新251114)]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2954</link>
                    <description>
                            <![CDATA[<h1 id="%E5%9F%BA%E7%A1%80%E9%A2%98" tabindex="-1">基础题</h1><h2 id="1.-%E4%B8%BB%E9%94%AE%E3%80%81%E5%A4%96%E9%94%AE%E3%80%81%E8%B6%85%E9%94%AE%E3%80%81%E5%80%99%E9%80%89%E9%94%AE%E7%9A%84%E5%8C%BA%E5%88%AB%E5%92%8C%E7%94%A8%E9%80%94" tabindex="-1">1. 主键、外键、超键、候选键的区别和用途</h2><ul><li><strong>超键</strong>：在关系中能唯一标识元组的属性集称为关系模式的超键。一个属性可以作为一个超键，多个属性组合在一起也可以作为一个超键。超键包含候选键和主键。</li><li><strong>候选键</strong>：是最小超键，即没有冗余元素的超键。</li><li><strong>主键</strong>：数据库表中对储存数据对象予以唯一和完整标识的数据列或属性的组合。一个数据列只能有一个主键，且主键的取值不能缺失，即不能为空值（Null）。</li><li><strong>外键</strong>：在一个表中存在的另一个表的主键称此表的外键。</li></ul><h2 id="2.-%E4%B8%BA%E4%BB%80%E4%B9%88%E4%BD%BF%E7%94%A8%E8%87%AA%E5%A2%9E%E5%88%97%E4%BD%9C%E4%B8%BA%E4%B8%BB%E9%94%AE%EF%BC%9F" tabindex="-1">2. 为什么使用自增列作为主键？</h2><p>自增列作为主键可以简化数据的插入操作，避免因插入非顺序的主键值导致的索引分裂和碎片化，从而提高数据库性能。自增列也易于分配和管理，且不会与其他记录的主键冲突。</p><h2 id="3.-%E8%A7%A6%E5%8F%91%E5%99%A8%E7%9A%84%E4%BD%9C%E7%94%A8%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">3. 触发器的作用是什么？</h2><p>触发器是一种特殊的存储过程，它在特定数据库操作（如INSERT、UPDATE、DELETE）执行之前或之后自动触发执行。触发器可以用于维护数据完整性、实施复杂的业务规则、自动更新表中的数据等。</p><h2 id="4.-%E4%BB%80%E4%B9%88%E6%98%AF%E5%AD%98%E5%82%A8%E8%BF%87%E7%A8%8B%EF%BC%9F%E4%BD%BF%E7%94%A8%E4%BB%80%E4%B9%88%E6%9D%A5%E8%B0%83%E7%94%A8%EF%BC%9F" tabindex="-1">4. 什么是存储过程？使用什么来调用？</h2><p>存储过程是一组为了执行特定任务而预编译的SQL语句。它们可以提高性能，因为只需编译一次，之后可以重复调用。存储过程可以通过SQL命令直接调用，也可以被应用程序通过特定的API调用来执行。</p><h2 id="5.-%E5%AD%98%E5%82%A8%E8%BF%87%E7%A8%8B%E7%9A%84%E4%BC%98%E7%BC%BA%E7%82%B9%E6%9C%89%E5%93%AA%E4%BA%9B%EF%BC%9F" tabindex="-1">5. 存储过程的优缺点有哪些？</h2><p>存储过程的优点包括提高性能（预编译）、减少网络传输、增强安全性（需要特定权限才能执行）、便于代码复用。缺点包括移植性差，因为它们通常与特定的数据库系统紧密相关。</p><h2 id="6.-%E5%AD%98%E5%82%A8%E8%BF%87%E7%A8%8B%E4%B8%8E%E5%87%BD%E6%95%B0%E7%9A%84%E5%8C%BA%E5%88%AB%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">6. 
存储过程与函数的区别是什么？</h2><p>存储过程是一系列为了完成特定功能的SQL语句集合，可以通过参数传递数据，并且可以有多个返回值。函数通常返回一个单一的数据值，并且在使用时作为表达式的一部分。存储过程使用更灵活，而函数则更适用于需要返回特定数据结构的场景。</p><h2 id="7.-%E8%A7%86%E5%9B%BE%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F%E6%B8%B8%E6%A0%87%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">7. 视图是什么？游标是什么？</h2><p>视图是基于SQL查询的虚拟表，它像实际的表一样可以进行查询和更新操作，但是不存储数据，而是在查询视图时动态生成结果。游标是一种数据库对象，用于逐行处理查询结果集，常用于需要对结果集进行循环处理的场景。</p><h2 id="8.-%E8%A7%86%E5%9B%BE%E7%9A%84%E4%BC%98%E7%BC%BA%E7%82%B9%E6%9C%89%E5%93%AA%E4%BA%9B%EF%BC%9F" tabindex="-1">8. 视图的优缺点有哪些？</h2><p>视图的优点包括简化复杂的查询、提高数据安全性、实现数据逻辑抽象。缺点包括可能影响性能（尤其是在复杂的视图上执行查询时），以及在某些情况下限制了数据的更新操作。</p><h2 id="9.-%E4%BB%80%E4%B9%88%E6%98%AF%E4%B8%B4%E6%97%B6%E8%A1%A8%EF%BC%9F%E4%B8%B4%E6%97%B6%E8%A1%A8%E4%BB%80%E4%B9%88%E6%97%B6%E5%80%99%E5%88%A0%E9%99%A4%EF%BC%9F" tabindex="-1">9. 什么是临时表？临时表什么时候删除？</h2><p>临时表是在当前会话或事务中创建的表，仅对当前会话可见。当会话结束或事务提交时，临时表及其数据会自动删除。</p><h2 id="10.-%E9%9D%9E%E5%85%B3%E7%B3%BB%E5%9E%8B%E6%95%B0%E6%8D%AE%E5%BA%93%E5%92%8C%E5%85%B3%E7%B3%BB%E5%9E%8B%E6%95%B0%E6%8D%AE%E5%BA%93%E7%9A%84%E5%8C%BA%E5%88%AB%E5%92%8C%E4%BC%98%E5%8A%BF%E6%AF%94%E8%BE%83%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">10. 非关系型数据库和关系型数据库的区别和优势比较是什么？</h2><p>非关系型数据库（NoSQL）和关系型数据库在数据模型、查询方式、扩展性等方面有本质区别。非关系型数据库通常提供更高的扩展性和灵活性，适合处理大规模分布式数据。关系型数据库则在数据一致性、复杂查询和事务管理方面表现更好。</p><h2 id="11.-%E6%95%B0%E6%8D%AE%E5%BA%93%E8%8C%83%E5%BC%8F%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F%E5%A6%82%E4%BD%95%E6%A0%B9%E6%8D%AE%E6%9F%90%E4%B8%AA%E5%9C%BA%E6%99%AF%E8%AE%BE%E8%AE%A1%E6%95%B0%E6%8D%AE%E8%A1%A8%EF%BC%9F" tabindex="-1">11. 数据库范式是什么？如何根据某个场景设计数据表？</h2><p>数据库范式是一套用于指导数据库设计的规范，包括第一范式（1NF）、第二范式（2NF）、第三范式（3NF）等，目的是减少数据冗余和提高数据完整性。设计数据表时，应根据业务需求和数据关系来确定表结构，确保满足相应的范式要求。</p><h2 id="12.-%E5%86%85%E8%BF%9E%E6%8E%A5%E3%80%81%E5%A4%96%E8%BF%9E%E6%8E%A5%E3%80%81%E4%BA%A4%E5%8F%89%E8%BF%9E%E6%8E%A5%E3%80%81%E7%AC%9B%E5%8D%A1%E5%B0%94%E7%A7%AF%E7%AD%89%E7%9A%84%E5%8C%BA%E5%88%AB%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">12. 
内连接、外连接、交叉连接、笛卡尔积等的区别是什么？</h2><p>内连接只返回两个表中匹配的行；外连接（左外连接、右外连接）会返回一个表的全部行，另一个表中匹配的行，不匹配的行用NULL填充；交叉连接返回两个表的笛卡尔积，即每行与另一个表中每行的组合；笛卡尔积是两个集合所有可能的组合。</p><h2 id="13.-varchar%E5%92%8Cchar%E7%9A%84%E4%BD%BF%E7%94%A8%E5%9C%BA%E6%99%AF%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">13. varchar和char的使用场景是什么？</h2><p>VARCHAR适用于长度可变的数据，如用户输入的评论或描述，因为它可以根据实际内容长度存储，节省空间。CHAR适用于长度固定的数据，如性别或国家代码，因为它可以提供更快的存取速度，但会使用固定长度的存储空间。</p><h2 id="14.-sql%E8%AF%AD%E8%A8%80%E5%88%86%E7%B1%BB%E6%9C%89%E5%93%AA%E4%BA%9B%EF%BC%9F" tabindex="-1">14. SQL语言分类有哪些？</h2><p>SQL语言主要分为数据查询语言（DQL），数据操纵语言（DML），数据定义语言（DDL）和数据控制语言（DCL）。DQL用于查询数据，如SELECT；DML用于数据的增删改，如INSERT、UPDATE、DELETE；DDL用于数据库对象的定义，如CREATE、ALTER、DROP；DCL用于控制数据库访问权限，如GRANT、REVOKE。</p><h2 id="15.-like-'%25xxx%25'%E5%92%8C%E2%80%99xxx%25'%E7%9A%84%E5%8C%BA%E5%88%AB%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">15. like '%xxx%'和'xxx%'的区别是什么？</h2><p>LIKE '%xxx%'表示匹配包含xxx的任意字符串，无论xxx出现在哪一部分。LIKE 'xxx%'表示匹配以xxx开头的字符串。模糊匹配时，通配符%代表任意字符出现任意次数，而_仅代表单个字符。此外，'xxx%'符合最左前缀，可以利用索引；'%xxx%'则通常导致全表或全索引扫描。</p><h2 id="16.-count(*)%E3%80%81count(1)%E3%80%81count(column)%E7%9A%84%E5%8C%BA%E5%88%AB%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">16. count(*)、count(1)、count(column)的区别是什么？</h2><p>COUNT(*)用于计算表中的总行数，包括含NULL值的行。COUNT(1)与COUNT(*)等价，同样统计总行数。COUNT(column)用于计算特定列中非NULL值的数量。</p><h2 id="17.-%E6%9C%80%E5%B7%A6%E5%89%8D%E7%BC%80%E5%8E%9F%E5%88%99%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">17. 最左前缀原则是什么？</h2><p>最左前缀原则是索引创建和使用的一个重要原则，它指的是在多列索引中，数据库查询优化器只会使用索引的最左部分列。这意味着如果查询条件没有使用到索引的第一个列，那么即使后面的列被使用到，索引也可能不会被利用。</p><h2 id="18.-%E7%B4%A2%E5%BC%95%E7%9A%84%E4%BD%9C%E7%94%A8%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F%E5%AE%83%E7%9A%84%E4%BC%98%E7%82%B9%E5%92%8C%E7%BC%BA%E7%82%B9%E6%9C%89%E5%93%AA%E4%BA%9B%EF%BC%9F" tabindex="-1">18. 
索引的作用是什么？它的优点和缺点有哪些？</h2><p>索引的作用是加快数据检索速度，排序和分组数据，以及保证数据的唯一性。优点包括提高查询速度、加速表连接、支持数据的排序和分组。缺点包括增加存储空间、降低数据更新（INSERT、UPDATE、DELETE）的速度，以及维护索引本身需要额外的开销。</p><h2 id="19.-%E4%BB%80%E4%B9%88%E6%A0%B7%E7%9A%84%E5%AD%97%E6%AE%B5%E9%80%82%E5%90%88%E5%BB%BA%E7%B4%A2%E5%BC%95%EF%BC%9F" tabindex="-1">19. 什么样的字段适合建索引？</h2><p>适合建索引的字段包括经常需要搜索的列、作为主键的列、经常用于连接的列、经常需要进行范围搜索的列、经常需要排序的列，以及经常使用在WHERE子句中的列。</p><h2 id="20.-%E8%81%9A%E9%9B%86%E7%B4%A2%E5%BC%95%E5%92%8C%E9%9D%9E%E8%81%9A%E9%9B%86%E7%B4%A2%E5%BC%95%E7%9A%84%E5%8C%BA%E5%88%AB%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">20. 聚集索引和非聚集索引的区别是什么？</h2><p>聚集索引决定了表中数据的物理存储顺序，使得相关列的数据在物理上连续存放，查询效率较高，但修改数据时可能较慢。非聚集索引指定了表中数据的逻辑顺序，但物理存储顺序与索引可能不一致，通常用于频繁更新的数据列。</p><h2 id="21.-sql%E6%B3%A8%E5%85%A5%E5%BC%8F%E6%94%BB%E5%87%BB%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">21. SQL注入式攻击是什么？</h2><p>SQL注入式攻击是一种网络安全攻击手段，攻击者通过在Web表单输入域或页面请求的查询字符串中插入恶意SQL命令，欺骗服务器执行这些命令，从而获取、篡改或删除数据库中的数据。</p><h2 id="22.-%E5%A6%82%E4%BD%95%E9%98%B2%E8%8C%83sql%E6%B3%A8%E5%85%A5%E5%BC%8F%E6%94%BB%E5%87%BB%EF%BC%9F" tabindex="-1">22. 如何防范SQL注入式攻击？</h2><p>防范SQL注入式攻击的方法包括：对用户输入进行过滤和验证，替换或转义特殊字符；使用预处理语句（参数化查询）；限制数据库权限，使用最小权限原则；使用存储过程；以及在服务器端进行输入验证等。</p><h2 id="23.-%E5%86%85%E5%AD%98%E6%B3%84%E6%BC%8F%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">23. 内存泄漏是什么？</h2><p>内存泄漏是指在程序运行过程中，由于未能适当释放不再使用的内存，导致随着程序的持续运行，可用内存逐渐减少的现象。在动态内存分配的语言中，如C或C++，如果使用new分配了内存，却忘记使用delete释放，就可能发生内存泄漏。</p><h2 id="24.-%E7%BB%B4%E6%8A%A4%E6%95%B0%E6%8D%AE%E5%BA%93%E7%9A%84%E5%AE%8C%E6%95%B4%E6%80%A7%E5%92%8C%E4%B8%80%E8%87%B4%E6%80%A7%EF%BC%8C%E4%BD%BF%E7%94%A8%E8%A7%A6%E5%8F%91%E5%99%A8%E8%BF%98%E6%98%AF%E8%87%AA%E5%86%99%E4%B8%9A%E5%8A%A1%E9%80%BB%E8%BE%91%EF%BC%9F" tabindex="-1">24. 维护数据库的完整性和一致性，使用触发器还是自写业务逻辑？</h2><p>维护数据库的完整性和一致性，通常首选使用数据库提供的约束，如CHECK、PRIMARY KEY、FOREIGN KEY等。其次是使用触发器，因为它们可以自动执行，确保数据的完整性和一致性，无论哪种业务逻辑访问数据库。最后考虑自写业务逻辑，但这种方法编程复杂，效率较低。</p><h2 id="25.-%E4%BB%80%E4%B9%88%E6%98%AF%E4%BA%8B%E5%8A%A1%EF%BC%9F%E4%BB%80%E4%B9%88%E6%98%AF%E9%94%81%EF%BC%9F" tabindex="-1">25. 
什么是事务？什么是锁？</h2><p>事务是一系列操作，它们作为一个整体被执行，以确保数据的完整性。如果事务中的任何操作失败，整个事务将回滚到执行前的状态。锁是数据库管理系统用来保证事务的隔离性和并发控制的一种机制，它可以防止多个事务同时修改同一数据，从而避免数据冲突。</p><h2 id="26.-%E8%BF%87%E5%A4%9A%E7%B4%A2%E5%BC%95%E5%AF%B9%E6%95%B0%E6%8D%AE%E5%BA%93%E6%80%A7%E8%83%BD%E7%9A%84%E5%BD%B1%E5%93%8D" tabindex="-1">26. 过多索引对数据库性能的影响</h2><p>过多的索引虽然可以提高查询速度，但在数据的插入、更新和删除操作时，数据库引擎需要更多的时间来维护这些索引，这可能会导致性能下降。因此，需要在索引创建时进行权衡，以确保数据库操作的整体性能。</p><h2 id="27.-%E7%9B%B8%E5%85%B3%E5%AD%90%E6%9F%A5%E8%AF%A2%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F%E5%A6%82%E4%BD%95%E4%BD%BF%E7%94%A8%E8%BF%99%E4%BA%9B%E6%9F%A5%E8%AF%A2%EF%BC%9F" tabindex="-1">27. 相关子查询是什么？如何使用这些查询？</h2><p>相关子查询是一种特殊类型的子查询，它在查询中使用外部查询的值。这种子查询通常用于WHERE或HAVING子句中，可以基于外部查询的结果来动态地定义查询条件。</p><h2 id="28.-%E6%93%8D%E4%BD%9C%E4%BC%9A%E4%BD%BF%E2%BD%A4%E5%88%B0tempdb" tabindex="-1">28. 操作会使⽤到TempDB</h2><p>TempDB是SQL Server的一个系统数据库，用于存储临时数据，如临时表和表变量。许多操作，包括创建表时的临时数据、执行某些类型的JOIN操作、使用游标以及存储过程和批处理中的一些操作，都可能会用到TempDB。</p><h2 id="29.-%E5%A6%82%E6%9E%9Ctempdb%E5%BC%82%E5%B8%B8%E5%8F%98%E5%A4%A7%EF%BC%8C%E5%8F%AF%E8%83%BD%E7%9A%84%E5%8E%9F%E5%9B%A0%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%8C%E8%AF%A5%E5%A6%82%E4%BD%95%E5%A4%84%E7%90%86%EF%BC%9F" tabindex="-1">29. 如果TempDB异常变大，可能的原因是什么，该如何处理？</h2><p>TempDB异常变大可能是由于大量使用临时表或返回的记录集过大造成的。处理方法包括优化查询以减少返回的数据量，使用分批处理，或者调整TempDB的大小和配置。</p><h2 id="30.-index%E6%9C%89%E5%93%AA%E4%BA%9B%E7%B1%BB%E5%9E%8B%EF%BC%8C%E5%AE%83%E4%BB%AC%E7%9A%84%E5%8C%BA%E5%88%AB%E5%92%8C%E5%AE%9E%E7%8E%B0%E5%8E%9F%E7%90%86%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%8C%E7%B4%A2%E5%BC%95%E6%9C%89%E4%BB%80%E4%B9%88%E4%BC%98%E7%82%B9%E5%92%8C%E7%BC%BA%E7%82%B9" tabindex="-1">30. 
Index有哪些类型，它们的区别和实现原理是什么，索引有什么优点和缺点</h2><p>索引类型主要包括聚集索引和非聚集索引。聚集索引决定了表中数据的物理存储顺序，非聚集索引则不改变数据的物理存储顺序。索引的优点包括提高查询速度、确保数据的唯一性和排序。缺点是增加了存储空间和维护成本，降低了数据更新的速度。</p><h2 id="31.-job%E4%BF%A1%E6%81%AF%E5%8F%AF%E4%BB%A5%E9%80%9A%E8%BF%87%E5%93%AA%E4%BA%9B%E8%A1%A8%E8%8E%B7%E5%8F%96%EF%BC%9B%E7%B3%BB%E7%BB%9F%E6%AD%A3%E5%9C%A8%E8%BF%90%E8%A1%8C%E7%9A%84%E8%AF%AD%E5%8F%A5%E5%8F%AF%E4%BB%A5%E9%80%9A%E8%BF%87%E5%93%AA%E4%BA%9B%E8%A7%86%E5%9B%BE%E8%8E%B7%E5%8F%96%EF%BC%9B%E5%A6%82%E4%BD%95%E8%8E%B7%E5%8F%96%E6%9F%90%E4%B8%AAt-sql%E8%AF%AD%E5%8F%A5%E7%9A%84io%E3%80%81time%E7%AD%89%E4%BF%A1%E6%81%AF" tabindex="-1">31. Job信息可以通过哪些表获取；系统正在运行的语句可以通过哪些视图获取；如何获取某个T-SQL语句的IO、Time等信息</h2><p>Job信息可以通过SQL Server的msdb数据库中的表，如sysjobs和sysjobhistory获取。系统正在运行的语句可以通过动态管理视图如sys.dm_exec_requests获取。要获取某个T-SQL语句的IO和Time等信息，可以使用SQL Server Profiler或相关的动态管理视图。</p><p>确保字段只接受特定范围内的值<br />可以通过在字段上设置CHECK约束来确保只接受特定范围内的值。CHECK约束允许定义字段值的范围或条件，确保插入或更新数据时满足这些条件。</p><h2 id="32.-char%E3%80%81varchar%E3%80%81nchar-%E5%92%8C-nvarchar-%E7%9A%84%E5%8C%BA%E5%88%AB%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">32. CHAR、VARCHAR、NCHAR 和 NVARCHAR 的区别是什么？</h2><ul><li><strong>CHAR(n)：</strong> 固定长度，非 Unicode 字符数据。无论实际内容多长，它都会占用 <code>n</code> 个字节的存储空间。适合存储长度相对固定的数据（如身份证号、电话号码）。</li><li><strong>VARCHAR(n)：</strong> 可变长度，非 Unicode 字符数据。它只占用实际数据长度 + 2 个字节（用于存储长度信息）的存储空间。适合存储长度变化较大的数据。</li><li><strong>NCHAR(n)：</strong> 固定长度，Unicode 字符数据。存储 Unicode 字符（如中文、日文等），每个字符占用 2 个字节。长度为 <code>n</code>，表示最多可存储 <code>n</code> 个字符（无论中英文）。</li><li><strong>NVARCHAR(n)：</strong> 可变长度，Unicode 字符数据。同样存储 Unicode 字符，每个字符 2 字节，但只占用（实际字符数 * 2） + 2 字节的空间。</li></ul><p><strong>核心区别：</strong> <code>CHAR/VARCHAR</code> 用于非 Unicode，一个英文字符占1字节，一个中文字符可能占2字节（取决于编码）。<code>NCHAR/NVARCHAR</code> 用于 Unicode，任何字符都占2字节，能全球通用。</p><h2 id="33.-truncate%E3%80%81delete-%E5%92%8C-drop-%E7%9A%84%E5%8C%BA%E5%88%AB%EF%BC%9F" tabindex="-1">33. 
TRUNCATE、DELETE 和 DROP 的区别？</h2><table><thead><tr><th style="text-align:left">特性</th><th style="text-align:left">DELETE</th><th style="text-align:left">TRUNCATE</th><th style="text-align:left">DROP</th></tr></thead><tbody><tr><td style="text-align:left"><strong>类型</strong></td><td style="text-align:left">DML（数据操作语言）</td><td style="text-align:left">DDL（数据定义语言）</td><td style="text-align:left">DDL（数据定义语言）</td></tr><tr><td style="text-align:left"><strong>条件</strong></td><td style="text-align:left">可以带 WHERE 子句</td><td style="text-align:left">不能带条件，清空所有数据</td><td style="text-align:left">删除整个表（结构和数据）</td></tr><tr><td style="text-align:left"><strong>事务</strong></td><td style="text-align:left">操作会被记录在事务日志中，可回滚</td><td style="text-align:left">日志记录最少（只记录页的释放），在 SQL Server 的显式事务内<strong>仍可回滚</strong></td><td style="text-align:left">在显式事务内同样可回滚</td></tr><tr><td style="text-align:left"><strong>触发器</strong></td><td style="text-align:left">会触发 DELETE 触发器</td><td style="text-align:left">不会触发触发器</td><td style="text-align:left">-</td></tr><tr><td style="text-align:left"><strong>标识列</strong></td><td style="text-align:left">不影响标识列的当前值</td><td style="text-align:left">重置标识列的种子值</td><td style="text-align:left">-</td></tr><tr><td style="text-align:left"><strong>性能</strong></td><td style="text-align:left">较慢（逐行删除并记录日志）</td><td style="text-align:left">非常快（直接释放数据页）</td><td style="text-align:left">快</td></tr><tr><td style="text-align:left"><strong>锁</strong></td><td style="text-align:left">行级锁</td><td style="text-align:left">表锁</td><td style="text-align:left">表锁</td></tr></tbody></table><h2 id="34.-%E4%BB%80%E4%B9%88%E6%98%AF%E7%B4%A2%E5%BC%95%EF%BC%9F%E8%81%9A%E9%9B%86%E7%B4%A2%E5%BC%95%E5%92%8C%E9%9D%9E%E8%81%9A%E9%9B%86%E7%B4%A2%E5%BC%95%E7%9A%84%E5%8C%BA%E5%88%AB%EF%BC%9F" tabindex="-1">34. 
什么是索引？聚集索引和非聚集索引的区别？</h2><ul><li><strong>索引</strong>：相当于书籍的目录，它能帮助数据库引擎快速找到数据，而无需扫描整个表。</li><li><strong>聚集索引</strong>：<ul><li>决定了表中数据的<strong>物理存储顺序</strong>。一张表<strong>只能有一个</strong>聚集索引。</li><li>叶子节点存储的是<strong>实际的数据行</strong>。</li><li>例如，在主键上默认创建的通常是聚集索引。</li></ul></li><li><strong>非聚集索引</strong>：<ul><li>不影响数据的物理存储顺序。一张表可以有<strong>多个</strong>非聚集索引。</li><li>叶子节点存储的是<strong>索引键值 + 指向数据行的指针（聚集索引键或RID）</strong>。</li><li>查询时需要先查非聚集索引，再通过指针去查找实际数据，这个过程称为 <strong>“键查找”</strong> 或 <strong>“书签查找”</strong>。</li></ul></li></ul><h2 id="35.-%E5%86%85%E8%BF%9E%E6%8E%A5%EF%BC%88inner-join%EF%BC%89%E5%92%8C%E5%A4%96%E8%BF%9E%E6%8E%A5%EF%BC%88outer-join%EF%BC%89%E7%9A%84%E5%8C%BA%E5%88%AB%EF%BC%9F" tabindex="-1">35. 内连接（INNER JOIN）和外连接（OUTER JOIN）的区别？</h2><ul><li><strong>内连接</strong>：返回两个表中<strong>连接条件匹配</strong>的所有行。不匹配的行不会出现在结果中。</li><li><strong>外连接</strong>：<ul><li><strong>左外连接（LEFT JOIN）</strong>：返回左表的所有行，以及右表中连接条件匹配的行。如果右表无匹配，则右表部分为 NULL。</li><li><strong>右外连接（RIGHT JOIN）</strong>：返回右表的所有行，以及左表中连接条件匹配的行。如果左表无匹配，则左表部分为 NULL。</li><li><strong>全外连接（FULL JOIN）</strong>：返回左表和右表中的所有行。当某一行在另一个表中没有匹配时，另一个表的部分为 NULL。</li></ul></li></ul><h2 id="36.-%E4%BB%80%E4%B9%88%E6%98%AF%E6%89%A7%E8%A1%8C%E8%AE%A1%E5%88%92%EF%BC%9F%E5%A6%82%E4%BD%95%E6%9F%A5%E7%9C%8B%E5%92%8C%E5%88%86%E6%9E%90%EF%BC%9F" tabindex="-1">36. 
什么是执行计划？如何查看和分析？</h2><ul><li><strong>执行计划</strong>：是 SQL Server 查询优化器生成的、关于如何执行一个查询的“路线图”。它显示了数据获取的步骤、使用的索引、连接类型、数据量估计和成本等。</li><li><strong>查看方法</strong>：<ul><li>在 SSMS 中，在查询前按下 <code>Ctrl + M</code>（显示实际执行计划）或 <code>Ctrl + L</code>（显示估计执行计划），然后执行查询。</li><li>使用 SET 语句：<code>SET SHOWPLAN_TEXT ON</code> 或 <code>SET STATISTICS PROFILE ON</code>。</li></ul></li><li><strong>分析要点</strong>：<ul><li><strong>高成本操作</strong>：找到成本最高的步骤。</li><li><strong>表扫描（Table Scan）</strong>：警惕！这通常意味着没有合适的索引。</li><li><strong>索引扫描（Index Scan） vs 索引查找（Index Seek）</strong>： Seek 效率远高于 Scan。Scan 意味着遍历了整个索引。</li><li><strong>键查找（Key Lookup）</strong>：如果开销很大，考虑创建覆盖索引。</li><li><strong>警告标志</strong>：如转换警告（隐式类型转换）等。</li></ul></li></ul><h2 id="37.-%E4%BB%80%E4%B9%88%E6%98%AF%E8%A6%86%E7%9B%96%E7%B4%A2%E5%BC%95%EF%BC%9F" tabindex="-1">37. 什么是覆盖索引？</h2><p>一个<strong>覆盖索引</strong>是指一个<strong>非聚集索引</strong>，它包含了查询中需要的所有字段。当查询的所有列都包含在索引的键或包含列中时，引擎可以直接从索引页中获取数据，而无需再去查找数据页，从而避免昂贵的键查找操作，极大提升性能。</p><p><strong>创建覆盖索引示例：</strong></p><pre><code class="language-sql">CREATE INDEX IX_Covering ON Orders (CustomerID) INCLUDE (OrderDate, TotalAmount);-- 对于查询： SELECT OrderDate, TotalAmount FROM Orders WHERE CustomerID = @ID-- 这个索引就是覆盖索引。</code></pre><h2 id="38.-%E4%BB%80%E4%B9%88%E6%98%AF%E7%B4%A2%E5%BC%95%E7%A2%8E%E7%89%87%EF%BC%9F%E5%A6%82%E4%BD%95%E7%BB%B4%E6%8A%A4%EF%BC%9F" tabindex="-1">38. 什么是索引碎片？如何维护？</h2><ul><li><strong>索引碎片</strong>：当索引页的逻辑顺序与物理顺序不匹配，或者页的数据填充度很低时，就产生了碎片。碎片会导致更多的物理 I/O，降低查询性能。</li><li><strong>类型</strong>：<ul><li><strong>外部碎片</strong>：页的逻辑顺序与物理顺序不符。</li><li><strong>内部碎片</strong>：页中存在大量空闲空间。</li></ul></li><li><strong>维护方法</strong>：<ul><li><strong>重组（REORGANIZE）</strong>：对叶级页以物理方式重新排序，并压缩索引页。是<strong>在线操作</strong>，干扰小。适用于轻度碎片。</li><li><strong>重建（REBUILD）</strong>：删除旧索引并创建一个新的索引。可以最大限度地减少碎片，是<strong>离线操作</strong>（在 Enterprise 版中可以在线）。适用于重度碎片。</li></ul></li></ul><h2 id="39.-%E4%BB%80%E4%B9%88%E6%97%B6%E5%80%99%E4%B8%8D%E9%80%82%E5%90%88%E5%88%9B%E5%BB%BA%E7%B4%A2%E5%BC%95%EF%BC%9F" tabindex="-1">39. 
什么时候不适合创建索引？</h2><ul><li>表非常小（数据量很少）。</li><li>列的取值种类很少、选择性低（如性别列，只有‘男’、‘女’两种值），索引过滤效果不佳。</li><li>列频繁进行 <strong>INSERT/UPDATE/DELETE</strong> 操作，因为维护索引需要开销。</li><li>不会在查询的 WHERE 或 JOIN 条件中使用的列。</li></ul><h2 id="40.-%E8%B0%88%E8%B0%88-acid-%E5%B1%9E%E6%80%A7%E3%80%82" tabindex="-1">40. 谈谈 ACID 属性。</h2><ul><li><strong>原子性（Atomicity）</strong>：事务是一个不可分割的工作单位，事务中的操作要么都发生，要么都不发生。</li><li><strong>一致性（Consistency）</strong>：事务必须使数据库从一个一致性状态变换到另一个一致性状态。</li><li><strong>隔离性（Isolation）</strong>：一个事务的执行不能被其他事务干扰。</li><li><strong>持久性（Durability）</strong>：一旦事务提交，它对数据库中数据的改变就是永久性的。</li></ul><h2 id="41.-sql-server-%E7%9A%84%E9%9A%94%E7%A6%BB%E7%BA%A7%E5%88%AB%E6%9C%89%E5%93%AA%E4%BA%9B%EF%BC%9F%E8%84%8F%E8%AF%BB%E3%80%81%E4%B8%8D%E5%8F%AF%E9%87%8D%E5%A4%8D%E8%AF%BB%E3%80%81%E5%B9%BB%E8%AF%BB%E5%88%86%E5%88%AB%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">41. SQL Server 的隔离级别有哪些？脏读、不可重复读、幻读分别是什么？</h2><ul><li><strong>读未提交（Read Uncommitted）</strong>：可以读取其他事务未提交的数据。会导致<strong>脏读</strong>。</li><li><strong>读已提交（Read Committed）</strong>：只能读取其他事务已提交的数据。这是 SQL Server 的<strong>默认级别</strong>。避免了脏读，但可能导致<strong>不可重复读</strong>。</li><li><strong>可重复读（Repeatable Read）</strong>：保证在同一个事务中，多次读取同一数据的结果是一致的。避免了脏读和不可重复读，但可能导致<strong>幻读</strong>。</li><li><strong>快照（Snapshot）</strong>：在事务开始时提供数据的一个一致性版本。读取的是事务开始时的数据快照，不会阻塞写操作。避免了脏读、不可重复读和幻读。</li><li><strong>可序列化（Serializable）</strong>：最高隔离级别，强制事务串行执行。避免了所有并发问题，但性能最差。</li></ul><p><strong>名词解释：</strong></p><ul><li><strong>脏读</strong>：事务A读取了事务B<strong>未提交</strong>的修改数据，之后B回滚了，A读到的就是脏数据。</li><li><strong>不可重复读</strong>：事务A多次读取同一数据，在此期间事务B<strong>修改并提交</strong>了该数据，导致A多次读取的结果不一致。</li><li><strong>幻读</strong>：事务A多次读取一个范围的数据，在此期间事务B<strong>插入或删除</strong>了该范围内的数据并提交，导致A多次读取时发现“凭空”多出或少了一些行。</li></ul><h2 id="42.-%E4%BB%80%E4%B9%88%E6%98%AF%E6%AD%BB%E9%94%81%EF%BC%9F%E5%A6%82%E4%BD%95%E9%81%BF%E5%85%8D%E5%92%8C%E8%A7%A3%E5%86%B3%EF%BC%9F" tabindex="-1">42. 
什么是死锁？如何避免和解决？</h2><ul><li><strong>死锁</strong>：两个或更多事务相互等待对方释放资源，导致它们都无法继续执行的状态。</li><li><strong>避免</strong>：<ul><li>以相同的顺序访问表。</li><li>保持事务简短，尽快提交。</li><li>使用较低的隔离级别（如 Read Committed）。</li><li>使用 <code>LOCK_TIMEOUT</code> 设置。</li></ul></li><li><strong>解决</strong>：SQL Server 内置的死锁监视器会检测到死锁，并选择一个作为“牺牲品”将其回滚，从而让其他事务继续进行。牺牲品事务会收到 1205 错误。</li></ul><hr /><h2 id="43.-%E8%B0%88%E8%B0%88-cte%EF%BC%88%E5%85%AC%E7%94%A8%E8%A1%A8%E8%A1%A8%E8%BE%BE%E5%BC%8F%EF%BC%89%E3%80%81%E4%B8%B4%E6%97%B6%E8%A1%A8%E5%92%8C%E8%A1%A8%E5%8F%98%E9%87%8F%E3%80%82" tabindex="-1">43. 谈谈 CTE（公用表表达式）、临时表和表变量。</h2><ul><li><strong>CTE</strong>：<ul><li>更像一个临时的视图，只在查询期间存在。</li><li>可读性好，特别适合递归查询。</li><li>不能创建索引。</li></ul></li><li><strong>临时表（#Temp）</strong>：<ul><li>存储在 TempDB 中，存在于会话或嵌套作用域中。</li><li>可以创建索引和统计信息。</li><li>适合存储较大的中间结果集。</li></ul></li><li><strong>表变量（@Table）</strong>：<ul><li>也存储在 TempDB 中，存在于批处理/函数/存储过程的作用域中。</li><li>通常认为它更快（对于小数据量），因为它没有统计信息，导致优化器总是假设它只有1行。</li><li>不能创建索引（除了主键和唯一约束）。</li></ul></li></ul><h2 id="44.-%E8%A1%8C%E7%89%88%E6%9C%AC%E6%8E%A7%E5%88%B6%E5%92%8C%E4%B9%90%E8%A7%82%E5%B9%B6%E5%8F%91%E6%8E%A7%E5%88%B6%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">44. 行版本控制和乐观并发控制是什么？</h2><p>这是基于<strong>快照隔离级别</strong>的机制。当数据被修改时，SQL Server 会在 TempDB 中保存被修改行的旧版本。其他正在读取的事务可以从 TempDB 中读取这个旧版本，从而不会与写事务发生阻塞。<code>READ_COMMITTED_SNAPSHOT</code> 和 <code>ALLOW_SNAPSHOT_ISOLATION</code> 数据库选项与此相关。</p><h2 id="45.-sql-server-%E7%9A%84%E9%AB%98%E5%8F%AF%E7%94%A8%E6%80%A7%E6%96%B9%E6%A1%88%E6%9C%89%E5%93%AA%E4%BA%9B%EF%BC%9F" tabindex="-1">45. 
SQL Server 的高可用性方案有哪些？</h2><ul><li><strong>AlwaysOn 故障转移集群实例（FCI）</strong>：基于 Windows 故障转移集群，共享存储。实例级别的高可用。</li><li><strong>AlwaysOn 可用性组（AG）</strong>：SQL Server 的核心高可用和灾难恢复解决方案。数据库级别，不共享存储，可读副本，功能最强大。</li><li><strong>数据库镜像（已弃用，被AG取代）</strong>：主库和镜像库之间同步数据。</li><li><strong>日志传送</strong>：通过定期备份主数据库的事务日志并还原到辅助服务器来实现。恢复时间较长。</li></ul><h2 id="46.-%E4%BD%A0%E7%9F%A5%E9%81%93%E5%93%AA%E4%BA%9B-sql-server-%E7%9A%84%E6%96%B0%E7%89%B9%E6%80%A7%EF%BC%9F%EF%BC%88%E6%A0%B9%E6%8D%AE%E9%9D%A2%E8%AF%95%E5%85%AC%E5%8F%B8%E4%BD%BF%E7%94%A8%E7%9A%84%E7%89%88%E6%9C%AC%E5%87%86%E5%A4%87%EF%BC%89" tabindex="-1">46. 你知道哪些 SQL Server 的新特性？（根据面试公司使用的版本准备）</h2><ul><li><strong>JSON 支持</strong>：<code>FOR JSON PATH/AUTO</code>, <code>OPENJSON</code> 等。</li><li><strong>STRING_AGG 函数</strong>：将多行字符串值合并成一个字符串。</li><li><strong>查询存储（Query Store）</strong>：用于跟踪查询执行计划、性能历史，并强制特定计划。</li><li><strong>时态表（Temporal Tables）</strong>：自动跟踪和管理数据的历史变化。</li><li><strong>内存优化表（In-Memory OLTP）</strong>：将表和存储过程放入内存，极大提升性能。</li></ul><h2 id="47.-%E5%AD%98%E5%82%A8%E8%BF%87%E7%A8%8B%E5%92%8C%E5%87%BD%E6%95%B0%E7%9A%84%E5%8C%BA%E5%88%AB%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">47. 
存储过程和函数的区别是什么？</h2><table><thead><tr><th>特性</th><th>存储过程</th><th>函数</th></tr></thead><tbody><tr><td><strong>返回值</strong></td><td>可以没有返回值，或通过 OUTPUT 参数返回多个值</td><td><strong>必须</strong>有返回值（标量或表）</td></tr><tr><td><strong>使用场景</strong></td><td>执行业务逻辑、数据处理</td><td>计算并返回一个值，或在查询中作为表使用</td></tr><tr><td><strong>在 SELECT 中调用</strong></td><td><strong>不可以</strong></td><td><strong>可以</strong></td></tr><tr><td><strong>DML 操作</strong></td><td>可以对表进行所有 DML 操作</td><td>在函数内部<strong>不能</strong>执行 DML 操作（除了表变量）</td></tr><tr><td><strong>事务管理</strong></td><td>可以在内部使用事务（BEGIN TRANSACTION）</td><td><strong>不能</strong>在函数内使用事务</td></tr><tr><td><strong>执行方式</strong></td><td>EXEC/EXECUTE 过程名</td><td>SELECT dbo.函数名()</td></tr></tbody></table><h2 id="48.-%E4%BB%80%E4%B9%88%E6%97%B6%E5%80%99%E5%BA%94%E8%AF%A5%E4%BD%BF%E7%94%A8%E5%AD%98%E5%82%A8%E8%BF%87%E7%A8%8B%EF%BC%9F%E4%BB%80%E4%B9%88%E6%97%B6%E5%80%99%E5%BA%94%E8%AF%A5%E4%BD%BF%E7%94%A8%E5%87%BD%E6%95%B0%EF%BC%9F**" tabindex="-1">48. 什么时候应该使用存储过程？什么时候应该使用函数？**</h2><ul><li><p><strong>使用存储过程</strong>：</p><ul><li>执行复杂的业务逻辑</li><li>需要返回多个结果集</li><li>需要进行 DML 操作（INSERT/UPDATE/DELETE）</li><li>需要事务控制</li><li>性能要求高（预编译、执行计划重用）</li></ul></li><li><p><strong>使用函数</strong>：</p><ul><li>封装可重用的计算逻辑</li><li>在查询中作为列使用</li><li>简化复杂的 JOIN 或 WHERE 条件</li><li>返回表值供 FROM 子句使用</li></ul></li></ul><hr /><h2 id="49.-%E4%BB%80%E4%B9%88%E6%98%AF%E8%A7%A6%E5%8F%91%E5%99%A8%EF%BC%9Finstead-of-%E5%92%8C-after-%E8%A7%A6%E5%8F%91%E5%99%A8%E7%9A%84%E5%8C%BA%E5%88%AB%EF%BC%9F**" tabindex="-1">49. 
什么是触发器？INSTEAD OF 和 AFTER 触发器的区别？</h2><ul><li><p><strong>触发器</strong>：一种特殊的存储过程，在特定数据库事件（INSERT/UPDATE/DELETE）发生时自动执行。</p></li><li><p><strong>AFTER 触发器（FOR 触发器）</strong>：</p><ul><li>在 <strong>DML 操作执行完成后</strong> 触发</li><li>可以访问 <code>inserted</code> 和 <code>deleted</code> 魔术表</li><li>常用于审计、日志记录、数据一致性检查</li></ul></li><li><p><strong>INSTEAD OF 触发器</strong>：</p><ul><li><strong>取代</strong> 原始的 DML 操作执行</li><li>在约束检查<strong>之前</strong>触发</li><li>常用于实现复杂的视图更新逻辑，或对不可更新视图进行更新</li></ul></li></ul><h2 id="50.-inserted-%E5%92%8C-deleted-%E9%AD%94%E6%9C%AF%E8%A1%A8%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F**" tabindex="-1">50. inserted 和 deleted 魔术表是什么？</h2><p>这两个是触发器中的特殊内存表：</p><ul><li><strong>inserted</strong>：包含 INSERT 或 UPDATE 操作的<strong>新</strong>数据</li><li><strong>deleted</strong>：包含 DELETE 或 UPDATE 操作的<strong>旧</strong>数据</li></ul><pre><code class="language-sql">-- 在 UPDATE 触发器中
CREATE TRIGGER trg_AuditUpdate ON Employees AFTER UPDATE
AS
BEGIN
    INSERT INTO AuditTable (EmployeeID, OldSalary, NewSalary)
    SELECT d.EmployeeID, d.Salary, i.Salary
    FROM deleted d
    INNER JOIN inserted i ON d.EmployeeID = i.EmployeeID
    WHERE d.Salary &lt;&gt; i.Salary;
END;</code></pre><hr /><h2 id="51.-row_number()%E3%80%81rank()%E3%80%81dense_rank()-%E7%9A%84%E5%8C%BA%E5%88%AB%EF%BC%9F**" tabindex="-1">51. 
ROW_NUMBER()、RANK()、DENSE_RANK() 的区别？</h2><p>这三个都是窗口函数，用于为结果集的行分配排名：</p><ul><li><strong>ROW_NUMBER()</strong>：为每一行分配一个<strong>唯一</strong>的连续序号（1, 2, 3, 4…）</li><li><strong>RANK()</strong>：相同的值获得相同排名，但会跳过后续排名（1, 2, 2, 4…）</li><li><strong>DENSE_RANK()</strong>：相同的值获得相同排名，但<strong>不跳过</strong>后续排名（1, 2, 2, 3…）</li></ul><pre><code class="language-sql">SELECT
    Name, Score,
    ROW_NUMBER() OVER (ORDER BY Score DESC) as RowNum,
    RANK() OVER (ORDER BY Score DESC) as Rank,
    DENSE_RANK() OVER (ORDER BY Score DESC) as DenseRank
FROM Students;</code></pre><h2 id="52.-%E4%BB%80%E4%B9%88%E6%98%AF%E5%85%AC%E7%94%A8%E8%A1%A8%E8%A1%A8%E8%BE%BE%E5%BC%8F%EF%BC%88cte%EF%BC%89%E7%9A%84%E9%80%92%E5%BD%92%E6%9F%A5%E8%AF%A2%EF%BC%9F" tabindex="-1">52. 什么是公用表表达式（CTE）的递归查询？</h2><p>递归 CTE 用于处理层次结构数据（如组织结构、菜单树等）：</p><pre><code class="language-sql">-- 查询某个部门及其所有子部门
WITH DepartmentCTE AS (
    -- 锚定成员：根节点
    SELECT DepartmentID, DepartmentName, ParentDepartmentID
    FROM Departments
    WHERE DepartmentID = @RootDepartmentID

    UNION ALL

    -- 递归成员：子节点
    SELECT d.DepartmentID, d.DepartmentName, d.ParentDepartmentID
    FROM Departments d
    INNER JOIN DepartmentCTE cte ON d.ParentDepartmentID = cte.DepartmentID
)
SELECT * FROM DepartmentCTE;</code></pre><hr /><h2 id="53.-%E4%BB%80%E4%B9%88%E6%98%AF%E5%8F%82%E6%95%B0%E5%97%85%E6%8E%A2%E9%97%AE%E9%A2%98%EF%BC%9F%E5%A6%82%E4%BD%95%E8%A7%A3%E5%86%B3%EF%BC%9F" tabindex="-1">53. 什么是参数嗅探问题？如何解决？</h2><ul><li><p><strong>参数嗅探</strong>：SQL Server 在编译存储过程时，使用第一次执行时的参数值来生成执行计划。如果后续执行的参数值数据分布差异很大，可能导致性能问题。</p></li><li><p><strong>解决方案</strong>：</p><ul><li>使用 <code>OPTION (RECOMPILE)</code>：每次执行都重新编译</li><li>使用 <code>OPTION (OPTIMIZE FOR UNKNOWN)</code>：使用平均数据分布</li><li>使用局部变量：将参数赋值给局部变量，在查询中使用局部变量</li><li>使用 <code>WITH RECOMPILE</code> 选项创建存储过程</li></ul></li></ul><h2 id="54.-%E5%A6%82%E4%BD%95%E6%9F%A5%E6%89%BE%E5%92%8C%E4%BC%98%E5%8C%96%E6%85%A2%E6%9F%A5%E8%AF%A2%EF%BC%9F" tabindex="-1">54. 
如何查找和优化慢查询？</h2><ul><li><p><strong>查找慢查询</strong>：</p><ul><li>使用 SQL Server Profiler</li><li>使用扩展事件（Extended Events）</li><li>查询动态管理视图（DMV）：</li></ul><pre><code class="language-sql">-- 查找最耗时的查询
SELECT TOP 10
    total_elapsed_time/execution_count AS avg_elapsed_time,
    execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
           ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY avg_elapsed_time DESC;</code></pre></li><li><p><strong>优化方法</strong>：</p><ul><li>添加合适的索引</li><li>重写查询逻辑</li><li>避免在 WHERE 子句中对字段进行函数操作</li><li>减少不必要的列查询</li></ul></li></ul><hr /><h2 id="55.-%E6%95%B0%E6%8D%AE%E5%BA%93%E7%9A%84%E4%B8%89%E5%A4%A7%E8%8C%83%E5%BC%8F%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">55. 数据库的三大范式是什么？</h2><ul><li><strong>第一范式（1NF）</strong>：每个列都是原子的，不可再分</li><li><strong>第二范式（2NF）</strong>：满足 1NF，且非主属性完全依赖于主键（消除部分依赖）</li><li><strong>第三范式（3NF）</strong>：满足 2NF，且非主属性之间没有传递依赖</li></ul><h2 id="56.-%E4%BB%80%E4%B9%88%E6%97%B6%E5%80%99%E5%BA%94%E8%AF%A5%E5%8F%8D%E8%8C%83%E5%BC%8F%E5%8C%96%EF%BC%9F" tabindex="-1">56. 什么时候应该反范式化？</h2><p>虽然范式化减少了数据冗余，但在以下情况可以考虑反范式化：</p><ul><li><strong>频繁的 JOIN 操作</strong>影响性能时</li><li>需要<strong>提高查询性能</strong>的读密集型场景</li><li>数据仓库或报表数据库</li><li>历史数据表，数据不再变更</li></ul><hr /><h2 id="57.-%E5%AE%8C%E6%95%B4%E5%A4%87%E4%BB%BD%E3%80%81%E5%B7%AE%E5%BC%82%E5%A4%87%E4%BB%BD%E5%92%8C%E4%BA%8B%E5%8A%A1%E6%97%A5%E5%BF%97%E5%A4%87%E4%BB%BD%E7%9A%84%E5%8C%BA%E5%88%AB%EF%BC%9F" tabindex="-1">57. 
完整备份、差异备份和事务日志备份的区别？</h2><ul><li><strong>完整备份</strong>：备份整个数据库，是其他备份的基础</li><li><strong>差异备份</strong>：只备份自上次完整备份以来发生变化的数据页</li><li><strong>事务日志备份</strong>：备份事务日志，允许时间点恢复</li></ul><p><strong>恢复场景示例</strong>：</p><pre><code class="language-">完整备份 (周日) → 差异备份 (周一) → 日志备份 (周二 10:00) → 日志备份 (周二 11:00)</code></pre><p>如果周二 11:30 发生故障，可以恢复到：周日完整备份 + 周一差异备份 + 周二 10:00 日志 + 周二 11:00 日志</p><h2 id="58.-%E7%AE%80%E5%8D%95%E6%81%A2%E5%A4%8D%E6%A8%A1%E5%BC%8F-vs-%E5%AE%8C%E6%95%B4%E6%81%A2%E5%A4%8D%E6%A8%A1%E5%BC%8F" tabindex="-1">58. 简单恢复模式 vs 完整恢复模式</h2><ul><li><p><strong>简单恢复模式</strong>：</p><ul><li>不备份事务日志</li><li>不能进行时间点恢复</li><li>日志空间自动回收</li><li>适合测试环境或可接受数据丢失的场景</li></ul></li><li><p><strong>完整恢复模式</strong>：</p><ul><li>需要定期备份事务日志</li><li>支持时间点恢复</li><li>可以防止数据丢失</li><li>生产环境推荐使用</li></ul></li></ul><hr /><h2 id="59.-%E5%A6%82%E4%BD%95%E8%AE%BE%E8%AE%A1%E4%B8%80%E4%B8%AA%E6%94%AF%E6%8C%81%E8%BD%AF%E5%88%A0%E9%99%A4%E7%9A%84%E7%B3%BB%E7%BB%9F%EF%BC%9F" tabindex="-1">59. 如何设计一个支持软删除的系统？</h2><pre><code class="language-sql">-- 在表中添加删除标记字段
ALTER TABLE Products ADD IsDeleted BIT NOT NULL DEFAULT 0;
ALTER TABLE Products ADD DeletedDate DATETIME NULL;

-- 使用视图过滤已删除的记录
CREATE VIEW vw_ActiveProducts AS
SELECT * FROM Products WHERE IsDeleted = 0;

-- 使用 INSTEAD OF DELETE 触发器实现软删除
CREATE TRIGGER trg_SoftDeleteProduct
ON Products INSTEAD OF DELETE
AS
BEGIN
    UPDATE Products
    SET IsDeleted = 1, DeletedDate = GETDATE()
    WHERE ProductID IN (SELECT ProductID FROM deleted);
END;</code></pre><h2 id="60.-%E5%A6%82%E4%BD%95%E5%A4%84%E7%90%86%E6%95%B0%E6%8D%AE%E5%BA%93%E4%B8%AD%E7%9A%84%E5%BE%AA%E7%8E%AF%E5%BC%95%E7%94%A8%EF%BC%9F" tabindex="-1">60. 
如何处理数据库中的循环引用？</h2><ul><li><p><strong>方案1</strong>：使用延迟约束检查</p><pre><code class="language-sql">ALTER TABLE TableA ADD CONSTRAINT FK_TableA_TableB
FOREIGN KEY (BID) REFERENCES TableB(BID);
-- 注意：SQL Server 不支持 DEFERRABLE（延迟约束检查），
-- 该特性见于 Oracle / PostgreSQL；在 SQL Server 中需采用方案2~4</code></pre></li><li><p><strong>方案2</strong>：允许 NULL 值，先插入部分数据再更新</p></li><li><p><strong>方案3</strong>：使用触发器代替外键约束</p></li><li><p><strong>方案4</strong>：重新设计表结构，消除循环引用</p></li></ul><h2 id="61.-%E5%A6%82%E4%BD%95%E5%AE%9E%E7%8E%B0%E6%95%B0%E6%8D%AE%E5%BA%93%E7%9A%84%E5%AE%A1%E8%AE%A1%E5%8A%9F%E8%83%BD%EF%BC%9F" tabindex="-1">61. 如何实现数据库的审计功能？</h2><ul><li><strong>方法1</strong>：使用触发器记录数据变更</li><li><strong>方法2</strong>：使用 SQL Server 的变更数据捕获（CDC）功能</li><li><strong>方法3</strong>：使用 SQL Server Audit 功能（企业版）</li><li><strong>方法4</strong>：在应用层实现审计逻辑</li></ul><hr /><h2 id="62.-sql-server-2019-%E7%9A%84%E6%96%B0%E7%89%B9%E6%80%A7%E6%9C%89%E5%93%AA%E4%BA%9B%EF%BC%9F" tabindex="-1">62. SQL Server 2019 的新特性有哪些？</h2><ul><li><strong>智能查询处理</strong>：自适应连接、行模式内存授予反馈等</li><li><strong>数据虚拟化</strong>：通过 PolyBase 查询外部数据源</li><li><strong>Java 语言扩展</strong>：在 SQL Server 中执行 Java 代码</li><li><strong>加速数据库恢复</strong>：大幅减少数据库恢复时间</li><li><strong>列存储索引增强</strong>：可更新的非聚集列存储索引</li></ul><h2 id="63.-%E4%BB%80%E4%B9%88%E6%98%AF%E5%86%85%E5%AD%98%E4%BC%98%E5%8C%96%E8%A1%A8%EF%BC%9F%E9%80%82%E7%94%A8%E5%9C%BA%E6%99%AF%EF%BC%9F" tabindex="-1">63. 
什么是内存优化表？适用场景？</h2><p>内存优化表将数据完全存储在内存中，提供极高的吞吐量：</p><ul><li><p><strong>适用场景</strong>：</p><ul><li>高频读写的高并发场景</li><li>会话状态管理</li><li>实时数据处理</li><li>需要亚毫秒级响应的应用</li></ul></li><li><p><strong>创建示例</strong>：</p></li></ul><pre><code class="language-sql">CREATE TABLE dbo.SessionState(    SessionID nvarchar(64) NOT NULL PRIMARY KEY NONCLUSTERED,    UserData varbinary(MAX) NOT NULL,    CreatedDate datetime2 NOT NULL) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);</code></pre><hr /><h1 id="%E6%95%B0%E6%8D%AE%E5%BA%93%E8%A1%A8%E9%A2%98" tabindex="-1">数据库表题</h1><h2 id="1.-%E7%94%B5%E5%95%86%E4%BA%A7%E5%93%81%E6%9C%89%E9%A2%9C%E8%89%B2(%E7%BA%A2%E3%80%81%E8%93%9D%E3%80%81%E9%BB%91)%E3%80%81%E5%B0%BA%E5%AF%B8(s%E3%80%81m%E3%80%81l)%E7%AD%89%E5%8F%98%E4%BD%93%E4%BF%A1%E6%81%AF%EF%BC%8C%E4%B9%B0%E5%AE%B6%E8%B4%AD%E4%B9%B0%E4%B8%80%E4%B8%AA%E7%BA%A2%E8%89%B2%E4%B8%AD%E7%A0%81%E7%9A%84%E5%95%86%E5%93%81%E5%AF%B9%E5%BA%94%E7%9A%84%E5%8F%98%E4%BD%93%E5%90%8D%E4%B8%BA%EF%BC%9A%E7%BA%A2_m%E3%80%82%E9%82%A3%E4%B9%88%E7%BB%99%E5%AE%9A%E4%B8%80%E7%BB%84%E5%8F%98%E4%BD%93%E4%BF%A1%E6%81%AF%EF%BC%8C%E7%94%A8%E7%A8%8B%E5%BA%8F%E7%94%9F%E6%88%90%E6%89%80%E6%9C%89%E5%8F%98%E4%BD%93%E7%BB%84%E5%90%88%E6%95%B0%E6%8D%AE%E3%80%82" tabindex="-1">1. 
电商产品有颜色(红、蓝、黑)、尺寸(S、M、L)等变体信息，买家购买一个红色中码的商品对应的变体名为：红_M。那么给定一组变体信息，用程序生成所有变体组合数据。</h2><p>变体示例：Color: Red,Green Size: S,M Style: A<br />变体组合结果：Red_S_A; Red_M_A; Green_S_A; Green_M_A</p><pre><code class="language-csharp">//测试用例
var list = new List&lt;string[]&gt;
{
    new string[]{ &quot;Red&quot;, &quot;Green&quot; },
    new string[]{ &quot;S&quot;, &quot;M&quot; },
    new string[]{ &quot;A&quot; }
};
var result = Combine(list);
//期望result为：Red_S_A; Red_M_A; Green_S_A; Green_M_A
public List&lt;string&gt; Combine(List&lt;string[]&gt; list){ … }</code></pre><ul><li><strong>答案</strong>：</li></ul><pre><code class="language-csharp">public static List&lt;string&gt; Combine(List&lt;string[]&gt; list)
{
    List&lt;string&gt; result = new List&lt;string&gt;();
    int[] indices = new int[list.Count]; // 用于跟踪每个字符串数组中当前选取的元素的索引

    while (true)
    {
        // 取出每个维度当前索引对应的元素，用下划线连接（避免产生前导下划线）
        string[] parts = new string[list.Count];
        for (int i = 0; i &lt; list.Count; i++)
        {
            parts[i] = list[i][indices[i]];
        }
        result.Add(string.Join(&quot;_&quot;, parts)); // 将当前组合添加到结果列表中

        // 从最后一个维度开始，进位式地更新索引
        int j = list.Count - 1;
        while (j &gt;= 0 &amp;&amp; indices[j] == list[j].Length - 1)
        {
            indices[j] = 0;
            j--;
        }
        // 所有索引都已达到最大值，枚举结束
        if (j &lt; 0)
        {
            break;
        }
        indices[j]++; // 增加索引
    }
    return result;
}</code></pre><h2 
id="2.-%E5%86%85%E9%83%A8%E7%B3%BB%E7%BB%9F%E7%9A%84%E4%BA%A7%E5%93%81%E5%BA%93%E4%B8%AD%E6%9C%89%E4%B8%80%E4%B8%AA%E4%BA%A7%E5%93%81%E6%8B%A5%E6%9C%89%E4%B8%89%E4%B8%AA%E7%BB%B4%E5%BA%A6%E7%9A%84%E5%8F%98%E4%BD%93%EF%BC%8C%E5%88%86%E5%88%AB%E6%98%AF%EF%BC%9Acolor%E3%80%81size-%E5%92%8C-style%E3%80%82%E7%8E%B0%E5%9C%A8%E8%A6%81%E5%B0%86%E5%85%B6%E4%B8%8A%E4%BC%A0%E5%88%B0%E5%B9%B3%E5%8F%B0-a%EF%BC%8C%E4%BD%86%E5%B9%B3%E5%8F%B0-a-%E4%BB%85%E6%94%AF%E6%8C%81%E4%B8%A4%E4%B8%AA%E7%BB%B4%E5%BA%A6%E7%9A%84%E5%8F%98%E4%BD%93%EF%BC%8C%E5%9B%A0%E6%AD%A4%E9%9C%80%E8%A6%81%E5%AF%B9%E5%8F%98%E4%BD%93%E8%BF%9B%E8%A1%8C%E9%99%8D%E7%BB%B4%E3%80%82%E9%82%A3%E4%B9%88%E7%BB%99%E5%AE%9A%E4%B8%80%E7%BB%84%E5%8F%98%E4%BD%93%E4%BF%A1%E6%81%AF%EF%BC%8C%E7%94%A8%E7%A8%8B%E5%BA%8F%E5%AE%9E%E7%8E%B0%E5%8F%98%E4%BD%93%E9%99%8D%E7%BB%B4%E6%93%8D%E4%BD%9C%E3%80%82" tabindex="-1">2. 内部系统的产品库中有一个产品拥有三个维度的变体，分别是：Color、Size 和 Style。现在要将其上传到平台 A，但平台 A 仅支持两个维度的变体，因此需要对变体进行降维。那么给定一组变体信息，用程序实现变体降维操作。</h2><p>变体示例: Color: Red,Green Size: S,M Style: A,B</p><p>降维后:Color: Red,Green Size: S_A,S_B,M_A,M_B</p><pre><code class="language-csharp">var pair = new Dictionary&lt;string, List&lt;string&gt;&gt; { {&quot;Color&quot;,new List&lt;string&gt;{ &quot;Red&quot;,&quot;Green&quot; }}, {&quot;Size&quot;,new List&lt;string&gt;{ &quot;S&quot;,&quot;M&quot; }}, {&quot;Style&quot;,new List&lt;string&gt;{ &quot;A&quot;,&quot;B&quot; }},};var result = Reduce(pair);public Dictionary&lt;string, List&lt;string&gt;&gt; Reduce(Dictionary&lt;string, List&lt;string&gt;&gt; pair){...}</code></pre><ul><li><strong>答案</strong>：</li></ul><pre><code class="language-csharp">class Program { static void Main() { // 定义变体维度 string[] colors = { &quot;Red&quot;, &quot;Green&quot; }; string[] sizes = { &quot;S&quot;, &quot;M&quot; }; string[] styles = { &quot;A&quot;, &quot;B&quot; };    // 降维操作    Dictionary&lt;string, string[]&gt; reducedDimensions = ReduceDimensions(colors, sizes, styles);        // 打印降维后的变体    
Console.WriteLine(&quot;Color: &quot; + string.Join(&quot;,&quot;, reducedDimensions[&quot;Color&quot;]));
    Console.WriteLine(&quot;Size: &quot; + string.Join(&quot;,&quot;, reducedDimensions[&quot;Size&quot;]));
}

static Dictionary&lt;string, string[]&gt; ReduceDimensions(string[] colors, string[] sizes, string[] styles)
{
    var reduced = new Dictionary&lt;string, string[]&gt;
    {
        { &quot;Color&quot;, colors },
        { &quot;Size&quot;, sizes.SelectMany(size =&gt; styles.Select(style =&gt; size + &quot;_&quot; + style)).ToArray() }
    };
    return reduced;
}
}</code></pre><h2 id="3.-%E8%AF%95%E7%94%A8sql%E6%9F%A5%E8%AF%A2%E8%AF%AD%E5%8F%A5%E8%A1%A8%E8%BE%BE%E4%B8%8B%E5%88%97%E5%AF%B9%E6%95%99%E5%AD%A6%E6%95%B0%E6%8D%AE%E5%BA%93%E4%B8%AD%E4%B8%89%E4%B8%AA%E5%9F%BA%E6%9C%AC%E8%A1%A8-s%E3%80%81sc-%E3%80%81c-%E7%9A%84%E6%9F%A5%E8%AF%A2%EF%BC%9A" tabindex="-1">3. 试用SQL查询语句表达下列对教学数据库中三个基本表 S、SC 、C 的查询：</h2><ul><li><code>S(sno, sname, sage, ssex)</code>：学号、姓名、年龄、性别</li><li><code>SC(sno, cno, grade)</code>：学号、课程号、成绩</li><li><code>C(cno, cname, teacher)</code>：课程号、课程名、教师名</li></ul><h3 id="3.1.-%E6%B1%82%E5%B9%B4%E9%BE%84%E5%A4%A7%E4%BA%8E%E6%89%80%E6%9C%89%E5%A5%B3%E5%90%8C%E5%AD%A6%E5%B9%B4%E9%BE%84%E7%9A%84%E7%94%B7%E5%AD%A6%E7%94%9F%E5%A7%93%E5%90%8D%E5%92%8C%E5%B9%B4%E9%BE%84" tabindex="-1">3.1. 求年龄大于所有女同学年龄的男学生姓名和年龄</h3><pre><code class="language-sql">SELECT sname, sage FROM S AS X
WHERE X.ssex = &#39;男&#39; AND X.sage &gt; ALL (
  SELECT sage FROM S AS Y WHERE Y.ssex = &#39;女&#39;
);</code></pre><h3 id="3.2.-%E6%B1%82%E5%B9%B4%E9%BE%84%E5%A4%A7%E4%BA%8E%E5%A5%B3%E5%90%8C%E5%AD%A6%E5%B9%B3%E5%9D%87%E5%B9%B4%E9%BE%84%E7%9A%84%E7%94%B7%E5%AD%A6%E7%94%9F%E5%A7%93%E5%90%8D%E5%92%8C%E5%B9%B4%E9%BE%84" tabindex="-1">3.2. 
求年龄大于女同学平均年龄的男学生姓名和年龄</h3><pre><code class="language-sql">SELECT sname, sage FROM S
WHERE ssex = &#39;男&#39; AND sage &gt; (
  SELECT AVG(sage) FROM S WHERE ssex = &#39;女&#39;
);</code></pre><h3 id="3.3.-%E5%9C%A8sc%E4%B8%AD%E6%A3%80%E7%B4%A2%E6%88%90%E7%BB%A9%E4%B8%BA%E7%A9%BA%E5%80%BC%E7%9A%84%E5%AD%A6%E7%94%9F%E5%AD%A6%E5%8F%B7%E5%92%8C%E8%AF%BE%E7%A8%8B%E5%8F%B7" tabindex="-1">3.3. 在SC中检索成绩为空值的学生学号和课程号</h3><pre><code class="language-sql">SELECT sno, cno FROM SC WHERE grade IS NULL;</code></pre><h3 id="3.4.-%E6%A3%80%E7%B4%A2%E5%A7%93%E5%90%8D%E4%BB%A5wang%E6%89%93%E5%A4%B4%E7%9A%84%E6%89%80%E6%9C%89%E5%AD%A6%E7%94%9F%E7%9A%84%E5%A7%93%E5%90%8D%E5%92%8C%E5%B9%B4%E9%BE%84" tabindex="-1">3.4. 检索姓名以WANG打头的所有学生的姓名和年龄</h3><pre><code class="language-sql">SELECT sname, sage FROM S WHERE sname LIKE &#39;WANG%&#39;;</code></pre><h3 id="3.5.-%E6%A3%80%E7%B4%A2%E5%AD%A6%E5%8F%B7%E6%AF%94wang%E5%90%8C%E5%AD%A6%E5%A4%A7%EF%BC%8C%E8%80%8C%E5%B9%B4%E9%BE%84%E6%AF%94%E4%BB%96%E5%B0%8F%E7%9A%84%E5%AD%A6%E7%94%9F%E5%A7%93%E5%90%8D" tabindex="-1">3.5. 检索学号比WANG同学大，而年龄比他小的学生姓名</h3><pre><code class="language-sql">SELECT sname FROM S
WHERE sno &gt; (SELECT sno FROM S WHERE sname = &#39;WANG&#39;)
  AND sage &lt; (SELECT sage FROM S WHERE sname = &#39;WANG&#39;);</code></pre><h3 id="3.6.-%E7%BB%9F%E8%AE%A1%E6%AF%8F%E9%97%A8%E8%AF%BE%E7%A8%8B%E7%9A%84%E5%AD%A6%E7%94%9F%E9%80%89%E4%BF%AE%E4%BA%BA%E6%95%B0%EF%BC%88%E8%B6%85%E8%BF%872%E4%BA%BA%E7%9A%84%E8%AF%BE%E7%A8%8B%E6%89%8D%E7%BB%9F%E8%AE%A1%EF%BC%89" tabindex="-1">3.6. 统计每门课程的学生选修人数（超过2人的课程才统计）</h3><pre><code class="language-sql">SELECT cno, COUNT(sno) AS 人数 FROM SC
GROUP BY cno HAVING COUNT(sno) &gt; 2
ORDER BY 人数 DESC, cno ASC;</code></pre><h3 id="3.7.-%E6%B1%82liu%E8%80%81%E5%B8%88%E6%89%80%E6%8E%88%E8%AF%BE%E7%A8%8B%E7%9A%84%E6%AF%8F%E9%97%A8%E8%AF%BE%E7%A8%8B%E7%9A%84%E5%AD%A6%E7%94%9F%E5%B9%B3%E5%9D%87%E6%88%90%E7%BB%A9" tabindex="-1">3.7. 
求LIU老师所授课程的每门课程的学生平均成绩</h3><pre><code class="language-sql">SELECT cname, AVG(grade) FROM SC, C
WHERE SC.cno = C.cno AND teacher = &#39;LIU&#39;
GROUP BY C.cno, cname;</code></pre><h3 id="3.8.-%E6%B1%82%E9%80%89%E4%BF%AEc4%E8%AF%BE%E7%A8%8B%E7%9A%84%E5%AD%A6%E7%94%9F%E7%9A%84%E5%B9%B3%E5%9D%87%E5%B9%B4%E9%BE%84" tabindex="-1">3.8. 求选修C4课程的学生的平均年龄</h3><pre><code class="language-sql">SELECT AVG(sage) FROM S, SC
WHERE S.sno = SC.sno AND cno = &#39;4&#39;;</code></pre><h3 id="3.9.-%E7%BB%9F%E8%AE%A1%E6%9C%89%E5%AD%A6%E7%94%9F%E9%80%89%E4%BF%AE%E7%9A%84%E8%AF%BE%E7%A8%8B%E9%97%A8%E6%95%B0" tabindex="-1">3.9. 统计有学生选修的课程门数</h3><pre><code class="language-sql">SELECT COUNT(DISTINCT cno) FROM SC;</code></pre><h3 id="3.10.-%E5%9C%A8%E5%9F%BA%E6%9C%AC%E8%A1%A8sc%E4%B8%AD%E4%BF%AE%E6%94%B94%E5%8F%B7%E8%AF%BE%E7%A8%8B%E7%9A%84%E6%88%90%E7%BB%A9" tabindex="-1">3.10. 在基本表SC中修改4号课程的成绩</h3><pre><code class="language-sql">-- 先提高高于75分的成绩，再提高其余成绩；若先对低分段加成，可能使成绩越过75分后被第二条语句重复加成
UPDATE SC SET grade = grade * 1.04 WHERE cno = &#39;4&#39; AND grade &gt; 75;
UPDATE SC SET grade = grade * 1.05 WHERE cno = &#39;4&#39; AND grade &lt;= 75;</code></pre><h3 id="3.11.-%E6%8A%8A%E4%BD%8E%E4%BA%8E%E6%80%BB%E5%B9%B3%E5%9D%87%E6%88%90%E7%BB%A9%E7%9A%84%E5%A5%B3%E5%90%8C%E5%AD%A6%E6%88%90%E7%BB%A9%E6%8F%90%E9%AB%985%25" tabindex="-1">3.11. 把低于总平均成绩的女同学成绩提高5%</h3><pre><code class="language-sql">UPDATE SC SET grade = grade * 1.05
WHERE grade &lt; (SELECT AVG(grade) FROM SC)
  AND sno IN (SELECT sno FROM S WHERE ssex = &#39;女&#39;);</code></pre><h3 id="3.12.-%E6%8A%8A%E9%80%89%E4%BF%AE%E6%95%B0%E6%8D%AE%E5%BA%93%E5%8E%9F%E7%90%86%E8%AF%BE%E4%B8%8D%E5%8F%8A%E6%A0%BC%E7%9A%84%E6%88%90%E7%BB%A9%E6%94%B9%E4%B8%BA%E7%A9%BA%E5%80%BC" tabindex="-1">3.12. 
把选修数据库原理课不及格的成绩改为空值</h3><pre><code class="language-sql">UPDATE SC SET grade = NULL
WHERE grade &lt; 60 AND cno IN (
  SELECT cno FROM C WHERE cname = &#39;数据库原理&#39;
);</code></pre><h3 id="3.13.-%E6%8A%8Awang%E5%90%8C%E5%AD%A6%E7%9A%84%E5%AD%A6%E4%B9%A0%E9%80%89%E8%AF%BE%E5%92%8C%E6%88%90%E7%BB%A9%E5%85%A8%E9%83%A8%E5%88%A0%E5%8E%BB" tabindex="-1">3.13. 把WANG同学的学习选课和成绩全部删去</h3><pre><code class="language-sql">DELETE FROM SC WHERE sno IN (
  SELECT sno FROM S WHERE sname = &#39;WANG&#39;
);</code></pre><h3 id="3.14.-%E5%9C%A8%E5%9F%BA%E6%9C%AC%E8%A1%A8sc%E4%B8%AD%E5%88%A0%E9%99%A4%E5%B0%9A%E6%97%A0%E6%88%90%E7%BB%A9%E7%9A%84%E9%80%89%E8%AF%BE%E5%85%83%E7%BB%84" tabindex="-1">3.14. 在基本表SC中删除尚无成绩的选课元组</h3><pre><code class="language-sql">DELETE FROM SC WHERE grade IS NULL;</code></pre><h3 id="3.15.-%E5%9C%A8%E5%9F%BA%E6%9C%AC%E8%A1%A8s%E4%B8%AD%E6%8F%92%E5%85%A5%E4%B8%80%E4%B8%AA%E5%AD%A6%E7%94%9F%E5%85%83%E7%BB%84" tabindex="-1">3.15. 在基本表S中插入一个学生元组</h3><pre><code class="language-sql">INSERT INTO S(sno, sname, sage) VALUES(&#39;S9&#39;, &#39;WU&#39;, 18);</code></pre>]]>
                    </description>
                    <pubDate>Fri, 14 Nov 2025 03:53:59 EST</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[MySql入门：备份恢复与安全管理]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2953</link>
                    <description>
                            <![CDATA[<h1 id="mysql%E5%A4%87%E4%BB%BD%E6%81%A2%E5%A4%8D%E4%B8%8E%E5%AE%89%E5%85%A8%E7%AE%A1%E7%90%86" tabindex="-1">MySQL备份恢复与安全管理</h1><blockquote><p>数据是企业的核心资产，确保数据安全性和可恢复性是DBA最重要的职责。今天，我们将深入探讨MySQL的备份恢复策略和安全管理制度，帮助你构建既安全又可靠的数据库环境。</p></blockquote><h2 id="1.-%E5%A4%87%E4%BB%BD%E7%AD%96%E7%95%A5%E4%B8%8E%E5%AE%9E%E6%96%BD" tabindex="-1">1. 备份策略与实施</h2><h3 id="%E9%80%BB%E8%BE%91%E5%A4%87%E4%BB%BD%EF%BC%9Amysqldump%E5%AE%9E%E7%94%A8%E6%8A%80%E5%B7%A7" tabindex="-1">逻辑备份：mysqldump实用技巧</h3><p><strong>基础备份命令：</strong></p><pre><code class="language-bash"># 完整数据库备份
mysqldump -u root -p --all-databases --single-transaction --master-data=2 --flush-logs &gt; full_backup_$(date +%Y%m%d).sql

# 单个数据库备份
mysqldump -u root -p --databases company --single-transaction --routines --triggers --events &gt; company_backup_$(date +%Y%m%d).sql

# 单个表备份
mysqldump -u root -p company employees departments --single-transaction --where=&quot;salary&gt;5000&quot; &gt; high_salary_employees.sql

# 压缩备份
mysqldump -u root -p --all-databases --single-transaction | gzip &gt; full_backup_$(date +%Y%m%d).sql.gz</code></pre><p><strong>高级备份选项：</strong></p><pre><code class="language-bash"># 生产环境完整备份脚本
mysqldump -u backup_user -p&#39;secure_password&#39; \
  --all-databases \
  --single-transaction \
  --master-data=2 \
  --flush-logs \
  --routines \
  --triggers \
  --events \
  --hex-blob \
  --complete-insert \
  --extended-insert \
  --max-allowed-packet=1G \
  --set-gtid-purged=ON \
  --result-file=/backup/full_backup_$(date +%Y%m%d_%H%M%S).sql

# 分库备份脚本
for DB in $(mysql -u root -p&#39;password&#39; -e &quot;SHOW DATABASES;&quot; | grep -Ev &quot;(Database|information_schema|performance_schema|sys)&quot;)
do
    mysqldump -u root -p&#39;password&#39; --databases $DB --single-transaction --routines --triggers &gt; /backup/${DB}_backup_$(date +%Y%m%d).sql
done</code></pre><p><strong>备份验证脚本：</strong></p><pre><code class="language-bash">#!/bin/bash
# 
backup_verify.shBACKUP_FILE=$1LOG_FILE=&quot;/var/log/mysql/backup_verify.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}verify_backup() {    local file=$1        log &quot;开始验证备份文件: $file&quot;        # 检查文件是否存在    if [ ! -f &quot;$file&quot; ]; then        log &quot;错误: 备份文件不存在 - $file&quot;        return 1    fi        # 检查文件大小    local file_size=$(stat -f%z &quot;$file&quot; 2&gt;/dev/null || stat -c%s &quot;$file&quot; 2&gt;/dev/null)    if [ &quot;$file_size&quot; -lt 1024 ]; then        log &quot;错误: 备份文件过小 - $file&quot;        return 1    fi        # 验证SQL文件完整性    if [[ &quot;$file&quot; == *.sql ]]; then        # 检查SQL文件头        if ! head -n 10 &quot;$file&quot; | grep -q &quot;MySQL dump&quot;; then            log &quot;错误: 无效的SQL备份文件 - $file&quot;            return 1        fi                # 检查SQL文件尾        if ! tail -n 5 &quot;$file&quot; | grep -q &quot;Dump completed&quot;; then            log &quot;警告: 备份文件可能不完整 - $file&quot;        fi    fi        # 验证压缩文件    if [[ &quot;$file&quot; == *.gz ]]; then        if ! 
gzip -t &quot;$file&quot; 2&gt;/dev/null; then            log &quot;错误: 压缩文件损坏 - $file&quot;            return 1        fi    fi        log &quot;备份文件验证通过: $file&quot;    return 0}# 执行验证verify_backup &quot;$BACKUP_FILE&quot;exit $?</code></pre><h3 id="%E7%89%A9%E7%90%86%E5%A4%87%E4%BB%BD%EF%BC%9Axtrabackup%E5%AE%9E%E6%88%98" tabindex="-1">物理备份：XtraBackup实战</h3><p><strong>完整备份与恢复：</strong></p><pre><code class="language-bash"># 安装XtraBackup# Ubuntu/Debianwget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.debsudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.debsudo apt-get updatesudo apt-get install percona-xtrabackup-80# 完整备份xtrabackup --backup --user=backup_user --password=&#39;secure_password&#39; --target-dir=/backup/full_$(date +%Y%m%d_%H%M%S)# 准备备份（应用日志）xtrabackup --prepare --target-dir=/backup/full_20231201_120000# 恢复备份systemctl stop mysqlmv /var/lib/mysql /var/lib/mysql_oldxtrabackup --copy-back --target-dir=/backup/full_20231201_120000chown -R mysql:mysql /var/lib/mysqlsystemctl start mysql</code></pre><p><strong>增量备份策略：</strong></p><pre><code class="language-bash">#!/bin/bash# incremental_backup.shBASE_DIR=&quot;/backup&quot;FULL_BACKUP_DIR=&quot;$BASE_DIR/full_$(date +%Y%m%d)&quot;INCREMENTAL_DIR=&quot;$BASE_DIR/inc_$(date +%Y%m%d_%H%M%S)&quot;BACKUP_USER=&quot;backup_user&quot;BACKUP_PASSWORD=&quot;secure_password&quot;LOG_FILE=&quot;/var/log/mysql/xtrabackup.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}# 检查基础备份是否存在find_base_backup() {    find $BASE_DIR -name &quot;full_*&quot; -type d | sort -r | head -1}perform_full_backup() {    log &quot;开始完整备份&quot;    xtrabackup --backup --user=$BACKUP_USER --password=$BACKUP_PASSWORD --target-dir=$FULL_BACKUP_DIR    if [ $? 
-eq 0 ]; then        log &quot;完整备份完成: $FULL_BACKUP_DIR&quot;        echo $FULL_BACKUP_DIR &gt; $BASE_DIR/latest_full_backup    else        log &quot;完整备份失败&quot;        exit 1    fi}perform_incremental_backup() {    local base_dir=$1    log &quot;开始增量备份，基于: $base_dir&quot;        xtrabackup --backup --user=$BACKUP_USER --password=$BACKUP_PASSWORD \        --target-dir=$INCREMENTAL_DIR \        --incremental-basedir=$base_dir        if [ $? -eq 0 ]; then        log &quot;增量备份完成: $INCREMENTAL_DIR&quot;    else        log &quot;增量备份失败&quot;        exit 1    fi}# 主逻辑BASE_BACKUP=$(find_base_backup)if [ -z &quot;$BASE_BACKUP&quot; ] || [ $(find $BASE_BACKUP -name &quot;xtrabackup_checkpoints&quot; -mtime +7 | wc -l) -gt 0 ]; then    # 没有基础备份或基础备份超过7天，执行完整备份    perform_full_backupelse    # 执行增量备份    perform_incremental_backup $BASE_BACKUPfi</code></pre><p><strong>备份恢复演练：</strong></p><pre><code class="language-bash">#!/bin/bash# disaster_recovery_drill.shRECOVERY_DIR=&quot;/recovery&quot;BACKUP_SOURCE=&quot;/backup&quot;MYSQL_DATA_DIR=&quot;/var/lib/mysql&quot;LOG_FILE=&quot;/var/log/mysql/recovery_drill.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}prepare_recovery_environment() {    log &quot;准备恢复环境&quot;        # 停止MySQL服务    systemctl stop mysql        # 备份当前数据    mv $MYSQL_DATA_DIR ${MYSQL_DATA_DIR}_backup_$(date +%Y%m%d_%H%M%S)        # 创建恢复目录    mkdir -p $RECOVERY_DIR}restore_from_backup() {    local backup_dir=$1        log &quot;从备份恢复: $backup_dir&quot;        # 准备备份    xtrabackup --prepare --apply-log-only --target-dir=$backup_dir        # 恢复备份    xtrabackup --copy-back --target-dir=$backup_dir        # 设置权限    chown -R mysql:mysql $MYSQL_DATA_DIR}verify_recovery() {    log &quot;验证恢复结果&quot;        # 启动MySQL    systemctl start mysql        # 等待服务启动    sleep 30        # 基础验证    if mysql -u root -p&#39;password&#39; -e &quot;SELECT 1;&quot; &gt; /dev/null 2&gt;&amp;1; then        log &quot;MySQL服务启动成功&quot;      
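        # 补充示例（非原脚本内容）：在服务启动成功后，可先做一次行数抽查，快速确认恢复出的数据量级是否符合预期
        # 此处假设恢复库中存在 company.employees 表（与下文 CHECK TABLE 的假设一致）
        local employee_rows=$(mysql -u root -p&#39;password&#39; -N -e &quot;SELECT COUNT(*) FROM company.employees;&quot; 2&gt;/dev/null)
        log &quot;company.employees 行数抽查: ${employee_rows:-查询失败}&quot;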
          # 验证关键表        local table_count=$(mysql -u root -p&#39;password&#39; -N -e &quot;SELECT COUNT(*) FROM information_schema.tables WHERE table_schema NOT IN (&#39;mysql&#39;,&#39;information_schema&#39;,&#39;performance_schema&#39;,&#39;sys&#39;);&quot;)        log &quot;发现 $table_count 个用户表&quot;                # 验证数据完整性        mysql -u root -p&#39;password&#39; -e &quot;CHECK TABLE company.employees EXTENDED;&quot; &gt;&gt; $LOG_FILE                return 0    else        log &quot;MySQL服务启动失败&quot;        return 1    fi}# 执行恢复演练prepare_recovery_environment# 查找最新的完整备份LATEST_FULL_BACKUP=$(find $BACKUP_SOURCE -name &quot;full_*&quot; -type d | sort -r | head -1)if [ -n &quot;$LATEST_FULL_BACKUP&quot; ]; then    restore_from_backup $LATEST_FULL_BACKUP    verify_recoveryelse    log &quot;错误: 未找到完整备份&quot;    exit 1fi</code></pre><h3 id="%E5%A2%9E%E9%87%8F%E5%A4%87%E4%BB%BD%E4%B8%8E%E5%B7%AE%E5%BC%82%E5%A4%87%E4%BB%BD" tabindex="-1">增量备份与差异备份</h3><p><strong>二进制日志备份：</strong></p><pre><code class="language-sql">-- 启用二进制日志-- 在my.cnf中配置/*[mysqld]log_bin = /var/lib/mysql/mysql-binexpire_logs_days = 7max_binlog_size = 100M*/-- 查看二进制日志状态SHOW BINARY LOGS;/*+------------------+-----------+| Log_name         | File_size |+------------------+-----------+| mysql-bin.000001 |       194 || mysql-bin.000002 |       456 || mysql-bin.000003 |       123 |+------------------+-----------+*/-- 刷新日志（创建新的二进制日志文件）FLUSH BINARY LOGS;-- 查看当前正在使用的二进制日志SHOW MASTER STATUS;</code></pre><p><strong>自动化二进制日志备份：</strong></p><pre><code class="language-bash">#!/bin/bash# binlog_backup.shMYSQL_USER=&quot;backup_user&quot;MYSQL_PASSWORD=&quot;secure_password&quot;BACKUP_DIR=&quot;/backup/binlog&quot;LOG_FILE=&quot;/var/log/mysql/binlog_backup.log&quot;RETENTION_DAYS=7log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}backup_binlog() {    log &quot;开始二进制日志备份&quot;        # 获取当前二进制日志文件    CURRENT_BINLOG=$(mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -N -e &quot;SHOW 
MASTER STATUS&quot; | awk &#39;{print $1}&#39;)        # 备份所有未备份的二进制日志    for BINLOG in $(mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -N -e &quot;SHOW BINARY LOGS&quot; | awk &#39;{print $1}&#39; | grep -v &quot;$CURRENT_BINLOG&quot;); do        if [ ! -f &quot;$BACKUP_DIR/$BINLOG&quot; ]; then            log &quot;备份二进制日志: $BINLOG&quot;            cp /var/lib/mysql/$BINLOG $BACKUP_DIR/                        # 验证备份            if cmp /var/lib/mysql/$BINLOG $BACKUP_DIR/$BINLOG; then                log &quot;备份验证成功: $BINLOG&quot;            else                log &quot;备份验证失败: $BINLOG&quot;            fi        fi    done}purge_old_backups() {    log &quot;清理过期备份（保留 $RETENTION_DAYS 天）&quot;    find $BACKUP_DIR -name &quot;mysql-bin.*&quot; -mtime +$RETENTION_DAYS -delete}# 创建备份目录mkdir -p $BACKUP_DIR# 执行备份backup_binlogpurge_old_backupslog &quot;二进制日志备份完成&quot;</code></pre><h3 id="%E5%A4%87%E4%BB%BD%E5%8E%8B%E7%BC%A9%E4%B8%8E%E5%8A%A0%E5%AF%86" tabindex="-1">备份压缩与加密</h3><p><strong>加密备份方案：</strong></p><pre><code class="language-bash">#!/bin/bash# encrypted_backup.shBACKUP_DIR=&quot;/backup/encrypted&quot;MYSQL_USER=&quot;backup_user&quot;MYSQL_PASSWORD=&quot;secure_password&quot;ENCRYPTION_KEY=&quot;/etc/mysql/backup.key&quot;LOG_FILE=&quot;/var/log/mysql/encrypted_backup.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}generate_encryption_key() {    if [ ! -f &quot;$ENCRYPTION_KEY&quot; ]; then        log &quot;生成加密密钥&quot;        openssl rand -base64 32 &gt; $ENCRYPTION_KEY        chmod 600 $ENCRYPTION_KEY    fi}create_encrypted_backup() {    local backup_file=&quot;$BACKUP_DIR/backup_$(date +%Y%m%d_%H%M%S).xb.enc&quot;        log &quot;创建加密备份: $backup_file&quot;        # 使用XtraBackup创建备份并立即加密    xtrabackup --backup --user=$MYSQL_USER --password=$MYSQL_PASSWORD --stream=xbstream | \    openssl enc -aes-256-cbc -salt -pass file:$ENCRYPTION_KEY -out $backup_file        if [ $? 
-eq 0 ]; then        log &quot;加密备份创建成功: $backup_file&quot;    else        log &quot;加密备份创建失败&quot;        exit 1    fi}verify_encrypted_backup() {    local backup_file=$1        log &quot;验证加密备份: $backup_file&quot;        # 尝试解密备份头信息    if openssl enc -aes-256-cbc -d -pass file:$ENCRYPTION_KEY -in $backup_file | head -c 100 | strings | grep -q &quot;MySQL&quot;; then        log &quot;加密备份验证成功&quot;        return 0    else        log &quot;加密备份验证失败&quot;        return 1    fi}# 主逻辑generate_encryption_keymkdir -p $BACKUP_DIRcreate_encrypted_backup# 验证最新的备份LATEST_BACKUP=$(ls -t $BACKUP_DIR/*.enc | head -1)if [ -n &quot;$LATEST_BACKUP&quot; ]; then    verify_encrypted_backup $LATEST_BACKUPfi</code></pre><p><strong>压缩备份优化：</strong></p><pre><code class="language-bash">#!/bin/bash# compressed_backup.shBACKUP_DIR=&quot;/backup/compressed&quot;MYSQL_USER=&quot;backup_user&quot;MYSQL_PASSWORD=&quot;secure_password&quot;COMPRESSION_LEVEL=6  # 1-9，数字越大压缩率越高但速度越慢LOG_FILE=&quot;/var/log/mysql/compressed_backup.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}create_compressed_backup() {    local backup_file=&quot;$BACKUP_DIR/backup_$(date +%Y%m%d_%H%M%S).sql.gz&quot;        log &quot;创建压缩备份 (级别: $COMPRESSION_LEVEL)&quot;        # 使用mysqldump和gzip创建压缩备份    mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD --all-databases --single-transaction --routines --triggers --events | \    gzip -$COMPRESSION_LEVEL &gt; $backup_file        local backup_size=$(du -h $backup_file | cut -f1)    log &quot;压缩备份完成: $backup_file (大小: $backup_size)&quot;}create_parallel_compressed_backup() {    local backup_file=&quot;$BACKUP_DIR/backup_$(date +%Y%m%d_%H%M%S).sql.gz&quot;        log &quot;创建并行压缩备份&quot;        # 使用pigz进行并行压缩（如果可用）    if command -v pigz &gt;/dev/null 2&gt;&amp;1; then        mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD --all-databases --single-transaction | \        pigz -p 4 -$COMPRESSION_LEVEL &gt; $backup_file    else        mysqldump -u 
$MYSQL_USER -p$MYSQL_PASSWORD --all-databases --single-transaction | \        gzip -$COMPRESSION_LEVEL &gt; $backup_file    fi        local backup_size=$(du -h $backup_file | cut -f1)    log &quot;并行压缩备份完成: $backup_file (大小: $backup_size)&quot;}# 创建备份目录mkdir -p $BACKUP_DIR# 根据系统资源选择备份方式if [ $(nproc) -gt 2 ]; then    create_parallel_compressed_backupelse    create_compressed_backupfi</code></pre><h3 id="%E4%BA%91%E7%8E%AF%E5%A2%83%E5%A4%87%E4%BB%BD%E6%96%B9%E6%A1%88" tabindex="-1">云环境备份方案</h3><p><strong>AWS S3备份方案：</strong></p><pre><code class="language-bash">#!/bin/bash# s3_backup.shBACKUP_DIR=&quot;/backup/s3_upload&quot;S3_BUCKET=&quot;my-company-mysql-backups&quot;S3_PATH=&quot;mysql/$(hostname)&quot;MYSQL_USER=&quot;backup_user&quot;MYSQL_PASSWORD=&quot;secure_password&quot;RETENTION_DAYS=30LOG_FILE=&quot;/var/log/mysql/s3_backup.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}create_backup() {    local backup_file=&quot;$BACKUP_DIR/backup_$(date +%Y%m%d_%H%M%S).sql.gz&quot;        log &quot;创建备份文件: $backup_file&quot;        mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD --all-databases --single-transaction --routines --triggers --events | \    gzip &gt; $backup_file        echo $backup_file}upload_to_s3() {    local backup_file=$1    local s3_key=&quot;$S3_PATH/$(basename $backup_file)&quot;        log &quot;上传到S3: s3://$S3_BUCKET/$s3_key&quot;        if aws s3 cp $backup_file s3://$S3_BUCKET/$s3_key; then        log &quot;S3上传成功&quot;        return 0    else        log &quot;S3上传失败&quot;        return 1    fi}cleanup_old_backups() {    log &quot;清理本地过期备份&quot;    find $BACKUP_DIR -name &quot;*.sql.gz&quot; -mtime +7 -delete        log &quot;清理S3过期备份&quot;    aws s3 ls s3://$S3_BUCKET/$S3_PATH/ | while read line; do        create_date=$(echo $line | awk &#39;{print $1&quot; &quot;$2}&#39;)        create_date_epoch=$(date -d &quot;$create_date&quot; +%s)        retention_epoch=$(date -d &quot;$RETENTION_DAYS days 
ago&quot; +%s)                if [ $create_date_epoch -lt $retention_epoch ]; then            file_name=$(echo $line | awk &#39;{print $4}&#39;)            aws s3 rm s3://$S3_BUCKET/$S3_PATH/$file_name            log &quot;删除过期S3备份: $file_name&quot;        fi    done}verify_s3_backup() {    local backup_file=$1    local s3_key=&quot;$S3_PATH/$(basename $backup_file)&quot;        log &quot;验证S3备份完整性&quot;        # 下载备份文件    local temp_file=&quot;/tmp/verify_$(basename $backup_file)&quot;    aws s3 cp s3://$S3_BUCKET/$s3_key $temp_file        # 比较本地和S3的文件    if cmp $backup_file $temp_file; then        log &quot;S3备份验证成功&quot;        rm $temp_file        return 0    else        log &quot;S3备份验证失败&quot;        rm $temp_file        return 1    fi}# 主逻辑mkdir -p $BACKUP_DIRBACKUP_FILE=$(create_backup)if [ -n &quot;$BACKUP_FILE&quot; ]; then    if upload_to_s3 $BACKUP_FILE; then        verify_s3_backup $BACKUP_FILE    fificleanup_old_backups</code></pre><h2 id="2.-%E6%95%B0%E6%8D%AE%E6%81%A2%E5%A4%8D%E4%B8%8E%E7%81%BE%E9%9A%BE%E6%81%A2%E5%A4%8D" tabindex="-1">2. 
数据恢复与灾难恢复</h2><h3 id="%E5%9F%BA%E4%BA%8E%E6%97%B6%E9%97%B4%E7%82%B9%E7%9A%84%E6%81%A2%E5%A4%8D%EF%BC%88pitr%EF%BC%89" tabindex="-1">基于时间点的恢复（PITR）</h3><p><strong>PITR恢复流程：</strong></p><pre><code class="language-bash">#!/bin/bash# pitr_recovery.shRESTORE_TIME=&quot;2023-12-01 14:30:00&quot;BACKUP_DIR=&quot;/backup&quot;BINLOG_DIR=&quot;/var/lib/mysql&quot;RECOVERY_DIR=&quot;/recovery&quot;MYSQL_DATA_DIR=&quot;/var/lib/mysql&quot;LOG_FILE=&quot;/var/log/mysql/pitr_recovery.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}find_relevant_backup() {    log &quot;查找适用于时间点 $RESTORE_TIME 的备份&quot;        # 查找在恢复时间之前的最新完整备份    for BACKUP in $(ls -t $BACKUP_DIR/full_* 2&gt;/dev/null); do        local backup_time=$(stat -c %y $BACKUP/xtrabackup_info | cut -d&#39; &#39; -f1,2 | cut -d&#39;.&#39; -f1)        local backup_epoch=$(date -d &quot;$backup_time&quot; +%s)        local restore_epoch=$(date -d &quot;$RESTORE_TIME&quot; +%s)                if [ $backup_epoch -le $restore_epoch ]; then            echo $BACKUP            return 0        fi    done        log &quot;错误: 未找到合适的完整备份&quot;    exit 1}extract_binlog_events() {    local start_time=$1    local stop_time=$2    local output_file=$3        log &quot;提取二进制日志事件: $start_time 到 $stop_time&quot;        # 查找包含时间范围的二进制日志文件    for BINLOG in $(ls -tr $BINLOG_DIR/mysql-bin.* 2&gt;/dev/null | grep -v &#39;.index&#39;); do        local first_event_time=$(mysqlbinlog $BINLOG | grep -m1 &quot;end_log_pos&quot; | awk &#39;{print $1, $2}&#39; | tr -d &#39;#&#39;)        local last_event_time=$(mysqlbinlog $BINLOG | tail -10 | grep &quot;end_log_pos&quot; | tail -1 | awk &#39;{print $1, $2}&#39; | tr -d &#39;#&#39;)                if [ -n &quot;$first_event_time&quot; ] &amp;&amp; [ -n &quot;$last_event_time&quot; ]; then            local first_epoch=$(date -d &quot;$first_event_time&quot; +%s 2&gt;/dev/null || echo 0)            local last_epoch=$(date -d &quot;$last_event_time&quot; 
+%s 2&gt;/dev/null || echo 0)            local start_epoch=$(date -d &quot;$start_time&quot; +%s)            local stop_epoch=$(date -d &quot;$stop_time&quot; +%s)                        if [ $last_epoch -ge $start_epoch ] &amp;&amp; [ $first_epoch -le $stop_epoch ]; then                log &quot;处理二进制日志: $BINLOG&quot;                mysqlbinlog --start-datetime=&quot;$start_time&quot; --stop-datetime=&quot;$stop_time&quot; $BINLOG &gt;&gt; $output_file            fi        fi    done}perform_pitr_recovery() {    local base_backup=$1        log &quot;执行时间点恢复&quot;        # 准备恢复环境    systemctl stop mysql    mv $MYSQL_DATA_DIR ${MYSQL_DATA_DIR}_backup_$(date +%Y%m%d_%H%M%S)        # 恢复基础备份    xtrabackup --copy-back --target-dir=$base_backup    chown -R mysql:mysql $MYSQL_DATA_DIR        # 启动MySQL到恢复模式    systemctl start mysql        # 获取备份时间    local backup_time=$(stat -c %y $base_backup/xtrabackup_info | cut -d&#39; &#39; -f1,2 | cut -d&#39;.&#39; -f1)        # 提取和应用二进制日志    local binlog_events=&quot;/tmp/binlog_events.sql&quot;    echo &quot;&quot; &gt; $binlog_events        extract_binlog_events &quot;$backup_time&quot; &quot;$RESTORE_TIME&quot; $binlog_events        # 应用二进制日志事件    if [ -s $binlog_events ]; then        log &quot;应用二进制日志事件&quot;        mysql -u root -p&#39;password&#39; &lt; $binlog_events    else        log &quot;没有需要应用的二进制日志事件&quot;    fi        log &quot;时间点恢复完成&quot;}# 主逻辑BASE_BACKUP=$(find_relevant_backup)perform_pitr_recovery $BASE_BACKUP</code></pre><h3 id="%E8%AF%AF%E6%93%8D%E4%BD%9C%E6%95%B0%E6%8D%AE%E6%81%A2%E5%A4%8D%E6%96%B9%E6%A1%88" tabindex="-1">误操作数据恢复方案</h3><p><strong>Flashback工具使用：</strong></p><pre><code class="language-sql">-- 安装mysqlbinlog_flashback工具-- 使用my2sql或binlog2sql进行闪回-- 示例：恢复误删除的数据# 使用binlog2sql解析二进制日志python binlog2sql/binlog2sql.py -h127.0.0.1 -P3306 -uroot -p&#39;password&#39; -dcompany -temployees --start-file=&#39;mysql-bin.000001&#39; --start-pos=4 --stop-pos=1000 -B-- 输出闪回SQL/*INSERT INTO 
&#96;company&#96;.&#96;employees&#96;(&#96;create_time&#96;, &#96;phone&#96;, &#96;name&#96;, &#96;id&#96;, &#96;email&#96;) VALUES (&#39;2023-01-01 10:00:00&#39;, &#39;13800138000&#39;, &#39;张三&#39;, 1, &#39;zhangsan@company.com&#39;); INSERT INTO &#96;company&#96;.&#96;employees&#96;(&#96;create_time&#96;, &#96;phone&#96;, &#96;name&#96;, &#96;id&#96;, &#96;email&#96;) VALUES (&#39;2023-01-02 11:00:00&#39;, &#39;13900139000&#39;, &#39;李四&#39;, 2, &#39;lisi@company.com&#39;);*/</code></pre><p><strong>基于备份的误操作恢复：</strong></p><pre><code class="language-bash">#!/bin/bash# point_in_time_restore.shDB_NAME=&quot;company&quot;TABLE_NAME=&quot;employees&quot;BACKUP_DIR=&quot;/backup&quot;RESTORE_TIME=&quot;2023-12-01 10:00:00&quot;  # 误操作之前的时间LOG_FILE=&quot;/var/log/mysql/point_restore.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}create_restore_database() {    local restore_db=&quot;${DB_NAME}_restore_$(date +%Y%m%d_%H%M%S)&quot;        log &quot;创建恢复数据库: $restore_db&quot;        mysql -u root -p&#39;password&#39; -e &quot;CREATE DATABASE $restore_db;&quot;    echo $restore_db}restore_table_to_point() {    local restore_db=$1    local backup_file=$(find $BACKUP_DIR -name &quot;*${DB_NAME}*&quot; -type f | sort -r | head -1)        if [ -z &quot;$backup_file&quot; ]; then        log &quot;错误: 未找到备份文件&quot;        exit 1    fi        log &quot;从备份恢复表结构&quot;        # 提取表结构    if [[ $backup_file == *.sql.gz ]]; then        gunzip -c $backup_file | sed -n &quot;/^-- Table structure for table \&#96;$TABLE_NAME\&#96;/,/^-- Table structure/p&quot; | \        mysql -u root -p&#39;password&#39; $restore_db    else        sed -n &quot;/^-- Table structure for table \&#96;$TABLE_NAME\&#96;/,/^-- Table structure/p&quot; $backup_file | \        mysql -u root -p&#39;password&#39; $restore_db    fi        # 应用二进制日志到指定时间点    log &quot;应用二进制日志到时间点: $RESTORE_TIME&quot;        local 
binlog_events=&quot;/tmp/binlog_events_$restore_db.sql&quot;    mysqlbinlog --database=$DB_NAME --stop-datetime=&quot;$RESTORE_TIME&quot; /var/lib/mysql/mysql-bin.* | \    sed -n &quot;/^### INSERT INTO \&#96;$DB_NAME\&#96;.\&#96;$TABLE_NAME\&#96;/,/^### INSERT INTO/p&quot; | \    sed &#39;s/^### //&#39; &gt; $binlog_events        mysql -u root -p&#39;password&#39; $restore_db &lt; $binlog_events    rm $binlog_events        log &quot;表恢复完成: $restore_db.$TABLE_NAME&quot;}compare_and_restore() {    local restore_db=$1        log &quot;比较并恢复数据&quot;        # 生成恢复SQL    local restore_sql=&quot;/tmp/restore_data.sql&quot;        cat &gt; $restore_sql &lt;&lt; EOF-- 插入缺失的记录INSERT INTO $DB_NAME.$TABLE_NAME SELECT * FROM $restore_db.$TABLE_NAME rWHERE NOT EXISTS (    SELECT 1 FROM $DB_NAME.$TABLE_NAME c     WHERE c.id = r.id);-- 更新被修改的记录UPDATE $DB_NAME.$TABLE_NAME cJOIN $restore_db.$TABLE_NAME r ON c.id = r.idSET     c.name = r.name,    c.email = r.email,    c.phone = r.phone,    c.updated_at = NOW()WHERE c.name != r.name    OR c.email != r.email    OR c.phone != r.phone;EOF    mysql -u root -p&#39;password&#39; &lt; $restore_sql    rm $restore_sql        log &quot;数据恢复完成&quot;}# 主逻辑RESTORE_DB=$(create_restore_database)restore_table_to_point $RESTORE_DBcompare_and_restore $RESTORE_DB# 清理恢复数据库mysql -u root -p&#39;password&#39; -e &quot;DROP DATABASE $RESTORE_DB;&quot;</code></pre><h3 id="%E4%B8%BB%E4%BB%8E%E5%88%87%E6%8D%A2%E4%B8%8E%E6%95%B0%E6%8D%AE%E9%87%8D%E5%BB%BA" tabindex="-1">主从切换与数据重建</h3><p><strong>计划内主从切换：</strong></p><pre><code class="language-bash">#!/bin/bash# planned_failover.shCURRENT_MASTER=&quot;192.168.1.100&quot;NEW_MASTER=&quot;192.168.1.101&quot;MYSQL_USER=&quot;repl_user&quot;MYSQL_PASSWORD=&quot;repl_password&quot;LOG_FILE=&quot;/var/log/mysql/planned_failover.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}check_replication_health() {    log &quot;检查复制健康状况&quot;        # 检查主库    local 
master_status=$(mysql -h $CURRENT_MASTER -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;SHOW MASTER STATUS\G&quot;)    if [ $? -ne 0 ]; then        log &quot;错误: 无法连接主库 $CURRENT_MASTER&quot;        exit 1    fi        # 检查从库延迟    local slave_status=$(mysql -h $NEW_MASTER -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;SHOW SLAVE STATUS\G&quot;)    local seconds_behind=$(echo &quot;$slave_status&quot; | grep &quot;Seconds_Behind_Master&quot; | awk &#39;{print $2}&#39;)        if [ &quot;$seconds_behind&quot; != &quot;0&quot; ]; then        log &quot;警告: 从库有延迟 ($seconds_behind 秒)&quot;        read -p &quot;是否继续? (y/n): &quot; -n 1 -r        echo        if [[ ! $REPLY =~ ^[Yy]$ ]]; then            exit 1        fi    fi        log &quot;复制健康状况良好&quot;}perform_failover() {    log &quot;开始主从切换&quot;        # 1. 设置原主库为只读    log &quot;设置原主库为只读模式&quot;    mysql -h $CURRENT_MASTER -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;SET GLOBAL read_only = ON;&quot;        # 2. 等待从库应用所有日志    log &quot;等待从库应用所有日志&quot;    while true; do        local slave_status=$(mysql -h $NEW_MASTER -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;SHOW SLAVE STATUS\G&quot;)        local seconds_behind=$(echo &quot;$slave_status&quot; | grep &quot;Seconds_Behind_Master&quot; | awk &#39;{print $2}&#39;)        local io_running=$(echo &quot;$slave_status&quot; | grep &quot;Slave_IO_Running&quot; | awk &#39;{print $2}&#39;)        local sql_running=$(echo &quot;$slave_status&quot; | grep &quot;Slave_SQL_Running&quot; | awk &#39;{print $2}&#39;)                if [ &quot;$seconds_behind&quot; = &quot;0&quot; ] &amp;&amp; [ &quot;$io_running&quot; = &quot;Yes&quot; ] &amp;&amp; [ &quot;$sql_running&quot; = &quot;Yes&quot; ]; then            break        fi        sleep 1    done        # 3. 停止从库复制    log &quot;停止新主库的复制&quot;    mysql -h $NEW_MASTER -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;STOP SLAVE;&quot;        # 4. 
记录新主库的二进制日志位置    local new_master_status=$(mysql -h $NEW_MASTER -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;SHOW MASTER STATUS\G&quot;)    local new_master_file=$(echo &quot;$new_master_status&quot; | grep &quot;File&quot; | awk &#39;{print $2}&#39;)    local new_master_position=$(echo &quot;$new_master_status&quot; | grep &quot;Position&quot; | awk &#39;{print $2}&#39;)        # 5. 设置新主库为可写    log &quot;设置新主库为可写模式&quot;    mysql -h $NEW_MASTER -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;SET GLOBAL read_only = OFF;&quot;        # 6. 配置其他从库指向新主库    log &quot;重新配置其他从库&quot;    # 这里可以添加其他从库的重新配置逻辑        log &quot;主从切换完成&quot;    log &quot;新主库二进制日志位置: $new_master_file $new_master_position&quot;}# 主逻辑check_replication_healthperform_failover</code></pre><h3 id="%E7%81%BE%E9%9A%BE%E6%81%A2%E5%A4%8D%E6%BC%94%E7%BB%83" tabindex="-1">灾难恢复演练</h3><p><strong>完整灾难恢复演练：</strong></p><pre><code class="language-bash">#!/bin/bash# disaster_recovery_test.shDR_SITE_MYSQL=&quot;192.168.2.100&quot;BACKUP_SERVER=&quot;192.168.3.100&quot;MYSQL_USER=&quot;dr_user&quot;MYSQL_PASSWORD=&quot;dr_password&quot;LOG_FILE=&quot;/var/log/mysql/dr_test.log&quot;log() {    echo &quot;$(date &#39;+%Y-%m-%d %H:%M:%S&#39;) - $1&quot; &gt;&gt; $LOG_FILE}verify_dr_environment() {    log &quot;验证灾备环境&quot;        # 检查网络连通性    if ! ping -c 3 $DR_SITE_MYSQL &gt; /dev/null 2&gt;&amp;1; then        log &quot;错误: 无法连接到灾备MySQL服务器&quot;        return 1    fi        # 检查MySQL服务    if ! mysql -h $DR_SITE_MYSQL -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;SELECT 1;&quot; &gt; /dev/null 2&gt;&amp;1; then        log &quot;错误: 灾备MySQL服务不可用&quot;        return 1    fi        log &quot;灾备环境验证通过&quot;    return 0}restore_to_dr_site() {    log &quot;开始恢复到灾备站点&quot;        # 1. 停止灾备站点MySQL服务    log &quot;停止灾备站点MySQL服务&quot;    ssh root@$DR_SITE_MYSQL &quot;systemctl stop mysql&quot;        # 2. 
备份当前数据    log &quot;备份灾备站点当前数据&quot;    ssh root@$DR_SITE_MYSQL &quot;mv /var/lib/mysql /var/lib/mysql_backup_$(date +%Y%m%d_%H%M%S)&quot;        # 3. 从备份服务器获取最新备份    log &quot;获取最新备份&quot;    local latest_backup=$(ssh root@$BACKUP_SERVER &quot;ls -t /backup/full_* | head -1&quot;)        if [ -z &quot;$latest_backup&quot; ]; then        log &quot;错误: 未找到备份文件&quot;        return 1    fi        # 4. 传输备份到灾备站点    log &quot;传输备份文件&quot;    scp -r root@$BACKUP_SERVER:$latest_backup /tmp/dr_restore/        # 5. 准备备份    log &quot;准备备份&quot;    ssh root@$DR_SITE_MYSQL &quot;xtrabackup --prepare --target-dir=/tmp/dr_restore/&quot;        # 6. 恢复备份    log &quot;恢复备份&quot;    ssh root@$DR_SITE_MYSQL &quot;xtrabackup --copy-back --target-dir=/tmp/dr_restore/&quot;        # 7. 设置权限并启动服务    log &quot;启动MySQL服务&quot;    ssh root@$DR_SITE_MYSQL &quot;chown -R mysql:mysql /var/lib/mysql &amp;&amp; systemctl start mysql&quot;        log &quot;灾备恢复完成&quot;}verify_dr_data() {    log &quot;验证灾备数据&quot;        # 检查数据库列表    local db_count=$(mysql -h $DR_SITE_MYSQL -u $MYSQL_USER -p$MYSQL_PASSWORD -N -e &quot;SELECT COUNT(*) FROM information_schema.tables WHERE table_schema NOT IN (&#39;mysql&#39;,&#39;information_schema&#39;,&#39;performance_schema&#39;,&#39;sys&#39;);&quot;)        if [ &quot;$db_count&quot; -gt 0 ]; then        log &quot;数据验证成功: 发现 $db_count 个用户表&quot;        return 0    else        log &quot;数据验证失败: 未发现用户表&quot;        return 1    fi}perform_failover_test() {    log &quot;执行故障切换测试&quot;        # 模拟应用连接灾备数据库    local test_result=$(mysql -h $DR_SITE_MYSQL -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;CREATE DATABASE dr_test; USE dr_test; CREATE TABLE test_table (id INT); INSERT INTO test_table VALUES (1); SELECT * FROM test_table;&quot; 2&gt;&amp;1)        if echo &quot;$test_result&quot; | grep -q &quot;1&quot;; then        log &quot;故障切换测试成功&quot;                # 清理测试数据        mysql -h $DR_SITE_MYSQL -u $MYSQL_USER -p$MYSQL_PASSWORD -e &quot;DROP DATABASE dr_test;&quot; 
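# 补充说明（假设性示例，非原脚本内容）：在执行读写测试之前，
# 也可以先用 mysqladmin ping 确认灾备实例已存活，避免误判为切换失败，例如：
# mysqladmin -h $DR_SITE_MYSQL -u $MYSQL_USER -p$MYSQL_PASSWORD ping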
               return 0    else        log &quot;故障切换测试失败&quot;        return 1    fi}# 主逻辑if verify_dr_environment; then    restore_to_dr_site    if verify_dr_data; then        perform_failover_test    fifi</code></pre><h3 id="%E5%A4%87%E4%BB%BD%E6%81%A2%E5%A4%8D%E7%9B%91%E6%8E%A7%E5%91%8A%E8%AD%A6" tabindex="-1">备份恢复监控告警</h3><p><strong>备份状态监控：</strong></p><pre><code class="language-sql">-- 创建备份监控表CREATE TABLE backup_monitor (    id BIGINT AUTO_INCREMENT PRIMARY KEY,    backup_type ENUM(&#39;FULL&#39;, &#39;INCREMENTAL&#39;, &#39;BINLOG&#39;) NOT NULL,    backup_file VARCHAR(500) NOT NULL,    backup_size BIGINT,    start_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    end_time TIMESTAMP NULL,    status ENUM(&#39;RUNNING&#39;, &#39;COMPLETED&#39;, &#39;FAILED&#39;) DEFAULT &#39;RUNNING&#39;,    error_message TEXT,    checksum VARCHAR(64));-- 创建备份告警表CREATE TABLE backup_alerts (    id BIGINT AUTO_INCREMENT PRIMARY KEY,    alert_type VARCHAR(50) NOT NULL,    alert_message TEXT NOT NULL,    severity ENUM(&#39;LOW&#39;, &#39;MEDIUM&#39;, &#39;HIGH&#39;, &#39;CRITICAL&#39;) NOT NULL,    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    resolved_at TIMESTAMP NULL,    resolved_by VARCHAR(100));-- 备份状态检查存储过程DELIMITER //CREATE PROCEDURE CheckBackupStatus()BEGIN    DECLARE last_full_backup TIMESTAMP;    DECLARE backup_age_hours INT;    DECLARE failed_backups INT;        -- 检查最近完整备份的时间    SELECT MAX(start_time) INTO last_full_backup    FROM backup_monitor    WHERE backup_type = &#39;FULL&#39; AND status = &#39;COMPLETED&#39;;        SET backup_age_hours = TIMESTAMPDIFF(HOUR, last_full_backup, NOW());        -- 如果超过24小时没有完整备份，发出告警    IF backup_age_hours &gt; 24 THEN        INSERT INTO backup_alerts (alert_type, alert_message, severity)        VALUES (&#39;BACKUP_MISSING&#39;,                 CONCAT(&#39;超过&#39;, backup_age_hours, &#39;小时没有完整备份&#39;),                 &#39;HIGH&#39;);    END IF;        -- 检查失败的备份    SELECT COUNT(*) INTO failed_backups    FROM backup_monitor    
WHERE status = &#39;FAILED&#39; AND start_time &gt; NOW() - INTERVAL 24 HOUR;        IF failed_backups &gt; 0 THEN        INSERT INTO backup_alerts (alert_type, alert_message, severity)        VALUES (&#39;BACKUP_FAILED&#39;,                 CONCAT(&#39;过去24小时有&#39;, failed_backups, &#39;个备份失败&#39;),                 &#39;HIGH&#39;);    END IF;    END //DELIMITER ;</code></pre><h2 id="3.-%E5%AE%89%E5%85%A8%E4%B8%8E%E6%9D%83%E9%99%90%E7%AE%A1%E7%90%86" tabindex="-1">3. 安全与权限管理</h2><h3 id="%E7%94%A8%E6%88%B7%E6%9D%83%E9%99%90%E4%BD%93%E7%B3%BB%E8%AE%BE%E8%AE%A1" tabindex="-1">用户权限体系设计</h3><p><strong>最小权限原则实施：</strong></p><pre><code class="language-sql">-- 创建应用用户（遵循最小权限原则）CREATE USER &#39;app_readonly&#39;@&#39;192.168.1.%&#39; IDENTIFIED BY &#39;secure_password_123&#39;;GRANT SELECT ON company.* TO &#39;app_readonly&#39;@&#39;192.168.1.%&#39;;CREATE USER &#39;app_readwrite&#39;@&#39;192.168.1.%&#39; IDENTIFIED BY &#39;secure_password_456&#39;;GRANT SELECT, INSERT, UPDATE, DELETE ON company.* TO &#39;app_readwrite&#39;@&#39;192.168.1.%&#39;;CREATE USER &#39;app_report&#39;@&#39;192.168.1.%&#39; IDENTIFIED BY &#39;secure_password_789&#39;;GRANT SELECT ON company.employees TO &#39;app_report&#39;@&#39;192.168.1.%&#39;;GRANT SELECT ON company.departments TO &#39;app_report&#39;@&#39;192.168.1.%&#39;;-- 创建管理用户CREATE USER &#39;db_admin&#39;@&#39;localhost&#39; IDENTIFIED BY &#39;admin_secure_password&#39;;GRANT ALL PRIVILEGES ON *.* TO &#39;db_admin&#39;@&#39;localhost&#39; WITH GRANT OPTION;-- 创建备份用户CREATE USER &#39;backup_user&#39;@&#39;localhost&#39; IDENTIFIED BY &#39;backup_secure_password&#39;;GRANT SELECT, RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT ON *.* TO &#39;backup_user&#39;@&#39;localhost&#39;;-- 查看用户权限SHOW GRANTS FOR &#39;app_readonly&#39;@&#39;192.168.1.%&#39;;</code></pre><p><strong>数据库权限审计：</strong></p><pre><code class="language-sql">-- 创建权限审计表CREATE TABLE privilege_audit (    id BIGINT AUTO_INCREMENT PRIMARY KEY,    username VARCHAR(100) NOT NULL,    
host_pattern VARCHAR(100) NOT NULL,    database_name VARCHAR(100),    table_name VARCHAR(100),    privilege_type VARCHAR(50) NOT NULL,    granted_by VARCHAR(100),    granted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    is_revoked BOOLEAN DEFAULT FALSE,    revoked_at TIMESTAMP NULL,    revoked_by VARCHAR(100));-- 权限审计存储过程DELIMITER //CREATE PROCEDURE AuditUserPrivileges()BEGIN    DECLARE done INT DEFAULT 0;    DECLARE v_user, v_host, v_db, v_table, v_privilege VARCHAR(100);    DECLARE cur CURSOR FOR         SELECT User, Host, Db, Table_name, Privilege         FROM information_schema.table_privileges;    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;        OPEN cur;        read_loop: LOOP        FETCH cur INTO v_user, v_host, v_db, v_table, v_privilege;        IF done THEN            LEAVE read_loop;        END IF;                -- 检查权限是否已经记录        IF NOT EXISTS (            SELECT 1 FROM privilege_audit             WHERE username = v_user               AND host_pattern = v_host               AND database_name = v_db               AND table_name = v_table               AND privilege_type = v_privilege               AND is_revoked = FALSE        ) THEN            -- 记录新权限            INSERT INTO privilege_audit (username, host_pattern, database_name, table_name, privilege_type)            VALUES (v_user, v_host, v_db, v_table, v_privilege);        END IF;    END LOOP;        CLOSE cur;        -- 标记已撤销的权限    UPDATE privilege_audit pa    LEFT JOIN information_schema.table_privileges tp         ON pa.username = tp.User         AND pa.host_pattern = tp.Host         AND pa.database_name = tp.Db         AND pa.table_name = tp.Table_name         AND pa.privilege_type = tp.Privilege    SET pa.is_revoked = TRUE,        pa.revoked_at = CURRENT_TIMESTAMP    WHERE pa.is_revoked = FALSE      AND tp.User IS NULL;    END //DELIMITER ;</code></pre><h3 id="%E8%A7%92%E8%89%B2%E7%AE%A1%E7%90%86%E4%B8%8E%E6%9D%83%E9%99%90%E7%BB%A7%E6%89%BF" 
tabindex="-1">角色管理与权限继承</h3><p><strong>MySQL 8.0角色管理：</strong></p><pre><code class="language-sql">-- 创建角色CREATE ROLE read_only_role;CREATE ROLE read_write_role;CREATE ROLE dba_role;-- 为角色分配权限GRANT SELECT ON company.* TO read_only_role;GRANT SELECT, INSERT, UPDATE, DELETE ON company.* TO read_write_role;GRANT ALL PRIVILEGES ON *.* TO dba_role;-- 创建用户并分配角色CREATE USER &#39;report_user&#39;@&#39;%&#39; IDENTIFIED BY &#39;report_password&#39;;CREATE USER &#39;app_user&#39;@&#39;%&#39; IDENTIFIED BY &#39;app_password&#39;;CREATE USER &#39;admin_user&#39;@&#39;localhost&#39; IDENTIFIED BY &#39;admin_password&#39;;-- 分配角色给用户GRANT read_only_role TO &#39;report_user&#39;@&#39;%&#39;;GRANT read_write_role TO &#39;app_user&#39;@&#39;%&#39;;GRANT dba_role TO &#39;admin_user&#39;@&#39;localhost&#39;;-- 设置默认角色SET DEFAULT ROLE read_only_role TO &#39;report_user&#39;@&#39;%&#39;;SET DEFAULT ROLE read_write_role TO &#39;app_user&#39;@&#39;%&#39;;SET DEFAULT ROLE dba_role TO &#39;admin_user&#39;@&#39;localhost&#39;;-- 激活角色SET ROLE ALL;-- 查看角色权限SHOW GRANTS FOR &#39;report_user&#39;@&#39;%&#39; USING read_only_role;-- 创建层次化角色CREATE ROLE junior_dba;CREATE ROLE senior_dba;GRANT junior_dba TO senior_dba;GRANT SELECT, INSERT, UPDATE, DELETE ON mysql.* TO junior_dba;GRANT ALL PRIVILEGES ON *.* TO senior_dba;</code></pre><p><strong>动态权限管理：</strong></p><pre><code class="language-sql">-- 创建存储过程管理用户权限DELIMITER //CREATE PROCEDURE ManageUserAccess(    IN p_username VARCHAR(100),    IN p_host_pattern VARCHAR(100),    IN p_database_name VARCHAR(100),    IN p_action ENUM(&#39;GRANT_READ&#39;, &#39;GRANT_WRITE&#39;, &#39;REVOKE_ACCESS&#39;))BEGIN    DECLARE user_exists INT;        -- 检查用户是否存在    SELECT COUNT(*) INTO user_exists    FROM mysql.user     WHERE User = p_username AND Host = p_host_pattern;        IF user_exists = 0 THEN        SIGNAL SQLSTATE &#39;45000&#39; SET MESSAGE_TEXT = &#39;用户不存在&#39;;    END IF;        CASE p_action        WHEN &#39;GRANT_READ&#39; THEN            SET @grant_sql = 
CONCAT(&#39;GRANT SELECT ON &#39;, p_database_name, &#39;.* TO &#39;&#39;&#39;, p_username, &#39;&#39;&#39;@&#39;&#39;&#39;, p_host_pattern, &#39;&#39;&#39;&#39;);            PREPARE stmt FROM @grant_sql;            EXECUTE stmt;            DEALLOCATE PREPARE stmt;                        -- 记录权限变更            INSERT INTO privilege_audit (username, host_pattern, database_name, privilege_type)            VALUES (p_username, p_host_pattern, p_database_name, &#39;SELECT&#39;);                    WHEN &#39;GRANT_WRITE&#39; THEN            SET @grant_sql = CONCAT(&#39;GRANT SELECT, INSERT, UPDATE, DELETE ON &#39;, p_database_name, &#39;.* TO &#39;&#39;&#39;, p_username, &#39;&#39;&#39;@&#39;&#39;&#39;, p_host_pattern, &#39;&#39;&#39;&#39;);            PREPARE stmt FROM @grant_sql;            EXECUTE stmt;            DEALLOCATE PREPARE stmt;                        INSERT INTO privilege_audit (username, host_pattern, database_name, privilege_type)            VALUES (p_username, p_host_pattern, p_database_name, &#39;READ_WRITE&#39;);                    WHEN &#39;REVOKE_ACCESS&#39; THEN            SET @revoke_sql = CONCAT(&#39;REVOKE ALL PRIVILEGES ON &#39;, p_database_name, &#39;.* FROM &#39;&#39;&#39;, p_username, &#39;&#39;&#39;@&#39;&#39;&#39;, p_host_pattern, &#39;&#39;&#39;&#39;);            PREPARE stmt FROM @revoke_sql;            EXECUTE stmt;            DEALLOCATE PREPARE stmt;                        UPDATE privilege_audit             SET is_revoked = TRUE, revoked_at = NOW()            WHERE username = p_username               AND host_pattern = p_host_pattern               AND database_name = p_database_name              AND is_revoked = FALSE;    END CASE;    END //DELIMITER ;</code></pre><h3 id="%E6%95%B0%E6%8D%AE%E5%8A%A0%E5%AF%86%EF%BC%9A%E9%80%8F%E6%98%8E%E5%8A%A0%E5%AF%86%E4%B8%8E%E5%88%97%E5%8A%A0%E5%AF%86" tabindex="-1">数据加密：透明加密与列加密</h3><p><strong>InnoDB表空间加密：</strong></p><pre><code class="language-sql">-- 安装密钥环组件（MySQL 8.0）INSTALL COMPONENT 
&quot;file://component_keyring_file&quot;;SET GLOBAL keyring_file_data = &#39;/var/lib/mysql-keyring/keyring&#39;;-- 创建加密表空间CREATE TABLESPACE encrypted_ts ADD DATAFILE &#39;encrypted_ts.ibd&#39; ENGINE=InnoDBENCRYPTION=&#39;Y&#39;;-- 在加密表空间中创建表CREATE TABLE sensitive_data (    id INT PRIMARY KEY,    secret_data VARCHAR(500)) TABLESPACE encrypted_ts;-- 加密现有表ALTER TABLE existing_sensitive_table ENCRYPTION=&#39;Y&#39;;-- 查看加密状态SELECT     TABLE_SCHEMA,    TABLE_NAME,    CREATE_OPTIONSFROM information_schema.TABLES WHERE CREATE_OPTIONS LIKE &#39;%ENCRYPTION%&#39;;</code></pre><p><strong>列级加密：</strong></p><pre><code class="language-sql">-- 创建加密函数DELIMITER //CREATE FUNCTION aes_encrypt(data TEXT, key_str VARCHAR(255))RETURNS VARBINARY(500)DETERMINISTICBEGIN    RETURN AES_ENCRYPT(data, key_str);END //CREATE FUNCTION aes_decrypt(encrypted_data VARBINARY(500), key_str VARCHAR(255))RETURNS TEXTDETERMINISTICBEGIN    RETURN AES_DECRYPT(encrypted_data, key_str);END //DELIMITER ;-- 创建存储加密数据的表CREATE TABLE user_secrets (    user_id INT PRIMARY KEY,    -- 加密存储的敏感数据    ssn VARBINARY(500),    credit_card VARBINARY(500),    medical_info VARBINARY(500),    -- 加密密钥（在实际应用中应该安全存储）    encryption_key VARCHAR(255) DEFAULT &#39;default_encryption_key&#39;);-- 插入加密数据INSERT INTO user_secrets (user_id, ssn, credit_card)VALUES (    1,    aes_encrypt(&#39;123-45-6789&#39;, &#39;user1_key&#39;),    aes_encrypt(&#39;4111111111111111&#39;, &#39;user1_key&#39;));-- 查询解密数据SELECT     user_id,    aes_decrypt(ssn, &#39;user1_key&#39;) as decrypted_ssn,    aes_decrypt(credit_card, &#39;user1_key&#39;) as decrypted_credit_cardFROM user_secrets WHERE user_id = 1;</code></pre><p><strong>密钥管理策略：</strong></p><pre><code class="language-sql">-- 创建密钥管理表CREATE TABLE encryption_keys (    key_id VARCHAR(100) PRIMARY KEY,    key_value VARBINARY(500) NOT NULL,    key_type ENUM(&#39;COLUMN&#39;, &#39;TABLE&#39;, &#39;BACKUP&#39;) NOT NULL,    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    created_by VARCHAR(100),    
is_active BOOLEAN DEFAULT TRUE,    rotated_at TIMESTAMP NULL);-- 密钥轮换存储过程DELIMITER //CREATE PROCEDURE RotateEncryptionKey(    IN p_key_id VARCHAR(100),    IN p_new_key_value VARBINARY(500))BEGIN    DECLARE old_key_value VARBINARY(500);    DECLARE done INT DEFAULT 0;    DECLARE v_user_id INT;    DECLARE v_ssn, v_credit_card VARBINARY(500);        -- 获取旧密钥    SELECT key_value INTO old_key_value    FROM encryption_keys    WHERE key_id = p_key_id AND is_active = TRUE;        IF old_key_value IS NULL THEN        SIGNAL SQLSTATE &#39;45000&#39; SET MESSAGE_TEXT = &#39;未找到活动的密钥&#39;;    END IF;        -- 使用游标处理所有需要重新加密的数据    DECLARE cur CURSOR FOR         SELECT user_id, ssn, credit_card         FROM user_secrets;    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;        OPEN cur;        read_loop: LOOP        FETCH cur INTO v_user_id, v_ssn, v_credit_card;        IF done THEN            LEAVE read_loop;        END IF;                -- 解密并使用新密钥重新加密        UPDATE user_secrets         SET ssn = aes_encrypt(aes_decrypt(v_ssn, old_key_value), p_new_key_value),            credit_card = aes_encrypt(aes_decrypt(v_credit_card, old_key_value), p_new_key_value)        WHERE user_id = v_user_id;    END LOOP;        CLOSE cur;        -- 停用旧密钥，激活新密钥    UPDATE encryption_keys SET is_active = FALSE, rotated_at = NOW() WHERE key_id = p_key_id;    INSERT INTO encryption_keys (key_id, key_value, key_type) VALUES (p_key_id, p_new_key_value, &#39;COLUMN&#39;);    END //DELIMITER ;</code></pre><h3 id="%E5%AE%A1%E8%AE%A1%E6%97%A5%E5%BF%97%E4%B8%8E%E5%AE%89%E5%85%A8%E7%9B%91%E6%8E%A7" tabindex="-1">审计日志与安全监控</h3><p><strong>MySQL企业版审计：</strong></p><pre><code class="language-sql">-- 安装审计插件（企业版）INSTALL PLUGIN audit_log SONAME &#39;audit_log.so&#39;;-- 配置审计日志（在my.cnf中）/*[mysqld]audit_log_format=JSONaudit_log_file=/var/log/mysql/audit.logaudit_log_policy=ALLaudit_log_rotate_on_size=100000000audit_log_rotations=5*/-- 查看审计日志状态SHOW VARIABLES LIKE &#39;audit_log%&#39;;-- 查询审计日志SELECT     
JSON_EXTRACT(audit_record, &#39;$.timestamp&#39;) as timestamp,    JSON_EXTRACT(audit_record, &#39;$.class&#39;) as event_class,    JSON_EXTRACT(audit_record, &#39;$.event&#39;) as event_type,    JSON_EXTRACT(audit_record, &#39;$.connection_id&#39;) as connection_id,    JSON_EXTRACT(audit_record, &#39;$.user&#39;) as user,    JSON_EXTRACT(audit_record, &#39;$.query&#39;) as queryFROM mysql.audit_log WHERE JSON_EXTRACT(audit_record, &#39;$.query&#39;) IS NOT NULLORDER BY timestamp DESC LIMIT 10;</code></pre><p><strong>社区版审计方案：</strong></p><pre><code class="language-sql">-- 使用通用日志实现基础审计SET GLOBAL general_log = 1;SET GLOBAL log_output = &#39;TABLE&#39;;-- 创建自定义审计表CREATE TABLE custom_audit_log (    id BIGINT AUTO_INCREMENT PRIMARY KEY,    event_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    user_host VARCHAR(200) NOT NULL,    thread_id BIGINT NOT NULL,    server_id INT NOT NULL,    command_type VARCHAR(64) NOT NULL,    argument TEXT NOT NULL,    client_ip VARCHAR(45),    database_name VARCHAR(100),    execution_time DECIMAL(10,6),    rows_affected INT);-- 审计触发器示例DELIMITER //CREATE TRIGGER audit_user_changesAFTER INSERT ON mysql.userFOR EACH ROWBEGIN    INSERT INTO custom_audit_log (user_host, thread_id, server_id, command_type, argument, client_ip)    VALUES (USER(), CONNECTION_ID(), @@server_id, &#39;CREATE_USER&#39;,             CONCAT(&#39;Created user: &#39;, NEW.User, &#39;@&#39;, NEW.Host),             SUBSTRING_INDEX(USER(), &#39;@&#39;, -1));END //CREATE TRIGGER audit_privilege_changesAFTER INSERT ON mysql.dbFOR EACH ROWBEGIN    INSERT INTO custom_audit_log (user_host, thread_id, server_id, command_type, argument, database_name)    VALUES (USER(), CONNECTION_ID(), @@server_id, &#39;GRANT_PRIVILEGE&#39;,            CONCAT(&#39;Granted privileges on &#39;, NEW.Db, &#39; to &#39;, NEW.User),            NEW.Db);END //DELIMITER ;</code></pre><p><strong>安全监控仪表板：</strong></p><pre><code class="language-sql">-- 创建安全监控视图CREATE VIEW security_dashboard ASSELECT     
&#39;Failed Logins&#39; as metric_name,    COUNT(*) as metric_value,    MAX(event_time) as last_occurrenceFROM custom_audit_logWHERE argument LIKE &#39;%Access denied%&#39;  AND event_time &gt; NOW() - INTERVAL 1 HOURUNION ALLSELECT     &#39;New Users Created&#39; as metric_name,    COUNT(*) as metric_value,    MAX(event_time) as last_occurrenceFROM custom_audit_logWHERE command_type = &#39;CREATE_USER&#39;  AND event_time &gt; NOW() - INTERVAL 24 HOURUNION ALLSELECT     &#39;Privilege Changes&#39; as metric_name,    COUNT(*) as metric_value,    MAX(event_time) as last_occurrenceFROM custom_audit_logWHERE command_type IN (&#39;GRANT_PRIVILEGE&#39;, &#39;REVOKE_PRIVILEGE&#39;)  AND event_time &gt; NOW() - INTERVAL 24 HOURUNION ALLSELECT     &#39;Sensitive Data Access&#39; as metric_name,    COUNT(*) as metric_value,    MAX(event_time) as last_occurrenceFROM custom_audit_logWHERE argument LIKE &#39;%user_secrets%&#39;  AND event_time &gt; NOW() - INTERVAL 1 HOUR;</code></pre><h3 id="sql%E6%B3%A8%E5%85%A5%E9%98%B2%E6%8A%A4%E4%B8%8E%E5%AE%89%E5%85%A8%E5%BC%80%E5%8F%91" tabindex="-1">SQL注入防护与安全开发</h3><p><strong>预处理语句使用：</strong></p><pre><code class="language-sql">-- 不安全的查询（容易SQL注入）SET @user_input = &quot;1&#39;; DROP TABLE users; --&quot;;SET @sql = CONCAT(&quot;SELECT * FROM users WHERE id = &#39;&quot;, @user_input, &quot;&#39;&quot;);PREPARE stmt FROM @sql;EXECUTE stmt;-- 安全的预处理语句PREPARE safe_stmt FROM &quot;SELECT * FROM users WHERE id = ?&quot;;SET @user_id = &quot;1&quot;;EXECUTE safe_stmt USING @user_id;-- 存储过程参数化查询DELIMITER //CREATE PROCEDURE GetUserByEmail(IN p_email VARCHAR(255))BEGIN    -- 直接使用参数，避免拼接    SELECT * FROM users WHERE email = p_email;END //DELIMITER ;</code></pre><p><strong>输入验证函数：</strong></p><pre><code class="language-sql">DELIMITER //CREATE FUNCTION ValidateEmail(email VARCHAR(255))RETURNS BOOLEANDETERMINISTICBEGIN    -- 简单的邮箱格式验证    IF email REGEXP &#39;^[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}$&#39; THEN        RETURN TRUE;    ELSE     
   RETURN FALSE;    END IF;END //CREATE FUNCTION SanitizeInput(input_text TEXT)RETURNS TEXTDETERMINISTICBEGIN    -- 移除潜在的SQL注入字符    SET input_text = REPLACE(input_text, &quot;&#39;&quot;, &quot;&#39;&#39;&quot;);    SET input_text = REPLACE(input_text, &quot;;&quot;, &quot;&quot;);    SET input_text = REPLACE(input_text, &quot;--&quot;, &quot;&quot;);    SET input_text = REPLACE(input_text, &quot;/*&quot;, &quot;&quot;);    SET input_text = REPLACE(input_text, &quot;*/&quot;, &quot;&quot;);        RETURN input_text;END //DELIMITER ;</code></pre><p><strong>安全开发规范检查：</strong></p><pre><code class="language-sql">-- 检查存储过程的安全问题SELECT     ROUTINE_NAME,    ROUTINE_DEFINITIONFROM information_schema.ROUTINESWHERE ROUTINE_DEFINITION LIKE &#39;%CONCAT(%&#39;   OR ROUTINE_DEFINITION LIKE &#39;%EXECUTE%IMMEDIATE%&#39;   OR ROUTINE_DEFINITION LIKE &#39;%PREPARE%&#39;   OR ROUTINE_DEFINITION LIKE &#39;%sp_executesql%&#39;;-- 查找可能包含动态SQL的代码SELECT     TABLE_NAME,    COLUMN_NAMEFROM information_schema.COLUMNSWHERE TABLE_SCHEMA = &#39;your_database&#39;  AND (COLUMN_NAME LIKE &#39;%sql%&#39; OR COLUMN_NAME LIKE &#39;%query%&#39;)  AND TABLE_NAME NOT LIKE &#39;%audit%&#39;;</code></pre><h2 id="%E6%80%BB%E7%BB%93" 
tabindex="-1">总结</h2><p>通过本篇的深入学习，我们掌握了MySQL备份恢复和安全管理的完整体系：</p><ol><li><strong>备份策略</strong>：逻辑备份、物理备份、增量备份的实战应用</li><li><strong>恢复技术</strong>：时间点恢复、误操作恢复、灾难恢复的完整流程</li><li><strong>安全管理</strong>：权限体系、数据加密、审计监控的全面方案</li><li><strong>安全开发</strong>：SQL注入防护、输入验证的安全编码实践</li></ol><p><strong>关键安全原则：</strong></p><ul><li><strong>最小权限</strong>：用户只拥有完成工作所需的最小权限</li><li><strong>纵深防御</strong>：多层安全措施，避免单点失效</li><li><strong>定期审计</strong>：持续监控和审查安全状态</li><li><strong>应急准备</strong>：完善的备份和恢复预案</li></ul><p><strong>备份恢复最佳实践：</strong></p><ul><li>3-2-1规则：3个副本，2种介质，1个离线存储</li><li>定期恢复演练：确保备份可用性</li><li>监控备份状态：及时发现问题</li><li>加密敏感数据：保护数据隐私</li></ul><p>在下一篇中，我们将探讨MySQL在云原生环境中的应用，包括容器化部署、微服务架构集成等现代技术。</p><p><strong>动手练习：</strong></p><ol><li>设计并实施完整的备份策略，包括完整备份和增量备份</li><li>执行时间点恢复演练，验证备份的可用性</li><li>建立权限管理体系，实施最小权限原则</li><li>配置数据加密和审计日志，增强安全性</li><li>进行安全代码审查，修复潜在的SQL注入漏洞</li></ol><p>欢迎在评论区分享你的备份恢复实践和安全加固经验！</p>]]>
                    </description>
                    <pubDate>Thu, 22 May 2025 23:55:35 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[MySQL入门：高可用与架构设计]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2952</link>
                    <description>
                            <![CDATA[<h1 id="mysql%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1" tabindex="-1">MySQL高可用与架构设计</h1><blockquote><p>在现代互联网应用中，数据库的高可用性和可扩展性至关重要。单点故障可能导致整个系统瘫痪，性能瓶颈可能影响用户体验。今天，我们将深入探讨MySQL的高可用架构设计，从主从复制到分布式集群，帮助你构建稳定可靠的数据库系统。</p></blockquote><h2 id="1.-%E4%B8%BB%E4%BB%8E%E5%A4%8D%E5%88%B6%E6%9E%B6%E6%9E%84" tabindex="-1">1. 主从复制架构</h2><h3 id="%E5%A4%8D%E5%88%B6%E5%8E%9F%E7%90%86%E4%B8%8E%E4%B8%89%E7%A7%8D%E5%A4%8D%E5%88%B6%E6%A8%A1%E5%BC%8F" tabindex="-1">复制原理与三种复制模式</h3><p><strong>复制基本原理：</strong></p><pre><code class="language-sql">-- 复制过程涉及的关键线程-- 主库：Binlog Dump Thread-- 从库：I/O Thread, SQL Thread-- 查看主库状态SHOW MASTER STATUS;/*+------------------+----------+--------------+------------------+-------------------+| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |+------------------+----------+--------------+------------------+-------------------+| mysql-bin.000003 |      194 |              |                  |                   |+------------------+----------+--------------+------------------+-------------------+*/-- 查看从库状态SHOW SLAVE STATUS\G/*             Slave_IO_State: Waiting for master to send event                Master_Host: 192.168.1.100                Master_User: repl                Master_Port: 3306              Connect_Retry: 60            Master_Log_File: mysql-bin.000003        Read_Master_Log_Pos: 194             Relay_Log_File: relay-bin.000002              Relay_Log_Pos: 320      Relay_Master_Log_File: mysql-bin.000003           Slave_IO_Running: Yes          Slave_SQL_Running: Yes            Replicate_Do_DB:         Replicate_Ignore_DB:          Replicate_Do_Table:      Replicate_Ignore_Table:     Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table:                  Last_Errno: 0                 Last_Error:                Skip_Counter: 0        Exec_Master_Log_Pos: 194            Relay_Log_Space: 526            Until_Condition: None             Until_Log_File:   
            Until_Log_Pos: 0         Master_SSL_Allowed: No         Master_SSL_CA_File:          Master_SSL_CA_Path:             Master_SSL_Cert:           Master_SSL_Cipher:              Master_SSL_Key:       Seconds_Behind_Master: 0Master_SSL_Verify_Server_Cert: No             Last_IO_Errno: 0             Last_IO_Error:             Last_SQL_Errno: 0            Last_SQL_Error:   Replicate_Ignore_Server_Ids:              Master_Server_Id: 1                  Master_UUID: 6b0f1c1a-5d5e-11eb-ae93-000c29a3a3a3             Master_Info_File: mysql.slave_master_info                    SQL_Delay: 0          SQL_Remaining_Delay: NULL      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates           Master_Retry_Count: 86400                  Master_Bind:       Last_IO_Error_Timestamp:      Last_SQL_Error_Timestamp:                Master_SSL_Crl:            Master_SSL_Crlpath:            Retrieved_Gtid_Set:             Executed_Gtid_Set:                 Auto_Position: 0         Replicate_Rewrite_DB:                  Channel_Name:            Master_TLS_Version: */</code></pre><p><strong>三种复制模式对比：</strong></p><pre><code class="language-sql">-- 1. 基于语句的复制（Statement-Based Replication）-- 配置SET GLOBAL binlog_format = &#39;STATEMENT&#39;;-- 优点：二进制日志较小，网络传输量少-- 缺点：非确定性函数可能导致数据不一致-- 2. 基于行的复制（Row-Based Replication）SET GLOBAL binlog_format = &#39;ROW&#39;;-- 优点：数据一致性更好-- 缺点：二进制日志较大，网络传输量大-- 3. 
混合模式复制（Mixed）SET GLOBAL binlog_format = &#39;MIXED&#39;;-- 优点：结合两者优势，自动选择最优方式-- 缺点：配置相对复杂-- 生产环境推荐使用ROW或MIXED模式</code></pre><h3 id="%E5%9F%BA%E4%BA%8E%E4%BA%8C%E8%BF%9B%E5%88%B6%E6%97%A5%E5%BF%97%E7%9A%84%E5%A4%8D%E5%88%B6%E6%9C%BA%E5%88%B6" tabindex="-1">基于二进制日志的复制机制</h3><p><strong>二进制日志配置：</strong></p><pre><code class="language-sql">-- 查看二进制日志配置SHOW VARIABLES LIKE &#39;log_bin%&#39;;SHOW VARIABLES LIKE &#39;binlog_format%&#39;;SHOW VARIABLES LIKE &#39;sync_binlog%&#39;;SHOW VARIABLES LIKE &#39;expire_logs_days%&#39;;-- 二进制日志配置示例（my.cnf）/*[mysqld]# 启用二进制日志log_bin = /var/lib/mysql/mysql-bin# 日志格式binlog_format = ROW# 每次事务提交都同步到磁盘sync_binlog = 1# 日志保留7天expire_logs_days = 7# 每个日志文件大小max_binlog_size = 100M# 自动清理日志binlog_expire_logs_seconds = 604800*/</code></pre><p><strong>复制过滤规则：</strong></p><pre><code class="language-sql">-- 主库过滤规则-- 在my.cnf中配置/*# 忽略系统库的复制binlog_ignore_db = mysqlbinlog_ignore_db = information_schemabinlog_ignore_db = performance_schemabinlog_ignore_db = sys*/-- 从库过滤规则CHANGE MASTER TO     REPLICATE_DO_DB = (app_db),    REPLICATE_IGNORE_DB = (test,temp_db),    REPLICATE_DO_TABLE = (app_db.important_table),    REPLICATE_IGNORE_TABLE = (app_db.log_table);-- 通配符过滤CHANGE MASTER TO    REPLICATE_WILD_DO_TABLE = (&#39;app_db.shard_%&#39;),    REPLICATE_WILD_IGNORE_TABLE = (&#39;app_db.temp_%&#39;);</code></pre><h3 id="%E5%8D%8A%E5%90%8C%E6%AD%A5%E5%A4%8D%E5%88%B6%E9%85%8D%E7%BD%AE%E5%AE%9E%E6%88%98" tabindex="-1">半同步复制配置实战</h3><p><strong>半同步复制原理：</strong></p><pre><code class="language-sql">-- 安装半同步插件（主从库都需要）INSTALL PLUGIN rpl_semi_sync_master SONAME &#39;semisync_master.so&#39;;INSTALL PLUGIN rpl_semi_sync_slave SONAME &#39;semisync_slave.so&#39;;-- 查看插件状态SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE &#39;%semi%&#39;;-- 配置主库半同步SET GLOBAL rpl_semi_sync_master_enabled = 1;SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- 1秒超时-- 配置从库半同步SET GLOBAL rpl_semi_sync_slave_enabled = 1;-- 查看半同步状态SHOW STATUS LIKE 
&#39;Rpl_semi_sync%&#39;;/*Rpl_semi_sync_master_status          | ONRpl_semi_sync_master_clients         | 2      -- 连接的半同步从库数量Rpl_semi_sync_master_yes_tx          | 1000   -- 成功通过半同步的事务数Rpl_semi_sync_master_no_tx           | 5      -- 超时后转为异步的事务数*/</code></pre><p><strong>半同步复制配置优化：</strong></p><pre><code class="language-sql">-- 持久化配置（在my.cnf中）/*[mysqld]# 主库配置plugin_load = &quot;rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so&quot;rpl_semi_sync_master_enabled = 1rpl_semi_sync_slave_enabled = 1rpl_semi_sync_master_timeout = 1000rpl_semi_sync_master_wait_point = AFTER_SYNC  -- MySQL 5.7+ 推荐*/-- 监控半同步复制SELECT     VARIABLE_NAME,    VARIABLE_VALUEFROM performance_schema.global_statusWHERE VARIABLE_NAME LIKE &#39;RPL_SEMI_SYNC%&#39;;-- 半同步复制降级监控-- 当从库响应超时或故障时，主库会自动降级为异步复制-- 需要监控降级事件并及时处理</code></pre><h3 id="%E5%A4%9A%E6%BA%90%E5%A4%8D%E5%88%B6%E4%B8%8E%E9%93%BE%E5%BC%8F%E5%A4%8D%E5%88%B6" tabindex="-1">多源复制与链式复制</h3><p><strong>多源复制配置：</strong></p><pre><code class="language-sql">-- MySQL 5.7+ 支持多源复制-- 从多个主库复制数据到单个从库-- 配置多源复制通道-- 主库1配置CHANGE MASTER TO     MASTER_HOST = &#39;master1_host&#39;,    MASTER_USER = &#39;repl&#39;,    MASTER_PASSWORD = &#39;password&#39;,    MASTER_PORT = 3306,    MASTER_AUTO_POSITION = 1FOR CHANNEL &#39;master1&#39;;-- 主库2配置  CHANGE MASTER TO    MASTER_HOST = &#39;master2_host&#39;,    MASTER_USER = &#39;repl&#39;,    MASTER_PASSWORD = &#39;password&#39;,    MASTER_PORT = 3306,    MASTER_AUTO_POSITION = 1FOR CHANNEL &#39;master2&#39;;-- 启动多源复制START SLAVE FOR CHANNEL &#39;master1&#39;;START SLAVE FOR CHANNEL &#39;master2&#39;;-- 查看多源复制状态SHOW SLAVE STATUS FOR CHANNEL &#39;master1&#39;\GSHOW SLAVE STATUS FOR CHANNEL &#39;master2&#39;\G-- 按通道过滤操作STOP SLAVE SQL_THREAD FOR CHANNEL &#39;master1&#39;;START SLAVE SQL_THREAD FOR CHANNEL &#39;master1&#39;;</code></pre><p><strong>链式复制架构：</strong></p><pre><code class="language-sql">-- 三级复制链：Master -&gt; Relay Slave -&gt; Leaf Slave-- 配置中继从库/*Master配置:log_bin = 
onlog_slave_updates = off  -- 默认，中继库不需要记录从库更新Relay Slave配置:log_bin = onlog_slave_updates = on   -- 关键：记录从主库接收的更新Leaf Slave配置:log_bin = off  -- 或者 on，根据需求log_slave_updates = off*/-- 中继从库的特殊配置/*[mysqld]# 中继从库配置server_id = 2log_bin = mysql-binlog_slave_updates = 1relay_log = relay-binread_only = 1# 过滤规则（可选）replicate_do_db = app_dbreplicate_ignore_db = mysql*/</code></pre><h3 id="%E5%A4%8D%E5%88%B6%E6%95%85%E9%9A%9C%E6%8E%92%E6%9F%A5%E4%B8%8E%E4%BF%AE%E5%A4%8D" tabindex="-1">复制故障排查与修复</h3><p><strong>常见复制错误处理：</strong></p><pre><code class="language-sql">-- 1. 主键冲突错误-- 错误信息：Duplicate entry &#39;X&#39; for key &#39;PRIMARY&#39;-- 解决方案：STOP SLAVE;SET GLOBAL sql_slave_skip_counter = 1;START SLAVE;-- 或者手动处理冲突数据STOP SLAVE;-- 查看冲突数据SELECT * FROM table_name WHERE primary_key = &#39;X&#39;;-- 删除冲突数据或更新主键DELETE FROM table_name WHERE primary_key = &#39;X&#39;;START SLAVE;-- 2. 数据不存在错误-- 错误信息：Can&#39;t find record in &#39;table_name&#39;-- 解决方案：STOP SLAVE;-- 在从库插入缺失的数据INSERT IGNORE INTO table_name VALUES (...);START SLAVE;-- 3. 网络中断导致的复制延迟-- 监控复制延迟SHOW SLAVE STATUS\G-- 查看Seconds_Behind_Master-- 自动重连配置CHANGE MASTER TO     MASTER_CONNECT_RETRY = 60,    MASTER_RETRY_COUNT = 86400;</code></pre><p><strong>GTID复制故障处理：</strong></p><pre><code class="language-sql">-- 启用GTID复制-- 在my.cnf中配置/*[mysqld]gtid_mode = ONenforce_gtid_consistency = ON*/-- GTID复制错误处理-- 查看错误的GTIDSHOW SLAVE STATUS\G-- Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s)...-- Retrieved_Gtid_Set: 6b0f1c1a-5d5e-11eb-ae93-000c29a3a3a3:1-100-- Executed_Gtid_Set: 6b0f1c1a-5d5e-11eb-ae93-000c29a3a3a3:1-95-- 跳过特定GTID事务STOP SLAVE;SET GTID_NEXT = &#39;6b0f1c1a-5d5e-11eb-ae93-000c29a3a3a3:96&#39;;BEGIN; COMMIT;SET GTID_NEXT = &#39;AUTOMATIC&#39;;START SLAVE;-- 重置GTID复制-- 注意：这会清除所有复制信息，需要重新配置STOP SLAVE;RESET SLAVE ALL;CHANGE MASTER TO ...;START SLAVE;</code></pre><h2 id="2.-%E9%AB%98%E5%8F%AF%E7%94%A8%E9%9B%86%E7%BE%A4%E6%96%B9%E6%A1%88" tabindex="-1">2. 
高可用集群方案</h2><h3 id="mysql-router%E8%AF%BB%E5%86%99%E5%88%86%E7%A6%BB" tabindex="-1">MySQL Router读写分离</h3><p><strong>MySQL Router部署配置：</strong></p><pre><code class="language-ini"># MySQL Router配置文件 (mysqlrouter.conf)[DEFAULT]logging_folder = /var/log/mysqlrouterruntime_folder = /var/run/mysqlrouterconfig_folder = /etc/mysqlrouter[routing:read_write]bind_address = 0.0.0.0bind_port = 6446destinations = metadata-cache://mycluster/?role=PRIMARYrouting_strategy = first-available[routing:read_only]bind_address = 0.0.0.0bind_port = 6447destinations = metadata-cache://mycluster/?role=SECONDARYrouting_strategy = round-robin# 启动MySQL Router# mysqlrouter --config=/etc/mysqlrouter/mysqlrouter.conf &amp;</code></pre><p><strong>应用程序连接配置：</strong></p><pre><code class="language-python"># Python应用程序连接示例import mysql.connector# 写操作连接（主库）write_config = {    &#39;host&#39;: &#39;router_host&#39;,    &#39;port&#39;: 6446,  # 读写端口    &#39;user&#39;: &#39;app_user&#39;,    &#39;password&#39;: &#39;password&#39;,    &#39;database&#39;: &#39;app_db&#39;}# 读操作连接（从库）read_config = {    &#39;host&#39;: &#39;router_host&#39;,     &#39;port&#39;: 6447,  # 只读端口    &#39;user&#39;: &#39;app_user&#39;,    &#39;password&#39;: &#39;password&#39;,    &#39;database&#39;: &#39;app_db&#39;}# 写操作def update_user_profile(user_id, data):    conn = mysql.connector.connect(**write_config)    # 执行更新操作    conn.close()# 读操作  def get_user_profile(user_id):    conn = mysql.connector.connect(**read_config)    # 执行查询操作    conn.close()</code></pre><h3 id="mha%E8%87%AA%E5%8A%A8%E6%95%85%E9%9A%9C%E8%BD%AC%E7%A7%BB" tabindex="-1">MHA自动故障转移</h3><p><strong>MHA架构组成：</strong></p><pre><code class="language-bash"># MHA组件# 1. MHA Manager - 管理节点# 2. 
MHA Node - 数据节点代理# MHA Manager配置 (app1.cnf)[server default]manager_log=/var/log/masterha/app1.logmanager_workdir=/var/log/masterha/app1master_binlog_dir=/var/lib/mysqluser=mha_userpassword=mha_passwordping_interval=3remote_workdir=/tmprepl_user=repl_userrepl_password=repl_passwordssh_user=root[server1]hostname=master_hostport=3306[server2] hostname=slave1_hostport=3306candidate_master=1[server3]hostname=slave2_hostport=3306no_master=1# 启动MHA监控masterha_manager --conf=/etc/masterha/app1.cnf</code></pre><p><strong>MHA故障转移过程：</strong></p><pre><code class="language-bash"># 1. 检测主库故障# 2. 选择新主库（优先candidate_master=1的从库）# 3. 应用差异的二进制日志# 4. 提升新主库# 5. 其他从库指向新主库# 6. 虚拟IP切换（可选）# 手动触发故障转移masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=dead# 检查MHA状态masterha_check_status --conf=/etc/masterha/app1.cnf# MHA监控脚本示例#!/bin/bash# mha_monitor.shCONFIG_FILE=&quot;/etc/masterha/app1.cnf&quot;LOG_FILE=&quot;/var/log/masterha/monitor.log&quot;while true; do    status=$(masterha_check_status --conf=$CONFIG_FILE 2&gt;&amp;1)    if [[ $status != *&quot;alive&quot;* ]]; then        echo &quot;$(date): MHA manager is not running, restarting...&quot; &gt;&gt; $LOG_FILE        nohup masterha_manager --conf=$CONFIG_FILE &gt;&gt; $LOG_FILE 2&gt;&amp;1 &amp;    fi    sleep 30done</code></pre><h3 id="orchestrator%E7%AE%A1%E7%90%86%E5%B7%A5%E5%85%B7" tabindex="-1">Orchestrator管理工具</h3><p><strong>Orchestrator部署配置：</strong></p><pre><code class="language-json">// orchestrator.conf.json{  &quot;Debug&quot;: false,  &quot;EnableSyslog&quot;: false,    &quot;MySQLTopologyUser&quot;: &quot;orchestrator&quot;,  &quot;MySQLTopologyPassword&quot;: &quot;orchestrator_password&quot;,  &quot;MySQLTopologyCredentialsConfigFile&quot;: &quot;&quot;,  &quot;MySQLTopologySSLPrivateKeyFile&quot;: &quot;&quot;,  &quot;MySQLTopologySSLCertFile&quot;: &quot;&quot;,  &quot;MySQLTopologySSLCAFile&quot;: &quot;&quot;,  &quot;MySQLTopologySSLSkipVerify&quot;: true,  &quot;MySQLTopologyUseMutualTLS&quot;: 
false,    &quot;MySQLOrchestratorHost&quot;: &quot;127.0.0.1&quot;,  &quot;MySQLOrchestratorPort&quot;: 3306,  &quot;MySQLOrchestratorDatabase&quot;: &quot;orchestrator&quot;,  &quot;MySQLOrchestratorUser&quot;: &quot;orchestrator&quot;,  &quot;MySQLOrchestratorPassword&quot;: &quot;orchestrator_password&quot;,    &quot;RaftEnabled&quot;: true,  &quot;RaftDataDir&quot;: &quot;/var/lib/orchestrator&quot;,  &quot;RaftBind&quot;: &quot;192.168.1.100&quot;,  &quot;DefaultRaftPort&quot;: 10008,    &quot;AutoPseudoGTID&quot;: false,  &quot;DetectClusterAliasQuery&quot;: &quot;SELECT SUBSTRING_INDEX(@@hostname, &#39;.&#39;, 1)&quot;,  &quot;DetectInstanceAliasQuery&quot;: &quot;SELECT @@hostname&quot;,    &quot;RecoveryPeriodBlockSeconds&quot;: 3600,  &quot;RecoveryIgnoreHostnameFilters&quot;: [],    &quot;PromotionIgnoreHostnameFilters&quot;: [],    &quot;ApplyMySQLPromotionAfterMasterFailover&quot;: true,  &quot;PreFailoverProcesses&quot;: [    &quot;echo &#39;Will recover from {failureType} on {failureCluster}&#39; &gt;&gt; /tmp/recovery.log&quot;  ],  &quot;PostFailoverProcesses&quot;: [    &quot;echo &#39;Recovered from {failureType} on {failureCluster}. 
Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}&#39; &gt;&gt; /tmp/recovery.log&quot;  ]}</code></pre><p><strong>Orchestrator API使用：</strong></p><pre><code class="language-bash"># 通过REST API管理集群# 发现并注册实例curl &quot;http://orchestrator:3000/api/discover/192.168.1.101/3306&quot;# 查看集群拓扑curl &quot;http://orchestrator:3000/api/cluster/myapp&quot;# 手动故障转移curl &quot;http://orchestrator:3000/api/force-master-failover/myapp&quot;# 查看恢复信息curl &quot;http://orchestrator:3000/api/audit-recovery&quot;# 维护模式curl &quot;http://orchestrator:3000/api/maintenance/myapp/begin&quot;curl &quot;http://orchestrator:3000/api/maintenance/myapp/end&quot;</code></pre><h3 id="%E5%9F%BA%E4%BA%8Ekeepalived%E7%9A%84vip%E6%96%B9%E6%A1%88" tabindex="-1">基于Keepalived的VIP方案</h3><p><strong>Keepalived配置：</strong></p><pre><code class="language-bash"># keepalived.confglobal_defs {    router_id MYSQL_HA}vrrp_script chk_mysql {    script &quot;/usr/bin/mysqlchk&quot;    interval 2    weight 2    fall 2    rise 2}vrrp_instance VI_1 {    state BACKUP    interface eth0    virtual_router_id 51    priority 100    advert_int 1        authentication {        auth_type PASS        auth_pass 1111    }        virtual_ipaddress {        192.168.1.200    }        track_script {        chk_mysql    }        notify_master &quot;/etc/keepalived/notify.sh master&quot;    notify_backup &quot;/etc/keepalived/notify.sh backup&quot;    notify_fault &quot;/etc/keepalived/notify.sh fault&quot;}</code></pre><p><strong>MySQL健康检查脚本：</strong></p><pre><code class="language-bash">#!/bin/bash# mysqlchk - MySQL健康检查脚本MYSQL_HOST=&quot;localhost&quot;MYSQL_PORT=&quot;3306&quot;MYSQL_USER=&quot;health_check&quot;MYSQL_PASS=&quot;health_check_password&quot;MYSQL_CMD=&quot;/usr/bin/mysql&quot;# 检查MySQL是否可连接$MYSQL_CMD -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS -e &quot;SELECT 1;&quot; &gt; /dev/null 2&gt;&amp;1if [ $? 
-eq 0 ]; then    # 检查复制状态（如果是从库）    SLAVE_STATUS=$($MYSQL_CMD -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS -e &quot;SHOW SLAVE STATUS\G&quot; 2&gt;/dev/null)        if [ -n &quot;$SLAVE_STATUS&quot; ]; then        # 是从库，检查复制状态        IO_RUNNING=$(echo &quot;$SLAVE_STATUS&quot; | grep &quot;Slave_IO_Running:&quot; | awk &#39;{print $2}&#39;)        SQL_RUNNING=$(echo &quot;$SLAVE_STATUS&quot; | grep &quot;Slave_SQL_Running:&quot; | awk &#39;{print $2}&#39;)        SECONDS_BEHIND=$(echo &quot;$SLAVE_STATUS&quot; | grep &quot;Seconds_Behind_Master:&quot; | awk &#39;{print $2}&#39;)                if [ &quot;$IO_RUNNING&quot; = &quot;Yes&quot; ] &amp;&amp; [ &quot;$SQL_RUNNING&quot; = &quot;Yes&quot; ] &amp;&amp; [ &quot;$SECONDS_BEHIND&quot; -lt 60 ]; then            exit 0  # 健康        else            exit 1  # 不健康        fi    else        # 是主库，直接健康        exit 0    fielse    exit 1  # MySQL不可连接fi</code></pre><p><strong>状态切换通知脚本：</strong></p><pre><code class="language-bash">#!/bin/bash# notify.sh - 状态切换通知TYPE=$1VIP=&quot;192.168.1.200&quot;LOG_FILE=&quot;/var/log/keepalived.log&quot;log() {    echo &quot;$(date): $1&quot; &gt;&gt; $LOG_FILE}case $TYPE in    master)        log &quot;切换为MASTER状态，绑定VIP: $VIP&quot;        # 这里可以添加提升为主库的逻辑        # 比如设置read_only=OFF，通知应用等        mysql -e &quot;SET GLOBAL read_only=OFF;&quot;        ;;    backup)        log &quot;切换为BACKUP状态，释放VIP&quot;        # 设置只读模式        mysql -e &quot;SET GLOBAL read_only=ON;&quot;        ;;    fault)        log &quot;进入FAULT状态&quot;        ;;    *)        log &quot;未知状态: $TYPE&quot;        ;;esac</code></pre><h3 id="%E9%AB%98%E5%8F%AF%E7%94%A8%E6%9E%B6%E6%9E%84%E9%80%89%E5%9E%8B%E6%8C%87%E5%8D%97" 
tabindex="-1">高可用架构选型指南</h3><p><strong>架构选型矩阵：</strong></p><table><thead><tr><th>方案</th><th>适用场景</th><th>优点</th><th>缺点</th><th>复杂度</th></tr></thead><tbody><tr><td><strong>主从+VIP</strong></td><td>中小型应用，预算有限</td><td>简单可靠，成本低</td><td>手动切换，监控复杂</td><td>低</td></tr><tr><td><strong>MHA</strong></td><td>中型应用，需要自动故障转移</td><td>自动故障转移，成熟稳定</td><td>需要额外管理节点</td><td>中</td></tr><tr><td><strong>Orchestrator</strong></td><td>复杂拓扑，需要灵活管理</td><td>拓扑感知，API丰富</td><td>配置复杂，学习成本高</td><td>高</td></tr><tr><td><strong>MySQL InnoDB Cluster</strong></td><td>MySQL 8.0，原生高可用</td><td>官方方案，集成度高</td><td>版本要求高，资源消耗大</td><td>中</td></tr><tr><td><strong>云数据库</strong></td><td>快速部署，免运维</td><td>全托管，自动备份</td><td>成本较高，厂商锁定</td><td>低</td></tr></tbody></table><p><strong>选型考虑因素：</strong></p><pre><code class="language-sql">-- 业务需求评估-- 1. RTO（恢复时间目标）SELECT     CASE         WHEN rto_requirement &lt;= 30 THEN &#39;需要自动故障转移&#39;        WHEN rto_requirement &lt;= 300 THEN &#39;半自动故障转移&#39;        ELSE &#39;手动故障转移可接受&#39;    END as ha_levelFROM business_requirements;-- 2. RPO（数据恢复点目标）SELECT     CASE        WHEN rpo_requirement = 0 THEN &#39;需要同步复制&#39;        WHEN rpo_requirement &lt;= 1 THEN &#39;需要半同步复制&#39;         WHEN rpo_requirement &lt;= 60 THEN &#39;异步复制可接受&#39;        ELSE &#39;数据丢失可接受&#39;    END as data_protection_levelFROM business_requirements;-- 3. 读写分离需求SELECT     CASE        WHEN read_ratio &gt; 0.8 THEN &#39;需要强大的读写分离&#39;        WHEN read_ratio &gt; 0.5 THEN &#39;需要基础读写分离&#39;        ELSE &#39;读写分离非必需&#39;    END as read_write_separationFROM workload_analysis;</code></pre><h2 id="3.-%E6%95%B0%E6%8D%AE%E5%BA%93%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1" tabindex="-1">3. 
数据库架构设计</h2><h3 id="%E8%AF%BB%E5%86%99%E5%88%86%E7%A6%BB%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1" tabindex="-1">读写分离架构设计</h3><p><strong>应用层读写分离：</strong></p><pre><code class="language-java">// Java应用层读写分离示例@Componentpublic class DataSourceRouter {        @Value(&quot;${datasource.master.url}&quot;)    private String masterUrl;        @Value(&quot;${datasource.slave.url}&quot;)     private String slaveUrl;        private ThreadLocal&lt;Boolean&gt; readOnly = new ThreadLocal&lt;&gt;();        public void setReadOnly(boolean readOnly) {        this.readOnly.set(readOnly);    }        public DataSource getDataSource() {        if (Boolean.TRUE.equals(readOnly.get())) {            return createDataSource(slaveUrl);        } else {            return createDataSource(masterUrl);        }    }        // AOP切面自动设置读写分离    @Aspect    @Component    public class ReadWriteSeparationAspect {                @Around(&quot;@annotation(org.springframework.transaction.annotation.Transactional)&quot;)        public Object handleTransaction(ProceedingJoinPoint joinPoint) throws Throwable {            Transactional transactional = ((MethodSignature) joinPoint.getSignature())                .getMethod().getAnnotation(Transactional.class);                        if (transactional.readOnly()) {                DataSourceContextHolder.setReadOnly(true);            }                        try {                return joinPoint.proceed();            } finally {                DataSourceContextHolder.clear();            }        }    }}</code></pre><p><strong>中间件读写分离：</strong></p><pre><code class="language-yaml"># ShardingSphere配置示例# config-sharding.yamldataSources:  master_ds:    url: jdbc:mysql://master:3306/db?serverTimezone=UTC&amp;useSSL=false    username: root    password: password    connectionTimeoutMilliseconds: 30000    idleTimeoutMilliseconds: 60000    maxLifetimeMilliseconds: 1800000    maxPoolSize: 50  slave_ds_0:    url: jdbc:mysql://slave0:3306/db?serverTimezone=UTC&amp;useSSL=false   
 username: root
    password: password
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
  slave_ds_1:
    url: jdbc:mysql://slave1:3306/db?serverTimezone=UTC&amp;useSSL=false
    username: root
    password: password
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
# 读写分离规则（ShardingSphere 5.x语法，不同小版本的YAML关键字略有差异，以官方文档为准）
rules:
- !READWRITE_SPLITTING
  dataSources:
    readwrite_ds:
      writeDataSourceName: master_ds
      readDataSourceNames:
        - slave_ds_0
        - slave_ds_1
      loadBalancerName: round_robin
  loadBalancers:
    round_robin:
      type: ROUND_ROBIN</code></pre><h3 id="%E5%88%86%E5%BA%93%E5%88%86%E8%A1%A8%E7%AD%96%E7%95%A5%E4%B8%8E%E5%AE%9E%E7%8E%B0" tabindex="-1">分库分表策略与实现</h3><p><strong>水平分表策略：</strong></p><pre><code class="language-sql">-- 用户表按ID取模分表（user_id对16取模，路由到16张分表）
-- 创建分表
CREATE TABLE users_0000 LIKE users_template;
CREATE TABLE users_0001 LIKE users_template;
CREATE TABLE users_0002 LIKE users_template;
-- ... 
创建更多分表-- 分表路由函数DELIMITER //CREATE FUNCTION get_user_table_name(user_id BIGINT)RETURNS VARCHAR(64)DETERMINISTICBEGIN    DECLARE table_suffix VARCHAR(4);    SET table_suffix = LPAD(MOD(user_id, 16), 4, &#39;0&#39;);    RETURN CONCAT(&#39;users_&#39;, table_suffix);END //DELIMITER ;-- 分表查询示例SET @user_id = 123456;SET @table_name = get_user_table_name(@user_id);SET @sql = CONCAT(&#39;SELECT * FROM &#39;, @table_name, &#39; WHERE user_id = ?&#39;);PREPARE stmt FROM @sql;EXECUTE stmt USING @user_id;DEALLOCATE PREPARE stmt;</code></pre><p><strong>垂直分库设计：</strong></p><pre><code class="language-sql">-- 业务垂直拆分-- 用户库CREATE DATABASE user_center;USE user_center;CREATE TABLE users (    user_id BIGINT PRIMARY KEY,    username VARCHAR(50),    email VARCHAR(100),    password_hash VARCHAR(255),    created_at TIMESTAMP);CREATE TABLE user_profiles (    user_id BIGINT PRIMARY KEY,    real_name VARCHAR(100),    avatar_url VARCHAR(500),    bio TEXT);-- 订单库CREATE DATABASE order_center;USE order_center;CREATE TABLE orders (    order_id BIGINT PRIMARY KEY,    user_id BIGINT,  -- 跨库关联    total_amount DECIMAL(12,2),    status VARCHAR(20),    created_at TIMESTAMP);CREATE TABLE order_items (    item_id BIGINT PRIMARY KEY,    order_id BIGINT,    product_id BIGINT,    quantity INT,    price DECIMAL(10,2));-- 商品库CREATE DATABASE product_center;USE product_center;CREATE TABLE products (    product_id BIGINT PRIMARY KEY,    product_name VARCHAR(200),    category_id INT,    price DECIMAL(10,2),    stock_quantity INT);</code></pre><h3 id="%E6%95%B0%E6%8D%AE%E6%8B%86%E5%88%86%EF%BC%9A%E5%9E%82%E7%9B%B4%E6%8B%86%E5%88%86%E4%B8%8E%E6%B0%B4%E5%B9%B3%E6%8B%86%E5%88%86" tabindex="-1">数据拆分：垂直拆分与水平拆分</h3><p><strong>垂直拆分实施：</strong></p><pre><code class="language-sql">-- 原始大表CREATE TABLE user_comprehensive (    user_id BIGINT PRIMARY KEY,    -- 基础信息    username VARCHAR(50),    email VARCHAR(100),    password_hash VARCHAR(255),    -- 个人信息    real_name VARCHAR(100),    id_card VARCHAR(20),    phone VARCHAR(20),    
-- 扩展信息    education VARCHAR(50),    occupation VARCHAR(50),    income_level INT,    -- 行为信息    last_login_time TIMESTAMP,    login_count INT,    -- 其他字段...    created_at TIMESTAMP,    updated_at TIMESTAMP);-- 垂直拆分后-- 用户基础表（高频访问）CREATE TABLE users_basic (    user_id BIGINT PRIMARY KEY,    username VARCHAR(50),    email VARCHAR(100),    password_hash VARCHAR(255),    last_login_time TIMESTAMP,    login_count INT,    created_at TIMESTAMP);-- 用户详情表（低频访问）CREATE TABLE users_detail (    user_id BIGINT PRIMARY KEY,    real_name VARCHAR(100),    id_card VARCHAR(20),    phone VARCHAR(20),    education VARCHAR(50),    occupation VARCHAR(50),    income_level INT,    updated_at TIMESTAMP);-- 创建索引优化查询ALTER TABLE users_basic ADD INDEX idx_username (username);ALTER TABLE users_basic ADD INDEX idx_email (email);ALTER TABLE users_detail ADD INDEX idx_phone (phone);</code></pre><p><strong>水平拆分策略：</strong></p><pre><code class="language-sql">-- 时间范围分表（适用于时间序列数据）-- 按月分表CREATE TABLE logs_2023_01 LIKE logs_template;CREATE TABLE logs_2023_02 LIKE logs_template;CREATE TABLE logs_2023_03 LIKE logs_template;-- 时间分表管理存储过程DELIMITER //CREATE PROCEDURE create_next_month_table()BEGIN    DECLARE next_month VARCHAR(7);    DECLARE table_name VARCHAR(64);    DECLARE create_sql TEXT;        SET next_month = DATE_FORMAT(DATE_ADD(NOW(), INTERVAL 1 MONTH), &#39;%Y_%m&#39;);    SET table_name = CONCAT(&#39;logs_&#39;, next_month);    SET create_sql = CONCAT(&#39;CREATE TABLE IF NOT EXISTS &#39;, table_name, &#39; LIKE logs_template&#39;);        PREPARE stmt FROM create_sql;    EXECUTE stmt;    DEALLOCATE PREPARE stmt;        -- 记录创建日志    INSERT INTO table_creation_log (table_name, created_at)     VALUES (table_name, NOW());END //DELIMITER ;-- 地理分表（适用于地域性数据）CREATE TABLE users_north LIKE users_template;  -- 北方用户CREATE TABLE users_south LIKE users_template;  -- 南方用户CREATE TABLE users_east LIKE users_template;   -- 东方用户  CREATE TABLE users_west LIKE users_template;   -- 西方用户-- 基于业务特征分表CREATE TABLE users_vip 
LIKE users_template;    -- VIP用户CREATE TABLE users_normal LIKE users_template; -- 普通用户CREATE TABLE users_trial LIKE users_template;  -- 试用用户</code></pre><h3 id="%E5%88%86%E5%B8%83%E5%BC%8Fid%E7%94%9F%E6%88%90%E6%96%B9%E6%A1%88" tabindex="-1">分布式ID生成方案</h3><p><strong>数据库序列方案：</strong></p><pre><code class="language-sql">-- 基于数据库的ID生成器CREATE TABLE sequence_generator (    sequence_name VARCHAR(50) PRIMARY KEY,    current_value BIGINT NOT NULL DEFAULT 0,    step INT NOT NULL DEFAULT 1,    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP);-- 获取下一个ID的存储过程DELIMITER //CREATE FUNCTION next_id(seq_name VARCHAR(50))RETURNS BIGINTBEGIN    DECLARE current_val BIGINT;    DECLARE retry_count INT DEFAULT 0;    DECLARE max_retries INT DEFAULT 3;        retry_loop: WHILE retry_count &lt; max_retries DO        -- 获取当前值        SELECT current_value INTO current_val        FROM sequence_generator         WHERE sequence_name = seq_name;                IF current_val IS NULL THEN            -- 初始化序列            INSERT INTO sequence_generator (sequence_name, current_value)             VALUES (seq_name, 1)            ON DUPLICATE KEY UPDATE current_value = 1;            SET current_val = 1;        END IF;                -- 尝试更新        UPDATE sequence_generator         SET current_value = current_value + step,            updated_at = CURRENT_TIMESTAMP        WHERE sequence_name = seq_name           AND current_value = current_val;                IF ROW_COUNT() = 1 THEN            RETURN current_val + 1;        END IF;                SET retry_count = retry_count + 1;        DO SLEEP(0.01);  -- 短暂等待后重试    END WHILE;        -- 重试失败，抛出异常    SIGNAL SQLSTATE &#39;45000&#39; SET MESSAGE_TEXT = &#39;Failed to generate sequence ID&#39;;END //DELIMITER ;</code></pre><p><strong>Snowflake算法实现：</strong></p><pre><code class="language-sql">-- Snowflake ID生成器表CREATE TABLE snowflake_worker (    worker_id INT PRIMARY KEY,    datacenter_id INT NOT NULL,    worker_name VARCHAR(100),    
last_timestamp BIGINT,
    sequence BIGINT DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Snowflake ID生成函数
-- 注意：参数名不能与列名worker_id相同，否则WHERE worker_id = worker_id恒为真，这里改用p_worker_id
DELIMITER //
CREATE FUNCTION snowflake_next_id(p_worker_id INT)
RETURNS BIGINT
MODIFIES SQL DATA
BEGIN
    DECLARE epoch BIGINT DEFAULT 1609459200000; -- 2021-01-01
    DECLARE current_ms BIGINT;
    DECLARE last_ms BIGINT;
    DECLARE sequence_val BIGINT;
    DECLARE datacenter_id_val INT;

    -- 获取worker信息
    SELECT last_timestamp, sequence, datacenter_id
    INTO last_ms, sequence_val, datacenter_id_val
    FROM snowflake_worker
    WHERE worker_id = p_worker_id
    FOR UPDATE;  -- 加锁防止并发

    -- 计算当前时间戳
    SET current_ms = (UNIX_TIMESTAMP(NOW(3)) * 1000);

    IF current_ms &lt; last_ms THEN
        SIGNAL SQLSTATE &#39;45000&#39; SET MESSAGE_TEXT = &#39;Clock moved backwards&#39;;
    END IF;

    IF current_ms = last_ms THEN
        SET sequence_val = (sequence_val + 1) &amp; 4095;  -- 12位序列号，最大4095
        IF sequence_val = 0 THEN
            -- 序列号耗尽，等待下一毫秒
            SET current_ms = wait_next_ms(last_ms);
        END IF;
    ELSE
        SET sequence_val = 0;
    END IF;

    -- 更新worker状态
    UPDATE snowflake_worker
    SET last_timestamp = current_ms,
        sequence = sequence_val
    WHERE worker_id = p_worker_id;

    -- 生成ID: 时间戳(41位) + 数据中心ID(5位) + 工作节点ID(5位) + 序列号(12位)
    RETURN ((current_ms - epoch) &lt;&lt; 22)
         | (datacenter_id_val &lt;&lt; 17)
         | (p_worker_id &lt;&lt; 12)
         | sequence_val;
END //
CREATE FUNCTION wait_next_ms(last_ms BIGINT)
RETURNS BIGINT
NO SQL
BEGIN
    DECLARE current_ms BIGINT;
    SET current_ms = (UNIX_TIMESTAMP(NOW(3)) * 1000);
    WHILE current_ms &lt;= last_ms DO
        SET current_ms = (UNIX_TIMESTAMP(NOW(3)) * 1000);
    END WHILE;
    RETURN current_ms;
END //
DELIMITER ;</code></pre><h3 id="%E6%95%B0%E6%8D%AE%E8%BF%81%E7%A7%BB%E4%B8%8E%E5%90%8C%E6%AD%A5%E6%96%B9%E6%A1%88" tabindex="-1">数据迁移与同步方案</h3><p><strong>在线数据迁移：</strong></p><pre><code class="language-sql">-- 双写迁移方案
-- 1. 
准备阶段：创建新表，建立双写机制CREATE TABLE users_new LIKE users_old;-- 2. 数据同步阶段：存量数据迁移INSERT INTO users_new SELECT * FROM users_old WHERE id &gt; ? AND id &lt;= ?;  -- 分批迁移-- 3. 增量数据双写-- 应用程序同时写入users_old和users_new-- 4. 数据验证SELECT     COUNT(*) as old_count,    (SELECT COUNT(*) FROM users_new) as new_count,    COUNT(*) - (SELECT COUNT(*) FROM users_new) as diffFROM users_old;-- 5. 切换阶段：停止写入旧表，全面使用新表-- 6. 清理阶段：删除旧表-- 使用pt-online-schema-change工具-- pt-online-schema-change --alter=&quot;ADD COLUMN new_column INT&quot; D=database,t=table --execute</code></pre><p><strong>数据同步监控：</strong></p><pre><code class="language-sql">-- 创建数据同步监控表CREATE TABLE data_sync_monitor (    id BIGINT AUTO_INCREMENT PRIMARY KEY,    sync_job VARCHAR(100) NOT NULL,    source_count BIGINT,    target_count BIGINT,    diff_count BIGINT,    sync_status ENUM(&#39;running&#39;, &#39;completed&#39;, &#39;failed&#39;),    started_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    completed_at TIMESTAMP NULL,    error_message TEXT);-- 数据一致性检查存储过程DELIMITER //CREATE PROCEDURE check_data_consistency(    IN source_table VARCHAR(64),    IN target_table VARCHAR(64),    IN primary_key VARCHAR(64))BEGIN    DECLARE source_total BIGINT;    DECLARE target_total BIGINT;    DECLARE diff_count BIGINT;        -- 检查记录总数    SET @source_sql = CONCAT(&#39;SELECT COUNT(*) INTO @source_count FROM &#39;, source_table);    PREPARE stmt1 FROM @source_sql;    EXECUTE stmt1;    DEALLOCATE PREPARE stmt1;        SET @target_sql = CONCAT(&#39;SELECT COUNT(*) INTO @target_count FROM &#39;, target_table);    PREPARE stmt2 FROM @target_sql;    EXECUTE stmt2;    DEALLOCATE PREPARE stmt2;        SET source_total = @source_count;    SET target_total = @target_count;    SET diff_count = ABS(source_total - target_total);        -- 记录检查结果    INSERT INTO data_sync_monitor (sync_job, source_count, target_count, diff_count, sync_status)    VALUES (CONCAT(source_table, &#39;_to_&#39;, target_table), source_total, target_total, diff_count,            CASE WHEN 
diff_count = 0 THEN &#39;completed&#39; ELSE &#39;failed&#39; END);        -- 如果有差异，记录具体差异数据    IF diff_count &gt; 0 THEN        -- 这里可以添加更详细的差异分析        INSERT INTO data_diff_log (sync_job, diff_type, diff_details)        VALUES (CONCAT(source_table, &#39;_to_&#39;, target_table), &#39;count_mismatch&#39;,                CONCAT(&#39;Source: &#39;, source_total, &#39;, Target: &#39;, target_total));    END IF;    END //DELIMITER ;</code></pre><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>通过本篇的深入学习，我们掌握了MySQL高可用架构设计的核心知识：</p><ol><li><strong>主从复制</strong>：理解了复制原理、配置方法和故障处理</li><li><strong>高可用方案</strong>：掌握了MHA、Orchestrator、Keepalived等工具的使用</li><li><strong>架构设计</strong>：学会了读写分离、分库分表、分布式ID生成等高级技术</li><li><strong>数据迁移</strong>：了解了在线数据迁移和同步的最佳实践</li></ol><p><strong>关键架构原则：</strong></p><ul><li><strong>冗余设计</strong>：确保没有单点故障</li><li><strong>自动故障转移</strong>：减少人工干预，提高可用性</li><li><strong>监控告警</strong>：及时发现问题并处理</li><li><strong>容量规划</strong>：提前规划系统扩展能力</li><li><strong>数据安全</strong>：保证数据的一致性和完整性</li></ul><p><strong>架构演进路径：</strong></p><ol><li><strong>单机架构</strong> → <strong>主从复制</strong></li><li><strong>主从复制</strong> → <strong>读写分离</strong></li><li><strong>读写分离</strong> → <strong>分库分表</strong></li><li><strong>分库分表</strong> → <strong>分布式数据库</strong></li></ol><p><strong>动手练习：</strong></p><ol><li>搭建MySQL主从复制环境，并测试故障转移</li><li>配置MHA或Orchestrator实现自动故障转移</li><li>设计并实施读写分离架构</li><li>实践分库分表方案，解决单表数据量过大的问题</li><li>实现分布式ID生成方案</li></ol><p>欢迎在评论区分享你的高可用架构实践经验和遇到的问题！</p>]]>
                    </description>
                    <pubDate>Sun, 18 May 2025 02:38:28 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[MySql入门：SQL编程与高级特性]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2950</link>
                    <description>
                            <![CDATA[<h1 id="sql%E7%BC%96%E7%A8%8B%E4%B8%8E%E9%AB%98%E7%BA%A7%E7%89%B9%E6%80%A7" tabindex="-1">SQL编程与高级特性</h1><blockquote><p>SQL不仅仅是简单的数据查询语言，它拥有强大的编程能力和高级特性。掌握这些特性可以让你写出更高效、更优雅的数据库操作代码。今天，我们将深入探讨MySQL的SQL编程能力，从基础查询到高级特性，帮助你成为SQL编程的高手。</p></blockquote><h2 id="1.-sql%E5%9F%BA%E7%A1%80%E4%B8%8E%E9%AB%98%E7%BA%A7%E6%9F%A5%E8%AF%A2" tabindex="-1">1. SQL基础与高级查询</h2><h3 id="ddl%E3%80%81dml%E3%80%81dcl%E3%80%81tcl%E5%85%A8%E9%9D%A2%E6%8E%8C%E6%8F%A1" tabindex="-1">DDL、DML、DCL、TCL全面掌握</h3><p><strong>数据定义语言（DDL） - 定义数据结构：</strong></p><pre><code class="language-sql">-- 数据库操作CREATE DATABASE company CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;ALTER DATABASE company CHARACTER SET utf8mb4;DROP DATABASE IF EXISTS old_company;-- 表操作CREATE TABLE employees (    emp_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,    emp_name VARCHAR(100) NOT NULL,    email VARCHAR(255) UNIQUE,    salary DECIMAL(10,2) CHECK (salary &gt; 0),    dept_id INT UNSIGNED,    hire_date DATE DEFAULT (CURRENT_DATE),    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP) ENGINE=InnoDB COMMENT=&#39;员工表&#39;;-- 表结构修改ALTER TABLE employees ADD COLUMN phone VARCHAR(20) AFTER email,MODIFY COLUMN emp_name VARCHAR(150) NOT NULL,ADD INDEX idx_dept_hire (dept_id, hire_date);-- 表维护ANALYZE TABLE employees;  -- 更新统计信息OPTIMIZE TABLE employees; -- 优化表存储RENAME TABLE employees TO staff;</code></pre><p><strong>数据操作语言（DML） - 操作数据：</strong></p><pre><code class="language-sql">-- 插入数据INSERT INTO employees (emp_name, email, salary, dept_id, hire_date)VALUES     (&#39;张三&#39;, &#39;zhangsan@company.com&#39;, 8000.00, 1, &#39;2023-01-15&#39;),    (&#39;李四&#39;, &#39;lisi@company.com&#39;, 7500.00, 1, &#39;2023-02-20&#39;),    (&#39;王五&#39;, &#39;wangwu@company.com&#39;, 9000.00, 2, &#39;2023-03-10&#39;);-- 插入并忽略重复键INSERT IGNORE INTO employees (emp_name, email, salary, dept_id)VALUES (&#39;赵六&#39;, &#39;zhangsan@company.com&#39;, 
8500.00, 2);  -- 邮箱重复，被忽略-- 批量插入（高效方式）INSERT INTO employees (emp_name, email, salary, dept_id)SELECT     CONCAT(&#39;员工&#39;, num) as emp_name,    CONCAT(&#39;emp&#39;, num, &#39;@company.com&#39;) as email,    5000 + (RAND() * 5000) as salary,    FLOOR(1 + RAND() * 3) as dept_idFROM (    SELECT @row := @row + 1 as num    FROM information_schema.columns c1,         information_schema.columns c2,         (SELECT @row := 0) r    LIMIT 100) numbers;-- 更新数据UPDATE employees SET salary = salary * 1.1,    updated_at = CURRENT_TIMESTAMPWHERE dept_id = 1   AND hire_date &lt; &#39;2023-06-01&#39;;-- 使用JOIN更新UPDATE employees eJOIN departments d ON e.dept_id = d.dept_idSET e.salary = e.salary * 1.05WHERE d.dept_name = &#39;技术部&#39;;-- 删除数据DELETE FROM employees WHERE emp_id = 100;-- 使用JOIN删除DELETE e FROM employees eLEFT JOIN departments d ON e.dept_id = d.dept_idWHERE d.dept_id IS NULL;  -- 删除部门不存在的员工</code></pre><p><strong>数据控制语言（DCL） - 权限管理：</strong></p><pre><code class="language-sql">-- 用户管理CREATE USER &#39;report_user&#39;@&#39;192.168.1.%&#39; IDENTIFIED BY &#39;secure_password_123&#39;;CREATE USER &#39;app_user&#39;@&#39;%&#39; IDENTIFIED WITH mysql_native_password BY &#39;app_password&#39;;-- 权限管理GRANT SELECT ON company.* TO &#39;report_user&#39;@&#39;192.168.1.%&#39;;GRANT SELECT, INSERT, UPDATE, DELETE ON company.employees TO &#39;app_user&#39;@&#39;%&#39;;GRANT EXECUTE ON PROCEDURE company.CalculateDepartmentStats TO &#39;report_user&#39;@&#39;192.168.1.%&#39;;-- 角色管理（MySQL 8.0+）CREATE ROLE data_reader;GRANT SELECT ON company.* TO data_reader;GRANT data_reader TO &#39;report_user&#39;@&#39;192.168.1.%&#39;;-- 权限回收REVOKE DELETE ON company.employees FROM &#39;app_user&#39;@&#39;%&#39;;-- 查看权限SHOW GRANTS FOR &#39;report_user&#39;@&#39;192.168.1.%&#39;;</code></pre><p><strong>事务控制语言（TCL） - 事务管理：</strong></p><pre><code class="language-sql">-- 基本事务START TRANSACTION;INSERT INTO accounts (account_id, balance) VALUES (1, 1000.00);INSERT INTO accounts (account_id, balance) 
VALUES (2, 2000.00);
COMMIT;
-- 复杂事务控制
-- 注意：IF ... END IF 与 SIGNAL 属于流程控制语句，只能写在存储过程/函数等存储程序内，
-- 下面的转账逻辑应封装进存储过程执行，此处仅为示意
START TRANSACTION;
SAVEPOINT before_transfer;
UPDATE accounts SET balance = balance - 500 WHERE account_id = 1;
-- 检查约束
SELECT balance INTO @bal FROM accounts WHERE account_id = 1 FOR UPDATE;
IF @bal &lt; 0 THEN
    ROLLBACK TO SAVEPOINT before_transfer;
    SIGNAL SQLSTATE &#39;45000&#39; SET MESSAGE_TEXT = &#39;余额不足&#39;;
END IF;
UPDATE accounts SET balance = balance + 500 WHERE account_id = 2;
COMMIT;</code></pre><h3 id="%E5%A4%8D%E6%9D%82%E6%9F%A5%E8%AF%A2%EF%BC%9A%E5%AD%90%E6%9F%A5%E8%AF%A2%E3%80%81%E8%BF%9E%E6%8E%A5%E6%9F%A5%E8%AF%A2%E3%80%81%E8%81%94%E5%90%88%E6%9F%A5%E8%AF%A2" tabindex="-1">复杂查询：子查询、连接查询、联合查询</h3><p><strong>子查询深度应用：</strong></p><pre><code class="language-sql">-- 标量子查询（返回单个值）
SELECT
    emp_name,
    salary,
    (SELECT AVG(salary) FROM employees) as avg_salary,
    salary - (SELECT AVG(salary) FROM employees) as diff_from_avg
FROM employees
WHERE salary &gt; (SELECT AVG(salary) FROM employees);
-- 列子查询（返回一列）
SELECT
    dept_name
FROM departments
WHERE dept_id IN (
    SELECT DISTINCT dept_id
    FROM employees
    WHERE salary &gt; 10000
);
-- 行子查询（返回一行）
SELECT
    emp_name,
    salary
FROM employees
WHERE (salary, dept_id) = (
    SELECT MAX(salary), dept_id
    FROM employees
    WHERE dept_id = 1
    GROUP BY dept_id  -- 避免only_full_group_by模式下聚合列与普通列混用报错
);
-- 表子查询（在FROM中）
SELECT
    dept_stats.dept_name,
    dept_stats.avg_salary,
    dept_stats.employee_count
FROM (
    SELECT
        d.dept_name,
        AVG(e.salary) as avg_salary,
        COUNT(e.emp_id) as employee_count
    FROM departments d
    LEFT JOIN employees e ON d.dept_id = e.dept_id
    GROUP BY d.dept_id, d.dept_name
) dept_stats
WHERE dept_stats.avg_salary &gt; 8000;
-- 关联子查询
SELECT
    e1.emp_name,
    e1.salary,
    e1.dept_id
FROM employees e1
WHERE e1.salary &gt; (
    SELECT AVG(e2.salary)
    FROM employees e2
    WHERE e2.dept_id = e1.dept_id  -- 关联外部查询
);
-- EXISTS子查询
SELECT
    d.dept_name
FROM departments d
WHERE EXISTS (
    SELECT 1
    FROM employees e
    WHERE e.dept_id = d.dept_id
      AND e.salary &gt; 
15000);</code></pre><p><strong>高级连接查询：</strong></p><pre><code class="language-sql">-- 内连接（INNER JOIN）SELECT     e.emp_name,    d.dept_name,    p.project_nameFROM employees eINNER JOIN departments d ON e.dept_id = d.dept_idINNER JOIN projects p ON e.dept_id = p.dept_idWHERE p.status = &#39;active&#39;;-- 左外连接（LEFT JOIN）SELECT     d.dept_name,    COUNT(e.emp_id) as employee_countFROM departments dLEFT JOIN employees e ON d.dept_id = e.dept_idGROUP BY d.dept_id, d.dept_name;-- 右外连接（RIGHT JOIN）SELECT     e.emp_name,    p.project_nameFROM employees eRIGHT JOIN project_assignments pa ON e.emp_id = pa.emp_idRIGHT JOIN projects p ON pa.project_id = p.project_id;-- 全外连接模拟（UNION + LEFT/RIGHT JOIN）SELECT     e.emp_name,    d.dept_nameFROM employees eLEFT JOIN departments d ON e.dept_id = d.dept_idUNIONSELECT     e.emp_name,    d.dept_nameFROM employees eRIGHT JOIN departments d ON e.dept_id = d.dept_id;-- 自连接（查询员工和经理）SELECT     emp.emp_name as employee_name,    mgr.emp_name as manager_nameFROM employees empLEFT JOIN employees mgr ON emp.manager_id = mgr.emp_id;-- 交叉连接（CROSS JOIN）SELECT     e.emp_name,    p.project_nameFROM employees eCROSS JOIN projects pWHERE e.dept_id = p.dept_id;-- 自然连接（NATURAL JOIN）- 不推荐在生产环境使用SELECT     emp_name,    dept_nameFROM employeesNATURAL JOIN departments;</code></pre><p><strong>联合查询与集合操作：</strong></p><pre><code class="language-sql">-- UNION（去重）SELECT     emp_name as name,    &#39;employee&#39; as type,    salaryFROM employeesWHERE salary &gt; 8000UNIONSELECT     dept_name as name,    &#39;department&#39; as type,    NULL as salaryFROM departmentsWHERE budget &gt; 100000;-- UNION ALL（不去重）SELECT     emp_name,    dept_idFROM employeesWHERE hire_date &gt;= &#39;2023-01-01&#39;UNION ALLSELECT     emp_name,    dept_idFROM former_employeesWHERE leave_date &gt;= &#39;2023-01-01&#39;;-- INTERSECT模拟（MySQL 8.0.31+ 直接支持）SELECT emp_nameFROM employeesWHERE dept_id = 1INTERSECTSELECT emp_nameFROM employeesWHERE salary &gt; 8000;-- 在旧版本中模拟INTERSECTSELECT 
DISTINCT e1.emp_nameFROM employees e1INNER JOIN employees e2 ON e1.emp_name = e2.emp_nameWHERE e1.dept_id = 1 AND e2.salary &gt; 8000;-- EXCEPT/MINUS模拟SELECT emp_nameFROM employeesWHERE dept_id = 1EXCEPTSELECT emp_nameFROM employeesWHERE salary &lt;= 8000;-- 在旧版本中模拟EXCEPTSELECT e1.emp_nameFROM employees e1LEFT JOIN employees e2 ON e1.emp_name = e2.emp_name AND e2.salary &lt;= 8000WHERE e1.dept_id = 1 AND e2.emp_name IS NULL;</code></pre><h3 id="%E7%AA%97%E5%8F%A3%E5%87%BD%E6%95%B0%EF%BC%9A%E6%8E%92%E5%90%8D%E3%80%81%E5%88%86%E7%BB%84%E3%80%81%E7%B4%AF%E8%AE%A1%E8%AE%A1%E7%AE%97" tabindex="-1">窗口函数：排名、分组、累计计算</h3><p><strong>排名窗口函数：</strong></p><pre><code class="language-sql">-- 基本排名SELECT     emp_name,    salary,    dept_id,    ROW_NUMBER() OVER (ORDER BY salary DESC) as rank_all,    RANK() OVER (ORDER BY salary DESC) as rank_with_ties,    DENSE_RANK() OVER (ORDER BY salary DESC) as dense_rank_no_gapsFROM employees;-- 分区排名SELECT     emp_name,    salary,    dept_id,    ROW_NUMBER() OVER (PARTITION BY dept_id ORDER BY salary DESC) as dept_rank,    RANK() OVER (PARTITION BY dept_id ORDER BY salary DESC) as dept_rank_with_tiesFROM employees;-- 前N名查询WITH ranked_employees AS (    SELECT         emp_name,        salary,        dept_id,        ROW_NUMBER() OVER (PARTITION BY dept_id ORDER BY salary DESC) as rn    FROM employees)SELECT *FROM ranked_employeesWHERE rn &lt;= 3;  -- 每个部门前3名</code></pre><p><strong>聚合窗口函数：</strong></p><pre><code class="language-sql">-- 累计计算SELECT     emp_name,    hire_date,    salary,    SUM(salary) OVER (        ORDER BY hire_date         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW    ) as running_total,        AVG(salary) OVER (        PARTITION BY dept_id        ORDER BY hire_date        ROWS BETWEEN 2 PRECEDING AND CURRENT ROW    ) as moving_avg_3,        SUM(salary) OVER (        PARTITION BY dept_id    ) as dept_total_salaryFROM employeesORDER BY hire_date;-- 前后值访问SELECT     emp_name,    hire_date,    salary,    LAG(salary, 1) OVER 
(ORDER BY hire_date) as prev_salary,    LEAD(salary, 1) OVER (ORDER BY hire_date) as next_salary,    salary - LAG(salary, 1) OVER (ORDER BY hire_date) as salary_changeFROM employees;-- 首尾值访问SELECT     emp_name,    dept_id,    salary,    FIRST_VALUE(salary) OVER (        PARTITION BY dept_id         ORDER BY salary DESC    ) as highest_in_dept,        LAST_VALUE(salary) OVER (        PARTITION BY dept_id         ORDER BY salary DESC        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING    ) as lowest_in_deptFROM employees;</code></pre><p><strong>窗口帧详解：</strong></p><pre><code class="language-sql">-- 不同的窗口帧定义SELECT     emp_name,    hire_date,    salary,    -- 从开始到当前行    SUM(salary) OVER (        ORDER BY hire_date        ROWS UNBOUNDED PRECEDING    ) as running_total,        -- 最近3行（包括当前行）    AVG(salary) OVER (        ORDER BY hire_date        ROWS BETWEEN 2 PRECEDING AND CURRENT ROW    ) as moving_avg_3,        -- 前后各1行    AVG(salary) OVER (        ORDER BY hire_date        ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING    ) as centered_avg,        -- 分组内所有行    AVG(salary) OVER (        PARTITION BY dept_id    ) as dept_avgFROM employeesORDER BY hire_date;</code></pre><h3 id="%E5%85%AC%E7%94%A8%E8%A1%A8%E8%A1%A8%E8%BE%BE%E5%BC%8F%EF%BC%88cte%EF%BC%89%E4%B8%8E%E9%80%92%E5%BD%92%E6%9F%A5%E8%AF%A2" tabindex="-1">公用表表达式（CTE）与递归查询</h3><p><strong>普通CTE：</strong></p><pre><code class="language-sql">-- 简单CTEWITH department_stats AS (    SELECT         dept_id,        COUNT(*) as employee_count,        AVG(salary) as avg_salary,        MAX(salary) as max_salary    FROM employees    GROUP BY dept_id),high_paid_employees AS (    SELECT         e.emp_name,        e.salary,        d.dept_name    FROM employees e    JOIN departments d ON e.dept_id = d.dept_id    JOIN department_stats ds ON e.dept_id = ds.dept_id    WHERE e.salary &gt; ds.avg_salary * 1.2)SELECT     dept_name,    COUNT(*) as high_paid_count,    AVG(salary) as avg_high_salaryFROM high_paid_employeesGROUP BY 
dept_nameORDER BY high_paid_count DESC;-- 多CTE链式使用WITH employee_data AS (    SELECT         emp_id,        emp_name,        salary,        dept_id    FROM employees    WHERE status = &#39;active&#39;),department_data AS (    SELECT         dept_id,        dept_name,        budget    FROM departments),combined_data AS (    SELECT         e.emp_name,        e.salary,        d.dept_name,        d.budget    FROM employee_data e    JOIN department_data d ON e.dept_id = d.dept_id)SELECT     dept_name,    AVG(salary) as avg_salary,    SUM(salary) / budget as salary_budget_ratioFROM combined_dataGROUP BY dept_name, budgetHAVING salary_budget_ratio &lt; 0.8;</code></pre><p><strong>递归CTE：</strong></p><pre><code class="language-sql">-- 组织结构递归查询CREATE TABLE organization (    emp_id INT PRIMARY KEY,    emp_name VARCHAR(100),    manager_id INT,    title VARCHAR(100));-- 递归CTE查询完整汇报链WITH RECURSIVE employee_hierarchy AS (    -- 锚点：顶级管理者（没有经理）    SELECT         emp_id,        emp_name,        manager_id,        title,        0 as level,        CAST(emp_name AS CHAR(1000)) as hierarchy_path    FROM organization    WHERE manager_id IS NULL        UNION ALL        -- 递归部分：下属员工    SELECT         o.emp_id,        o.emp_name,        o.manager_id,        o.title,        eh.level + 1,        CONCAT(eh.hierarchy_path, &#39; -&gt; &#39;, o.emp_name)    FROM organization o    JOIN employee_hierarchy eh ON o.manager_id = eh.emp_id)SELECT     emp_id,    emp_name,    title,    level,    hierarchy_pathFROM employee_hierarchyORDER BY hierarchy_path;-- 数字序列生成WITH RECURSIVE number_sequence AS (    SELECT 1 as num    UNION ALL    SELECT num + 1    FROM number_sequence    WHERE num &lt; 100)SELECT num FROM number_sequence;-- 日期序列生成-- 递归CTE各列的类型由锚点部分决定，锚点应显式CAST为DATEWITH RECURSIVE date_sequence AS (    SELECT CAST(&#39;2023-01-01&#39; AS DATE) as date_val    UNION ALL    SELECT date_val + INTERVAL 1 DAY    FROM date_sequence    WHERE date_val &lt; &#39;2023-01-31&#39;)SELECT     date_val,    DAYNAME(date_val) as day_nameFROM 
date_sequence;</code></pre><h3 id="json%E5%87%BD%E6%95%B0%E4%B8%8E%E7%A9%BA%E9%97%B4%E6%95%B0%E6%8D%AE%E6%9F%A5%E8%AF%A2" tabindex="-1">JSON函数与空间数据查询</h3><p><strong>JSON函数深度应用：</strong></p><pre><code class="language-sql">-- JSON创建函数SELECT     emp_name,    salary,    JSON_OBJECT(        &#39;name&#39;, emp_name,        &#39;salary&#39;, salary,        &#39;department&#39;, dept_id,        &#39;hire_year&#39;, YEAR(hire_date)    ) as emp_jsonFROM employeesLIMIT 5;-- JSON数组操作SELECT     dept_id,    JSON_ARRAYAGG(        JSON_OBJECT(            &#39;name&#39;, emp_name,            &#39;salary&#39;, salary        )    ) as employees_jsonFROM employeesGROUP BY dept_id;-- JSON查询函数SELECT     emp_name,    JSON_EXTRACT(profile, &#39;$.contact.email&#39;) as email,    profile-&gt;&gt;&#39;$.contact.phone&#39; as phone,  -- 简写形式    JSON_UNQUOTE(JSON_EXTRACT(profile, &#39;$.address.city&#39;)) as cityFROM employeesWHERE JSON_CONTAINS_PATH(profile, &#39;one&#39;, &#39;$.skills&#39;)   AND JSON_LENGTH(profile-&gt;&#39;$.skills&#39;) &gt;= 3;-- JSON修改函数UPDATE employees SET profile = JSON_SET(    profile,    &#39;$.last_updated&#39;, CURRENT_TIMESTAMP,    &#39;$.contact.phone&#39;, &#39;+86-13800138000&#39;)WHERE emp_id = 1001;-- JSON搜索和索引SELECT     emp_name,    profile-&gt;&gt;&#39;$.title&#39; as job_titleFROM employeesWHERE JSON_SEARCH(profile, &#39;one&#39;, &#39;%经理%&#39;) IS NOT NULL;-- 创建JSON索引（MySQL 8.0.17+）CREATE TABLE products (    id INT PRIMARY KEY AUTO_INCREMENT,    product_data JSON,        -- 函数索引    INDEX idx_product_name ((CAST(product_data-&gt;&gt;&#39;$.name&#39; AS CHAR(100)))),    INDEX idx_product_price ((CAST(product_data-&gt;&gt;&#39;$.price&#39; AS DECIMAL(10,2)))));</code></pre><p><strong>空间数据查询：</strong></p><pre><code class="language-sql">-- 空间数据创建和查询CREATE TABLE locations (    location_id INT PRIMARY KEY AUTO_INCREMENT,    location_name VARCHAR(100),    coordinates POINT NOT NULL,    area_boundary POLYGON,    SPATIAL INDEX idx_coordinates (coordinates),   
 SPATIAL INDEX idx_area (area_boundary));-- 插入空间数据INSERT INTO locations (location_name, coordinates, area_boundary)VALUES (    &#39;公司总部&#39;,    ST_GeomFromText(&#39;POINT(116.3974 39.9093)&#39;),    ST_GeomFromText(&#39;POLYGON((116.396 39.908, 116.398 39.908, 116.398 39.910, 116.396 39.910, 116.396 39.908))&#39;));-- 空间查询SELECT     location_name,    ST_AsText(coordinates) as coordinates,    ST_X(coordinates) as longitude,    ST_Y(coordinates) as latitudeFROM locations;-- 距离计算SELECT     l1.location_name as place1,    l2.location_name as place2,    ST_Distance_Sphere(l1.coordinates, l2.coordinates) as distance_metersFROM locations l1CROSS JOIN locations l2WHERE l1.location_id != l2.location_id;-- 包含查询SELECT     location_nameFROM locationsWHERE ST_Contains(    area_boundary,     ST_GeomFromText(&#39;POINT(116.3974 39.9093)&#39;));-- 缓冲区查询-- 注意：未指定SRID时ST_Buffer的距离单位是坐标单位（此处即经纬度的&#34;度&#34;）而非米，0.01度约合1公里SELECT     location_name,    ST_AsText(ST_Buffer(coordinates, 0.01)) as buffer_zoneFROM locations;</code></pre><h2 id="2.-%E5%AD%98%E5%82%A8%E8%BF%87%E7%A8%8B%E4%B8%8E%E5%87%BD%E6%95%B0" tabindex="-1">2. 
存储过程与函数</h2><h3 id="%E5%AD%98%E5%82%A8%E8%BF%87%E7%A8%8B%E7%BC%96%E5%86%99%E4%B8%8E%E8%B0%83%E8%AF%95" tabindex="-1">存储过程编写与调试</h3><p><strong>基础存储过程：</strong></p><pre><code class="language-sql">-- 创建存储过程DELIMITER //CREATE PROCEDURE GetEmployeeStatistics(    IN p_dept_id INT,    OUT p_employee_count INT,    OUT p_avg_salary DECIMAL(10,2),    OUT p_max_salary DECIMAL(10,2))BEGIN    -- 声明局部变量    DECLARE v_total_budget DECIMAL(12,2);        -- 业务逻辑    SELECT         COUNT(*),        AVG(salary),        MAX(salary)    INTO         p_employee_count,        p_avg_salary,        p_max_salary    FROM employees    WHERE dept_id = p_dept_id      AND status = &#39;active&#39;;        -- 调试信息（在生产环境可注释）    SELECT CONCAT(&#39;部门 &#39;, p_dept_id, &#39; 统计完成&#39;) as debug_info;    END //DELIMITER ;-- 调用存储过程CALL GetEmployeeStatistics(1, @emp_count, @avg_sal, @max_sal);SELECT @emp_count, @avg_sal, @max_sal;</code></pre><p><strong>带条件逻辑的存储过程：</strong></p><pre><code class="language-sql">DELIMITER //CREATE PROCEDURE UpdateEmployeeSalary(    IN p_emp_id INT,    IN p_increase_percent DECIMAL(5,2),    OUT p_result VARCHAR(500))BEGIN    DECLARE v_current_salary DECIMAL(10,2);    DECLARE v_new_salary DECIMAL(10,2);    DECLARE v_emp_name VARCHAR(100);    DECLARE EXIT HANDLER FOR SQLEXCEPTION    BEGIN        GET DIAGNOSTICS CONDITION 1            @sqlstate = RETURNED_SQLSTATE,            @errno = MYSQL_ERRNO,            @text = MESSAGE_TEXT;        SET p_result = CONCAT(&#39;错误: &#39;, @errno, &#39; - &#39;, @text);        ROLLBACK;    END;        START TRANSACTION;        -- 获取当前薪资    SELECT emp_name, salary     INTO v_emp_name, v_current_salary    FROM employees     WHERE emp_id = p_emp_id    FOR UPDATE;  -- 加锁防止并发更新        IF v_current_salary IS NULL THEN        SET p_result = CONCAT(&#39;员工ID &#39;, p_emp_id, &#39; 不存在&#39;);        ROLLBACK;    ELSE        -- 计算新薪资        SET v_new_salary = v_current_salary * (1 + p_increase_percent / 100);                -- 更新薪资        UPDATE employees 
        SET salary = v_new_salary,            updated_at = CURRENT_TIMESTAMP        WHERE emp_id = p_emp_id;                -- 记录薪资变更历史        INSERT INTO salary_history (emp_id, old_salary, new_salary, change_date, change_reason)        VALUES (p_emp_id, v_current_salary, v_new_salary, NOW(), &#39;年度调薪&#39;);                SET p_result = CONCAT(            &#39;员工 &#39;, v_emp_name,             &#39; 薪资从 &#39;, v_current_salary,             &#39; 调整为 &#39;, v_new_salary,            &#39; (涨幅 &#39;, p_increase_percent, &#39;%)&#39;        );                COMMIT;    END IF;    END //DELIMITER ;</code></pre><p><strong>游标使用：</strong></p><pre><code class="language-sql">DELIMITER //CREATE PROCEDURE ProcessDepartmentSalaries(IN p_dept_id INT)BEGIN    DECLARE v_done INT DEFAULT 0;    DECLARE v_emp_id INT;    DECLARE v_emp_name VARCHAR(100);    DECLARE v_current_salary DECIMAL(10,2);    DECLARE v_new_salary DECIMAL(10,2);        -- 声明游标    DECLARE emp_cursor CURSOR FOR        SELECT emp_id, emp_name, salary        FROM employees        WHERE dept_id = p_dept_id          AND status = &#39;active&#39;;        -- 声明结束处理程序    DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_done = 1;        -- 创建临时表存储结果    CREATE TEMPORARY TABLE IF NOT EXISTS salary_adjustments (        emp_id INT,        emp_name VARCHAR(100),        old_salary DECIMAL(10,2),        new_salary DECIMAL(10,2),        increase_amount DECIMAL(10,2)    );        OPEN emp_cursor;        emp_loop: LOOP        FETCH emp_cursor INTO v_emp_id, v_emp_name, v_current_salary;        IF v_done THEN            LEAVE emp_loop;        END IF;                -- 业务逻辑：根据规则调整薪资        IF v_current_salary &lt; 5000 THEN            SET v_new_salary = v_current_salary * 1.15;  -- 低薪员工涨15%        ELSEIF v_current_salary BETWEEN 5000 AND 10000 THEN            SET v_new_salary = v_current_salary * 1.10;  -- 中等薪资涨10%        ELSE            SET v_new_salary = v_current_salary * 1.05;  -- 高薪员工涨5%        END IF;                -- 更新薪资       
 UPDATE employees         SET salary = v_new_salary        WHERE emp_id = v_emp_id;                -- 记录调整结果        INSERT INTO salary_adjustments         VALUES (v_emp_id, v_emp_name, v_current_salary, v_new_salary, v_new_salary - v_current_salary);            END LOOP;        CLOSE emp_cursor;        -- 返回处理结果    SELECT * FROM salary_adjustments;        DROP TEMPORARY TABLE salary_adjustments;    END //DELIMITER ;</code></pre><h3 id="%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%BC%80%E5%8F%91" tabindex="-1">自定义函数开发</h3><p><strong>标量函数：</strong></p><pre><code class="language-sql">DELIMITER //CREATE FUNCTION CalculateTax(    p_salary DECIMAL(10,2),    p_tax_rate DECIMAL(5,3)) RETURNS DECIMAL(10,2)DETERMINISTICREADS SQL DATABEGIN    DECLARE v_tax_amount DECIMAL(10,2);        -- 计算税费（简单的线性计算）    SET v_tax_amount = p_salary * p_tax_rate;        -- 确保不为负数    IF v_tax_amount &lt; 0 THEN        SET v_tax_amount = 0;    END IF;        RETURN v_tax_amount;END //DELIMITER ;-- 使用自定义函数SELECT     emp_name,    salary,    CalculateTax(salary, 0.1) as tax_amount,    salary - CalculateTax(salary, 0.1) as net_salaryFROM employees;</code></pre><p><strong>字符串处理函数：</strong></p><pre><code class="language-sql">DELIMITER //CREATE FUNCTION FormatPhoneNumber(    p_phone VARCHAR(20))RETURNS VARCHAR(20)DETERMINISTICBEGIN    DECLARE v_clean_phone VARCHAR(20);        -- 移除所有非数字字符    SET v_clean_phone = REGEXP_REPLACE(p_phone, &#39;[^0-9]&#39;, &#39;&#39;);        -- 格式化手机号    IF LENGTH(v_clean_phone) = 11 THEN        RETURN CONCAT(            SUBSTR(v_clean_phone, 1, 3), &#39;-&#39;,            SUBSTR(v_clean_phone, 4, 4), &#39;-&#39;,            SUBSTR(v_clean_phone, 8, 4)        );    ELSE        RETURN p_phone;  -- 无法格式化，返回原值    END IF;END //DELIMITER ;</code></pre><p><strong>复杂业务逻辑函数：</strong></p><pre><code class="language-sql">DELIMITER //CREATE FUNCTION GetEmployeeLevel(    p_salary DECIMAL(10,2),    p_hire_date DATE,    p_performance_rating INT)RETURNS VARCHAR(20)DETERMINISTICBEGIN  
  DECLARE v_years_worked INT;    DECLARE v_level_score DECIMAL(5,2);        -- 计算工作年限    SET v_years_worked = TIMESTAMPDIFF(YEAR, p_hire_date, CURDATE());        -- 计算级别分数（薪资权重40%，年限权重30%，绩效权重30%）    SET v_level_score =         (p_salary / 10000) * 0.4 +          -- 每万元0.4分        (v_years_worked * 0.3) +            -- 每年0.3分        (p_performance_rating * 0.3);       -- 绩效评分权重        -- 根据分数确定级别    RETURN CASE         WHEN v_level_score &gt;= 8 THEN &#39;专家&#39;        WHEN v_level_score &gt;= 6 THEN &#39;高级&#39;        WHEN v_level_score &gt;= 4 THEN &#39;中级&#39;        ELSE &#39;初级&#39;    END;    END //DELIMITER ;</code></pre><h3 id="%E8%A7%A6%E5%8F%91%E5%99%A8%E8%AE%BE%E8%AE%A1%E4%B8%8E%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF" tabindex="-1">触发器设计与应用场景</h3><p><strong>审计触发器：</strong></p><pre><code class="language-sql">-- 创建审计表CREATE TABLE employee_audit (    audit_id INT AUTO_INCREMENT PRIMARY KEY,    action_type ENUM(&#39;INSERT&#39;, &#39;UPDATE&#39;, &#39;DELETE&#39;),    emp_id INT,    old_data JSON,    new_data JSON,    changed_by VARCHAR(100),    changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);-- 员工表更新触发器DELIMITER //CREATE TRIGGER before_employee_updateBEFORE UPDATE ON employeesFOR EACH ROWBEGIN    DECLARE v_changes JSON DEFAULT JSON_OBJECT();        -- 检查哪些字段被修改了    IF OLD.emp_name != NEW.emp_name THEN        SET v_changes = JSON_SET(v_changes, &#39;$.emp_name&#39;, JSON_OBJECT(            &#39;old&#39;, OLD.emp_name,            &#39;new&#39;, NEW.emp_name        ));    END IF;        IF OLD.salary != NEW.salary THEN        SET v_changes = JSON_SET(v_changes, &#39;$.salary&#39;, JSON_OBJECT(            &#39;old&#39;, OLD.salary,            &#39;new&#39;, NEW.salary        ));    END IF;        IF OLD.dept_id != NEW.dept_id THEN        SET v_changes = JSON_SET(v_changes, &#39;$.dept_id&#39;, JSON_OBJECT(            &#39;old&#39;, OLD.dept_id,            &#39;new&#39;, NEW.dept_id        ));    END IF;        -- 如果有变化，记录审计日志    IF JSON_LENGTH(v_changes) &gt; 0 
THEN        INSERT INTO employee_audit (action_type, emp_id, old_data, new_data, changed_by)        VALUES (            &#39;UPDATE&#39;,            OLD.emp_id,            JSON_OBJECT(                &#39;emp_name&#39;, OLD.emp_name,                &#39;salary&#39;, OLD.salary,                &#39;dept_id&#39;, OLD.dept_id            ),            v_changes,            USER()        );    END IF;    END //DELIMITER ;</code></pre><p><strong>数据一致性触发器：</strong></p><pre><code class="language-sql">-- 部门预算检查触发器DELIMITER //CREATE TRIGGER before_department_updateBEFORE UPDATE ON departmentsFOR EACH ROWBEGIN    DECLARE v_total_salary DECIMAL(12,2);        -- 计算部门总薪资    SELECT COALESCE(SUM(salary), 0)    INTO v_total_salary    FROM employees    WHERE dept_id = NEW.dept_id      AND status = &#39;active&#39;;        -- 检查预算是否足够    IF NEW.budget &lt; v_total_salary THEN        SIGNAL SQLSTATE &#39;45000&#39;         SET MESSAGE_TEXT = &#39;部门预算不能低于员工总薪资&#39;;    END IF;    END //DELIMITER ;</code></pre><p><strong>派生数据触发器：</strong></p><pre><code class="language-sql">-- 维护部门统计信息的触发器-- 注意：MySQL不支持一个触发器同时绑定多个事件（AFTER INSERT OR UPDATE OR DELETE是Oracle语法），-- 需要为INSERT、UPDATE、DELETE分别创建触发器；被调用的过程不能修改employees表本身DELIMITER //CREATE TRIGGER after_employee_insertAFTER INSERT ON employeesFOR EACH ROWBEGIN    CALL UpdateDepartmentStats(NEW.dept_id);END //CREATE TRIGGER after_employee_updateAFTER UPDATE ON employeesFOR EACH ROWBEGIN    -- 如果部门变更，两个部门都受影响    IF OLD.dept_id != NEW.dept_id THEN        CALL UpdateDepartmentStats(OLD.dept_id);    END IF;    CALL UpdateDepartmentStats(NEW.dept_id);END //CREATE TRIGGER after_employee_deleteAFTER DELETE ON employeesFOR EACH ROWBEGIN    CALL UpdateDepartmentStats(OLD.dept_id);END //DELIMITER ;</code></pre><h3 id="%E4%BA%8B%E4%BB%B6%E8%B0%83%E5%BA%A6%E5%99%A8%E5%AE%9E%E7%8E%B0%E5%AE%9A%E6%97%B6%E4%BB%BB%E5%8A%A1" tabindex="-1">事件调度器实现定时任务</h3><p><strong>基础事件调度：</strong></p><pre><code class="language-sql">-- 启用事件调度器SET GLOBAL event_scheduler = ON;-- 
创建每日统计事件DELIMITER //CREATE EVENT daily_department_statsON SCHEDULE EVERY 1 DAYSTARTS &#39;2023-01-01 02:00:00&#39;COMMENT &#39;每日部门统计&#39;DOBEGIN    -- 防止事件重叠执行    DECLARE EXIT HANDLER FOR SQLEXCEPTION    BEGIN        INSERT INTO event_errors (event_name, error_message, occurred_at)        VALUES (&#39;daily_department_stats&#39;, &#39;执行失败&#39;, NOW());    END;        -- 更新部门统计表    REPLACE INTO department_daily_stats (stat_date, dept_id, employee_count, total_salary, avg_salary)    SELECT         CURDATE() as stat_date,        dept_id,        COUNT(*) as employee_count,        SUM(salary) as total_salary,        AVG(salary) as avg_salary    FROM employees    WHERE status = &#39;active&#39;    GROUP BY dept_id;        -- 记录执行日志    INSERT INTO event_logs (event_name, executed_at, records_affected)    VALUES (&#39;daily_department_stats&#39;, NOW(), ROW_COUNT());    END //DELIMITER ;</code></pre><p><strong>复杂定时任务：</strong></p><pre><code class="language-sql">DELIMITER //CREATE EVENT monthly_salary_reportON SCHEDULE     EVERY 1 MONTH    STARTS TIMESTAMP(DATE_FORMAT(NOW() + INTERVAL 1 MONTH, &#39;%Y-%m-01 03:00:00&#39;))COMMENT &#39;月度薪资报告&#39;DOBEGIN    DECLARE v_report_month DATE;    DECLARE v_previous_month DATE;        -- 设置报告月份（上个月）    SET v_report_month = DATE_FORMAT(NOW() - INTERVAL 1 MONTH, &#39;%Y-%m-01&#39;);    SET v_previous_month = v_report_month - INTERVAL 1 MONTH;        -- 创建月度薪资报告    INSERT INTO monthly_salary_reports (report_month, dept_id, employee_count, total_salary, avg_salary, salary_growth)    SELECT         v_report_month as report_month,        dept_id,        COUNT(*) as employee_count,        SUM(salary) as total_salary,        AVG(salary) as avg_salary,        -- 计算薪资增长率        (AVG(salary) - COALESCE(            (SELECT avg_salary              FROM monthly_salary_reports              WHERE report_month = v_previous_month                AND dept_id = e.dept_id),            AVG(salary)        )) / COALESCE(            (SELECT avg_salary       
       FROM monthly_salary_reports              WHERE report_month = v_previous_month                AND dept_id = e.dept_id),            AVG(salary)        ) * 100 as salary_growth_percent    FROM employees e    WHERE status = &#39;active&#39;    GROUP BY dept_id;        -- 生成高管报告    INSERT INTO executive_reports (report_month, report_type, report_data)    SELECT         v_report_month,        &#39;salary_analysis&#39;,        JSON_OBJECT(            &#39;total_employees&#39;, (SELECT COUNT(*) FROM employees WHERE status = &#39;active&#39;),            &#39;total_payroll&#39;, (SELECT SUM(salary) FROM employees WHERE status = &#39;active&#39;),            &#39;department_breakdown&#39;, (                SELECT JSON_ARRAYAGG(                    JSON_OBJECT(                        &#39;dept_id&#39;, dept_id,                        &#39;dept_name&#39;, dept_name,                        &#39;employee_count&#39;, employee_count,                        &#39;avg_salary&#39;, avg_salary                    )                )                FROM monthly_salary_reports                WHERE report_month = v_report_month            )        )    FROM dual;    END //DELIMITER ;</code></pre><p><strong>事件管理：</strong></p><pre><code class="language-sql">-- 查看事件状态SHOW EVENTS;-- 查看事件定义SHOW CREATE EVENT monthly_salary_report;-- 修改事件ALTER EVENT monthly_salary_reportON SCHEDULE EVERY 1 MONTHSTARTS CURRENT_TIMESTAMP + INTERVAL 1 DAYENABLE;-- 暂停事件ALTER EVENT monthly_salary_report DISABLE;-- 删除事件DROP EVENT IF EXISTS monthly_salary_report;-- 事件监控SELECT     event_schema as database_name,    event_name,    definer,    time_zone,    event_definition as sql_code,    execute_at,    interval_value,    interval_field,    created,    last_altered,    statusFROM information_schema.eventsWHERE event_schema = &#39;company&#39;;</code></pre><h2 id="3.-%E4%BA%8B%E5%8A%A1%E4%B8%8E%E5%B9%B6%E5%8F%91%E6%8E%A7%E5%88%B6" tabindex="-1">3. 
事务与并发控制</h2><h3 id="acid%E7%89%B9%E6%80%A7%E6%B7%B1%E5%BA%A6%E7%90%86%E8%A7%A3" tabindex="-1">ACID特性深度理解</h3><p><strong>原子性（Atomicity）实现：</strong></p><pre><code class="language-sql">-- 银行转账事务 - 原子性示例-- 注意：IF ... END IF 与 SIGNAL 只能在存储过程等存储程序内使用，以下流程为示意，实际应封装为存储过程START TRANSACTION;-- 检查账户余额SELECT balance INTO @current_balance FROM accounts WHERE account_id = 123 FOR UPDATE;IF @current_balance &lt; 1000 THEN    -- 余额不足，回滚事务    ROLLBACK;    SIGNAL SQLSTATE &#39;45000&#39; SET MESSAGE_TEXT = &#39;余额不足&#39;;END IF;-- 扣款UPDATE accounts SET balance = balance - 1000 WHERE account_id = 123;-- 存款UPDATE accounts SET balance = balance + 1000 WHERE account_id = 456;-- 记录交易INSERT INTO transactions (from_account, to_account, amount, transaction_time)VALUES (123, 456, 1000, NOW());COMMIT;</code></pre><p><strong>一致性（Consistency）保证：</strong></p><pre><code class="language-sql">-- 使用约束保证一致性CREATE TABLE orders (    order_id INT AUTO_INCREMENT PRIMARY KEY,    customer_id INT NOT NULL,    order_amount DECIMAL(10,2) NOT NULL CHECK (order_amount &gt; 0),    order_date DATE NOT NULL,    status ENUM(&#39;pending&#39;, &#39;confirmed&#39;, &#39;shipped&#39;, &#39;delivered&#39;) DEFAULT &#39;pending&#39;,        -- 外键约束    FOREIGN KEY (customer_id) REFERENCES customers(customer_id),        -- 检查约束（MySQL 8.0.16+）    CONSTRAINT chk_order_date CHECK (order_date &gt;= &#39;2020-01-01&#39;));-- 事务中的一致性检查（同样需在存储程序内执行）START TRANSACTION;-- 业务逻辑检查SELECT COUNT(*) INTO @active_productsFROM products WHERE product_id IN (SELECT product_id FROM order_items WHERE order_id = 1001)  AND status = &#39;active&#39;;IF @active_products = 0 THEN    ROLLBACK;    SIGNAL SQLSTATE &#39;45000&#39; SET MESSAGE_TEXT = &#39;订单中没有有效商品&#39;;END IF;-- 继续其他操作...COMMIT;</code></pre><h3 id="%E4%BA%8B%E5%8A%A1%E9%9A%94%E7%A6%BB%E7%BA%A7%E5%88%AB%E4%B8%8E%E5%AE%9E%E7%8E%B0%E5%8E%9F%E7%90%86" tabindex="-1">事务隔离级别与实现原理</h3><p><strong>隔离级别对比：</strong></p><pre><code class="language-sql">-- 查看当前隔离级别SELECT @@transaction_isolation;-- 设置会话隔离级别SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;-- 不同隔离级别的现象演示-- 
1. 读未提交（READ UNCOMMITTED） - 脏读-- 事务ASET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;START TRANSACTION;UPDATE accounts SET balance = 2000 WHERE account_id = 1;  -- 未提交-- 事务B（在另一个连接中）SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;START TRANSACTION;SELECT balance FROM accounts WHERE account_id = 1;  -- 可能读到2000（脏读）-- 2. 读已提交（READ COMMITTED） - 避免脏读，但不可重复读SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;-- 3. 可重复读（REPEATABLE READ） - MySQL默认级别SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;-- 4. 串行化（SERIALIZABLE） - 最高隔离级别SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;</code></pre><p><strong>隔离级别实战：</strong></p><pre><code class="language-sql">-- 可重复读级别下的幻读问题-- 事务ASTART TRANSACTION;SELECT COUNT(*) FROM employees WHERE salary &gt; 8000;  -- 假设返回10-- 事务B（在另一个连接中插入新员工）START TRANSACTION;INSERT INTO employees (emp_name, salary, dept_id) VALUES (&#39;新员工&#39;, 9000, 1);COMMIT;-- 事务A再次查询SELECT COUNT(*) FROM employees WHERE salary &gt; 8000;  -- 在REPEATABLE READ中仍然返回10-- 串行化级别解决幻读SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;START TRANSACTION;SELECT COUNT(*) FROM employees WHERE salary &gt; 8000;  -- 加锁，阻止其他事务插入-- 事务B的插入会被阻塞，直到事务A提交</code></pre><h3 id="mvcc%E5%A4%9A%E7%89%88%E6%9C%AC%E5%B9%B6%E5%8F%91%E6%8E%A7%E5%88%B6%E6%9C%BA%E5%88%B6" tabindex="-1">MVCC多版本并发控制机制</h3><p><strong>MVCC原理演示：</strong></p><pre><code class="language-sql">-- 查看InnoDB事务信息SELECT * FROM information_schema.INNODB_TRX;-- MVCC示例-- 事务ASTART TRANSACTION;SELECT * FROM employees WHERE emp_id = 1;  -- 读取当前版本-- 事务B修改同一条记录START TRANSACTION;UPDATE employees SET salary = salary + 1000 WHERE emp_id = 1;COMMIT;-- 事务A再次读取（REPEATABLE READ级别下看到的是旧版本）SELECT * FROM employees WHERE emp_id = 1;  -- 薪资未变化-- 提交事务A后看到新版本COMMIT;SELECT * FROM employees WHERE emp_id = 1;  -- 看到更新后的薪资</code></pre><p><strong>MVCC与Undo Log：</strong></p><pre><code class="language-sql">-- Undo Log维护多版本数据-- 当执行UPDATE时：-- 1. 将旧数据复制到Undo Log-- 2. 修改当前数据-- 3. 
更新DB_ROLL_PTR指向Undo Log中的旧版本-- 长事务对Undo Log的影响SELECT     t.trx_id,    t.trx_started,    TIMESTAMPDIFF(SECOND, t.trx_started, NOW()) as duration_seconds,    t.trx_state,    t.trx_operation_stateFROM information_schema.INNODB_TRX tORDER BY t.trx_started;-- 监控Undo Log使用SHOW ENGINE INNODB STATUS;</code></pre><h3 id="%E6%AD%BB%E9%94%81%E6%A3%80%E6%B5%8B%E4%B8%8E%E9%81%BF%E5%85%8D%E7%AD%96%E7%95%A5" tabindex="-1">死锁检测与避免策略</h3><p><strong>死锁场景分析：</strong></p><pre><code class="language-sql">-- 死锁示例-- 事务1START TRANSACTION;UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;  -- 锁住账户1-- 事务2START TRANSACTION;UPDATE accounts SET balance = balance - 200 WHERE account_id = 2;  -- 锁住账户2-- 事务1尝试锁住账户2UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;  -- 等待事务2释放锁-- 事务2尝试锁住账户1UPDATE accounts SET balance = balance + 200 WHERE account_id = 1;  -- 等待事务1释放锁，死锁！-- InnoDB会自动检测到死锁并回滚其中一个事务</code></pre><p><strong>死锁避免策略：</strong></p><pre><code class="language-sql">-- 1. 按固定顺序访问资源-- 好的做法：总是先访问ID小的账户START TRANSACTION;UPDATE accounts SET balance = balance - 100 WHERE account_id = LEAST(1, 2);UPDATE accounts SET balance = balance + 100 WHERE account_id = GREATEST(1, 2);COMMIT;-- 2. 使用锁超时SET SESSION innodb_lock_wait_timeout = 10;  -- 设置锁等待超时为10秒-- 3. 使用NOWAIT和SKIP LOCKED（MySQL 8.0+）START TRANSACTION;SELECT * FROM accounts WHERE account_id = 1 FOR UPDATE NOWAIT;  -- 如果锁被占用立即报错SELECT * FROM accounts WHERE account_id = 1 FOR UPDATE SKIP LOCKED;  -- 跳过被锁定的行-- 4. 
减少事务大小和时间START TRANSACTION;-- 尽快完成事务，减少锁持有时间UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;COMMIT;  -- 立即提交-- 死锁信息分析SHOW ENGINE INNODB STATUS;  -- 查看最近的死锁信息</code></pre><h3 id="%E5%88%86%E5%B8%83%E5%BC%8F%E4%BA%8B%E5%8A%A1%EF%BC%88xa%E4%BA%8B%E5%8A%A1%EF%BC%89%E5%AE%9E%E6%88%98" tabindex="-1">分布式事务（XA事务）实战</h3><p><strong>XA事务基础：</strong></p><pre><code class="language-sql">-- XA事务示例（跨多个数据库）-- 第一阶段：准备阶段XA START &#39;xid1&#39;;  -- 开始XA事务UPDATE accounts SET balance = balance - 1000 WHERE account_id = 123;UPDATE accounts SET balance = balance + 1000 WHERE account_id = 456;XA END &#39;xid1&#39;;XA PREPARE &#39;xid1&#39;;  -- 准备提交-- 第二阶段：提交阶段XA COMMIT &#39;xid1&#39;;   -- 提交事务-- 或者回滚-- XA ROLLBACK &#39;xid1&#39;;-- 恢复中断的XA事务XA RECOVER;  -- 查看PREPARED状态的XA事务-- 对于PREPARED状态的事务，可以决定提交或回滚XA COMMIT &#39;xid_recovered&#39;;-- 或者 XA ROLLBACK &#39;xid_recovered&#39;;</code></pre><p><strong>分布式事务管理：</strong></p><pre><code class="language-sql">-- 监控XA事务SELECT * FROM performance_schema.events_transactions_currentWHERE STATE = &#39;PREPARED&#39;;-- XA事务错误处理DELIMITER //CREATE PROCEDURE DistributedTransfer(    IN p_from_account INT,    IN p_to_account INT,     IN p_amount DECIMAL(10,2))BEGIN    DECLARE EXIT HANDLER FOR SQLEXCEPTION    BEGIN        -- 回滚XA事务        XA END &#39;transfer_xid&#39;;        XA ROLLBACK &#39;transfer_xid&#39;;        RESIGNAL;    END;        -- 开始XA事务    XA START &#39;transfer_xid&#39;;        -- 业务操作    UPDATE accounts SET balance = balance - p_amount     WHERE account_id = p_from_account;        UPDATE accounts SET balance = balance + p_amount     WHERE account_id = p_to_account;        -- 结束并准备    XA END &#39;transfer_xid&#39;;    XA PREPARE &#39;transfer_xid&#39;;        -- 检查所有参与者是否准备成功    -- 这里可以添加其他数据库的检查逻辑        -- 提交事务    XA COMMIT &#39;transfer_xid&#39;;    END //DELIMITER ;</code></pre><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>通过本篇的深入学习，我们掌握了MySQL 
SQL编程的核心高级特性：</p><ol><li><strong>复杂查询能力</strong>：子查询、连接查询、窗口函数、CTE递归查询</li><li><strong>存储程序开发</strong>：存储过程、函数、触发器、事件调度器</li><li><strong>事务管理</strong>：ACID特性、隔离级别、MVCC机制、死锁处理</li><li><strong>高级数据类型</strong>：JSON处理、空间数据查询</li><li><strong>分布式事务</strong>：XA事务管理和恢复</li></ol><p><strong>关键收获：</strong></p><ul><li>窗口函数可以优雅地解决复杂的分析需求</li><li>存储过程和函数封装了业务逻辑，提高代码复用性</li><li>合理使用事务隔离级别可以平衡性能和数据一致性</li><li>MVCC机制是MySQL高并发的基础</li><li>分布式事务保证了跨数据库操作的一致性</li></ul><p>这些高级特性使得MySQL能够处理复杂的企业级应用场景，为构建高性能、高可用的系统提供了坚实基础。</p><p><strong>动手练习：</strong></p><ol><li>使用窗口函数分析你业务数据的排名和趋势</li><li>编写存储过程实现复杂的业务逻辑</li><li>设计触发器实现数据变更的审计跟踪</li><li>配置事件调度器实现定时数据维护任务</li><li>测试不同事务隔离级别对并发性能的影响</li></ol><p>欢迎在评论区分享你的SQL编程经验和遇到的问题！</p>]]>
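补充示例：上文“死锁避免策略”中“按固定顺序访问资源”的思路，可以用下面这个最小可运行的 Python 示意来验证更新逻辑。这里用标准库 sqlite3 模拟，锁粒度与 InnoDB 行锁不同，transfer 函数与 accounts 表结构都是假设的演示代码，并非原文方案本身：

```python
import sqlite3

def transfer(conn, from_id, to_id, amount):
    # 无论业务上的转账方向如何, 都按账户 ID 从小到大的固定顺序更新,
    # 两个并发转账就不会以相反顺序加锁而互相等待形成死锁
    first, second = sorted([from_id, to_id])
    with conn:  # with 块等价于 BEGIN ... COMMIT, 抛异常时自动回滚
        for acct in (first, second):
            delta = -amount if acct == from_id else amount
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE account_id = ?",
                (delta, acct),
            )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 1000), (2, 500)])
transfer(conn, 2, 1, 100)  # 从账户 2 转 100 到账户 1, 但仍先更新 ID 较小的账户 1
print(conn.execute("SELECT balance FROM accounts ORDER BY account_id").fetchall())
# 输出: [(1100,), (400,)]
```

把加锁顺序固定在资源编号上而不是业务方向上，是避免交叉加锁死锁的通用做法；在 MySQL 中配合 SELECT ... FOR UPDATE 使用，效果相同。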
                    </description>
                    <pubDate>Tue, 13 May 2025 07:58:25 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[MySql入门：性能优化与调优]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2951</link>
                    <description>
                            <![CDATA[<h1 id="mysql%E6%80%A7%E8%83%BD%E4%BC%98%E5%8C%96%E4%B8%8E%E8%B0%83%E4%BC%98" tabindex="-1">MySQL性能优化与调优</h1><blockquote><p>数据库性能是系统整体性能的基石。一个优化良好的MySQL数据库可以轻松应对高并发场景，而配置不当的数据库则可能成为整个系统的瓶颈。今天，我们将深入探讨MySQL性能优化的各个方面，从查询优化到服务器配置，帮助你构建高性能的数据库系统。</p></blockquote><h2 id="1.-%E6%9F%A5%E8%AF%A2%E6%80%A7%E8%83%BD%E4%BC%98%E5%8C%96" tabindex="-1">1. 查询性能优化</h2><h3 id="explain%E6%89%A7%E8%A1%8C%E8%AE%A1%E5%88%92%E6%B7%B1%E5%BA%A6%E8%A7%A3%E8%AF%BB" tabindex="-1">EXPLAIN执行计划深度解读</h3><p><strong>EXPLAIN输出详解：</strong></p><pre><code class="language-sql">-- 基本EXPLAIN使用EXPLAIN SELECT e.emp_name, d.dept_name, p.project_nameFROM employees eJOIN departments d ON e.dept_id = d.dept_idJOIN projects p ON e.emp_id = p.manager_idWHERE e.salary &gt; 5000  AND d.location = &#39;北京&#39;ORDER BY e.hire_date DESCLIMIT 100;-- EXPLAIN输出字段解析CREATE TABLE explain_analysis (    id INT PRIMARY KEY,    select_type VARCHAR(50) COMMENT &#39;查询类型&#39;,    table_name VARCHAR(64) COMMENT &#39;表名&#39;,    partitions VARCHAR(255) COMMENT &#39;匹配的分区&#39;,    type VARCHAR(30) COMMENT &#39;连接类型&#39;,    possible_keys VARCHAR(255) COMMENT &#39;可能使用的索引&#39;,    key VARCHAR(255) COMMENT &#39;实际使用的索引&#39;,    key_len INT COMMENT &#39;使用的索引长度&#39;,    ref VARCHAR(255) COMMENT &#39;与索引比较的列&#39;,    rows INT COMMENT &#39;估计要检查的行数&#39;,    filtered DECIMAL(5,2) COMMENT &#39;按条件过滤的行百分比&#39;,    extra VARCHAR(255) COMMENT &#39;额外信息&#39;);</code></pre><p><strong>执行计划类型深度分析：</strong></p><pre><code class="language-sql">-- 不同类型的执行计划示例-- 1. system &amp; const（最优）EXPLAIN SELECT * FROM departments WHERE dept_id = 1;-- type: const，通过主键或唯一索引查找-- 2. eq_ref（优秀）EXPLAIN SELECT * FROM employees e JOIN departments d ON e.dept_id = d.dept_id;-- type: eq_ref，对于前表的每一行，后表只有一行匹配-- 3. ref（良好）EXPLAIN SELECT * FROM employees WHERE dept_id = 1;-- type: ref，使用非唯一索引查找-- 4. range（良好）EXPLAIN SELECT * FROM employees WHERE hire_date BETWEEN &#39;2020-01-01&#39; AND &#39;2023-01-01&#39;;-- type: range，索引范围扫描-- 5. 
index（中等）EXPLAIN SELECT dept_id FROM employees;-- type: index，全索引扫描-- 6. ALL（最差）EXPLAIN SELECT * FROM employees WHERE salary &gt; 5000;-- type: ALL，全表扫描（如果没有合适的索引）</code></pre><p><strong>EXPLAIN FORMAT=JSON深度分析：</strong></p><pre><code class="language-sql">-- 使用JSON格式获取更详细的执行计划EXPLAIN FORMAT=JSONSELECT     e.emp_name,    d.dept_name,    COUNT(p.project_id) as project_countFROM employees eJOIN departments d ON e.dept_id = d.dept_idLEFT JOIN projects p ON e.emp_id = p.manager_idWHERE e.hire_date &gt;= &#39;2020-01-01&#39;  AND d.budget &gt; 100000GROUP BY e.emp_id, e.emp_name, d.dept_nameHAVING project_count &gt;= 2ORDER BY e.salary DESC;-- 解析JSON执行计划的关键信息-- 注意：EXPLAIN 的输出不能当作子查询表达式使用，-- 应在客户端执行下面的语句，再从返回的 JSON 中读取 $.query_block.cost_info.query_cost 字段EXPLAIN FORMAT=JSON SELECT * FROM employees WHERE dept_id = 1;</code></pre><h3 id="%E6%85%A2%E6%9F%A5%E8%AF%A2%E6%97%A5%E5%BF%97%E5%88%86%E6%9E%90%E4%B8%8E%E4%BC%98%E5%8C%96" tabindex="-1">慢查询日志分析与优化</h3><p><strong>慢查询配置：</strong></p><pre><code class="language-sql">-- 查看慢查询配置SHOW VARIABLES LIKE &#39;slow_query_log%&#39;;SHOW VARIABLES LIKE &#39;long_query_time&#39;;SHOW VARIABLES LIKE &#39;min_examined_row_limit&#39;;SHOW VARIABLES LIKE &#39;log_queries_not_using_indexes&#39;;-- 动态配置慢查询（无需重启）SET GLOBAL slow_query_log = 1;SET GLOBAL long_query_time = 1.0;  -- 1秒SET GLOBAL min_examined_row_limit = 100;SET GLOBAL log_queries_not_using_indexes = 1;-- 永久配置（在my.cnf中）/*[mysqld]slow_query_log = 1slow_query_log_file = /var/log/mysql/slow.loglong_query_time = 1.0log_queries_not_using_indexes = 1min_examined_row_limit = 100log_slow_admin_statements = 1*/</code></pre><p><strong>慢查询日志分析：</strong></p><pre><code class="language-sql">-- 使用pt-query-digest分析慢查询日志（外部工具）-- pt-query-digest /var/log/mysql/slow.log &gt; slow_report.txt-- 使用MySQL自身分析慢查询CREATE TABLE slow_log_analysis (    query_time DECIMAL(10,6),    lock_time DECIMAL(10,6),    rows_sent INT,    rows_examined INT,    db VARCHAR(512),    query TEXT,    timestamp 
TIMESTAMP);-- 将慢查询日志导入表中分析-- 注意：慢查询日志是多行文本格式，并非制表符分隔，直接 LOAD DATA 前需先做预处理；-- 也可以 SET GLOBAL log_output = &#39;TABLE&#39;; 让慢查询直接写入 mysql.slow_log 表后查询LOAD DATA INFILE &#39;/var/log/mysql/slow.log&#39;INTO TABLE slow_log_analysisFIELDS TERMINATED BY &#39;\t&#39;LINES TERMINATED BY &#39;\n&#39;;-- 分析慢查询模式SELECT     LEFT(query, 100) as query_sample,    COUNT(*) as query_count,    AVG(query_time) as avg_time,    AVG(rows_examined) as avg_rows_examined,    AVG(rows_sent) as avg_rows_sentFROM slow_log_analysisGROUP BY LEFT(query, 100)ORDER BY avg_time DESCLIMIT 10;</code></pre><p><strong>常见慢查询模式及优化：</strong></p><pre><code class="language-sql">-- 1. 全表扫描优化-- 慢查询SELECT * FROM employees WHERE YEAR(hire_date) = 2023;-- 优化后SELECT * FROM employees WHERE hire_date BETWEEN &#39;2023-01-01&#39; AND &#39;2023-12-31&#39;;-- 添加索引：ALTER TABLE employees ADD INDEX idx_hire_date (hire_date);-- 2. OR条件优化-- 慢查询SELECT * FROM employees WHERE dept_id = 1 OR dept_id = 2 OR salary &gt; 10000;-- 优化后SELECT * FROM employees WHERE dept_id IN (1, 2)UNIONSELECT * FROM employees WHERE salary &gt; 10000;-- 3. 分页优化-- 慢查询（偏移量大时）SELECT * FROM employees ORDER BY emp_id LIMIT 10000, 20;-- 优化后SELECT * FROM employees WHERE emp_id &gt; 10000 ORDER BY emp_id LIMIT 20;-- 4. LIKE优化-- 慢查询SELECT * FROM employees WHERE emp_name LIKE &#39;%张%&#39;;-- 优化方案-- 使用全文索引或添加反转索引ALTER TABLE employees ADD FULLTEXT idx_name_ft (emp_name);SELECT * FROM employees WHERE MATCH(emp_name) AGAINST(&#39;张&#39; IN BOOLEAN MODE);</code></pre><h3 id="%E6%9F%A5%E8%AF%A2%E9%87%8D%E5%86%99%E6%8A%80%E5%B7%A7%E4%B8%8E%E6%A8%A1%E5%BC%8F" tabindex="-1">查询重写技巧与模式</h3><p><strong>查询重写模式：</strong></p><pre><code class="language-sql">-- 1. 使用EXISTS替代IN（当子查询数据量大时）-- 原始查询SELECT * FROM employees WHERE dept_id IN (SELECT dept_id FROM departments WHERE budget &gt; 100000);-- 优化后SELECT e.* FROM employees eWHERE EXISTS (    SELECT 1 FROM departments d     WHERE d.dept_id = e.dept_id AND d.budget &gt; 100000);-- 2. 
使用JOIN替代子查询-- 原始查询SELECT emp_name,        (SELECT dept_name FROM departments WHERE dept_id = employees.dept_id) as dept_nameFROM employees;-- 优化后SELECT e.emp_name, d.dept_nameFROM employees eLEFT JOIN departments d ON e.dept_id = d.dept_id;-- 3. 避免SELECT *-- 原始查询SELECT * FROM employees WHERE dept_id = 1;-- 优化后SELECT emp_id, emp_name, email, salary FROM employees WHERE dept_id = 1;-- 4. 使用批处理替代循环-- 不好的做法（在应用程序中循环）-- FOR each id IN ids: --   UPDATE employees SET status = 1 WHERE emp_id = id-- 好的做法UPDATE employees SET status = 1 WHERE emp_id IN (1, 2, 3, 4, 5);</code></pre><p><strong>高级查询重写：</strong></p><pre><code class="language-sql">-- 5. 使用派生表优化复杂查询-- 原始复杂查询SELECT     e.emp_name,    d.dept_name,    (SELECT COUNT(*) FROM projects p WHERE p.manager_id = e.emp_id) as project_countFROM employees eJOIN departments d ON e.dept_id = d.dept_idWHERE e.salary &gt; (SELECT AVG(salary) FROM employees WHERE dept_id = e.dept_id);-- 优化后使用派生表WITH dept_stats AS (    SELECT         dept_id,        AVG(salary) as avg_salary    FROM employees    GROUP BY dept_id),emp_projects AS (    SELECT         manager_id,        COUNT(*) as project_count    FROM projects    GROUP BY manager_id)SELECT     e.emp_name,    d.dept_name,    COALESCE(ep.project_count, 0) as project_countFROM employees eJOIN departments d ON e.dept_id = d.dept_idJOIN dept_stats ds ON e.dept_id = ds.dept_idLEFT JOIN emp_projects ep ON e.emp_id = ep.manager_idWHERE e.salary &gt; ds.avg_salary;-- 6. 
条件顺序优化-- 原始查询（选择性差的条件在前）SELECT * FROM employees WHERE status = 1  -- 可能80%的数据status=1  AND hire_date &gt; &#39;2023-01-01&#39;  -- 只有5%的数据满足  AND dept_id = 2;  -- 只有10%的数据满足-- 优化后（选择性好的条件在前）SELECT * FROM employees WHERE dept_id = 2  -- 先过滤掉90%的数据  AND hire_date &gt; &#39;2023-01-01&#39;  -- 在剩下的10%中再过滤  AND status = 1;  -- 最后过滤</code></pre><h3 id="%E5%88%86%E9%A1%B5%E6%9F%A5%E8%AF%A2%E4%BC%98%E5%8C%96%E6%96%B9%E6%A1%88" tabindex="-1">分页查询优化方案</h3><p><strong>传统分页的问题：</strong></p><pre><code class="language-sql">-- 问题：偏移量大时性能差SELECT * FROM employees ORDER BY emp_id LIMIT 100000, 20;  -- 需要扫描100000+20行-- 使用索引覆盖优化SELECT emp_id, emp_name, email  -- 只选择需要的列FROM employees ORDER BY emp_id LIMIT 100000, 20;-- 添加覆盖索引ALTER TABLE employees ADD INDEX idx_cover (emp_id, emp_name, email);</code></pre><p><strong>高效分页方案：</strong></p><pre><code class="language-sql">-- 方案1：游标分页（推荐）-- 第一页SELECT * FROM employees ORDER BY emp_id LIMIT 20;-- 获取最后一条记录的emp_id: 比如是20-- 第二页SELECT * FROM employees WHERE emp_id &gt; 20  -- 使用游标ORDER BY emp_id LIMIT 20;-- 方案2：子查询物化为派生表-- 注意：MySQL（包括8.0）不允许在 IN 子查询中直接使用 LIMIT，需多包一层派生表SELECT * FROM employees WHERE emp_id IN (    SELECT emp_id FROM (        SELECT emp_id FROM employees         ORDER BY emp_id         LIMIT 100000, 20    ) t)ORDER BY emp_id;-- 方案3：延迟关联SELECT e.* FROM employees eJOIN (    SELECT emp_id     FROM employees     ORDER BY emp_id     LIMIT 100000, 20) tmp ON e.emp_id = tmp.emp_id;-- 方案4：业务分页（按时间范围）SELECT * FROM employees WHERE hire_date BETWEEN &#39;2023-01-01&#39; AND &#39;2023-12-31&#39;ORDER BY emp_id LIMIT 20;</code></pre><p><strong>复杂分页场景：</strong></p><pre><code class="language-sql">-- 多条件排序分页CREATE INDEX idx_dept_hire_salary ON employees (dept_id, hire_date, salary);-- 游标分页实现-- 第一页SELECT * FROM employees WHERE dept_id = 1ORDER BY hire_date DESC, salary DESC, emp_idLIMIT 20;-- 假设最后一条记录：hire_date=&#39;2023-05-20&#39;, salary=8000, emp_id=150-- 第二页SELECT * FROM employees WHERE dept_id = 1  AND (      hire_date &lt; &#39;2023-05-20&#39;       OR (hire_date = &#39;2023-05-20&#39; AND salary &lt; 8000)      OR (hire_date 
= &#39;2023-05-20&#39; AND salary = 8000 AND emp_id &gt; 150)  )ORDER BY hire_date DESC, salary DESC, emp_idLIMIT 20;</code></pre><h3 id="%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%87%8F%E6%9F%A5%E8%AF%A2%E5%A4%84%E7%90%86%E7%AD%96%E7%95%A5" tabindex="-1">大数据量查询处理策略</h3><p><strong>分批处理策略：</strong></p><pre><code class="language-sql">-- 大量数据更新分批处理DELIMITER //CREATE PROCEDURE BatchUpdateEmployees()BEGIN    DECLARE done INT DEFAULT 0;    DECLARE batch_size INT DEFAULT 1000;    DECLARE current_id INT DEFAULT 0;    DECLARE max_id INT;        -- 获取最大ID    SELECT MAX(emp_id) INTO max_id FROM employees;        WHILE current_id &lt; max_id DO        -- 分批更新        UPDATE employees         SET last_updated = NOW()        WHERE emp_id &gt; current_id           AND emp_id &lt;= current_id + batch_size;                -- 记录处理进度        INSERT INTO batch_process_log (process_name, processed_id, processed_at)        VALUES (&#39;employee_update&#39;, current_id + batch_size, NOW());                SET current_id = current_id + batch_size;                -- 短暂休眠，减少对系统的影响        DO SLEEP(0.1);    END WHILE;    END //DELIMITER ;</code></pre><p><strong>数据归档策略：</strong></p><pre><code class="language-sql">-- 历史数据归档CREATE TABLE employees_archive LIKE employees;-- 归档过程DELIMITER //CREATE PROCEDURE ArchiveOldEmployees(IN cutoff_date DATE)BEGIN    DECLARE EXIT HANDLER FOR SQLEXCEPTION    BEGIN        ROLLBACK;        RESIGNAL;    END;        START TRANSACTION;        -- 归档数据    INSERT INTO employees_archive     SELECT * FROM employees     WHERE hire_date &lt; cutoff_date;        -- 删除已归档数据    DELETE FROM employees     WHERE hire_date &lt; cutoff_date;        COMMIT;    END //DELIMITER ;</code></pre><h2 id="2.-%E7%B4%A2%E5%BC%95%E4%BC%98%E5%8C%96%E5%AE%9E%E6%88%98" tabindex="-1">2. 
索引优化实战</h2><h3 id="%E7%B4%A2%E5%BC%95%E5%88%9B%E5%BB%BA%E7%AD%96%E7%95%A5%E4%B8%8E%E5%8E%9F%E5%88%99" tabindex="-1">索引创建策略与原则</h3><p><strong>索引选择策略：</strong></p><pre><code class="language-sql">-- 索引创建决策流程SELECT     TABLE_NAME,    COLUMN_NAME,    DATA_TYPE,    IS_NULLABLE,    COLUMN_DEFAULT,    CHARACTER_MAXIMUM_LENGTH,    NUMERIC_PRECISION,    NUMERIC_SCALEFROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = &#39;your_database&#39;  AND TABLE_NAME = &#39;your_table&#39;;-- 计算索引选择性SELECT     COUNT(DISTINCT dept_id) as distinct_values,    COUNT(*) as total_rows,    ROUND(COUNT(DISTINCT dept_id) / COUNT(*) * 100, 2) as selectivity_percentFROM employees;-- 选择性建议：-- &gt; 20%：适合创建索引-- 5%-20%：根据查询频率决定-- &lt; 5%：通常不适合单独创建索引</code></pre><p><strong>复合索引设计：</strong></p><pre><code class="language-sql">-- 好的复合索引设计CREATE TABLE orders (    order_id BIGINT PRIMARY KEY,    customer_id BIGINT NOT NULL,    order_date DATETIME NOT NULL,    status ENUM(&#39;pending&#39;, &#39;paid&#39;, &#39;shipped&#39;, &#39;delivered&#39;) NOT NULL,    total_amount DECIMAL(12,2) NOT NULL,    -- 其他字段...        -- 复合索引设计    INDEX idx_customer_date (customer_id, order_date),           -- 客户订单查询    INDEX idx_date_status (order_date, status),                  -- 按状态查询订单    INDEX idx_status_date (status, order_date),                  -- 状态+日期查询    INDEX idx_customer_status_date (customer_id, status, order_date) -- 覆盖多种查询);-- 索引使用分析EXPLAIN SELECT * FROM orders WHERE customer_id = 123   AND order_date &gt;= &#39;2023-01-01&#39;  AND status = &#39;shipped&#39;;-- 可能使用 idx_customer_date 或 idx_customer_status_date</code></pre><h3 id="%E7%B4%A2%E5%BC%95%E5%A4%B1%E6%95%88%E5%9C%BA%E6%99%AF%E5%88%86%E6%9E%90" tabindex="-1">索引失效场景分析</h3><p><strong>常见索引失效场景：</strong></p><pre><code class="language-sql">-- 1. 
对索引列使用函数-- 索引失效SELECT * FROM employees WHERE YEAR(hire_date) = 2023;SELECT * FROM employees WHERE UPPER(emp_name) = &#39;ZHANGSAN&#39;;-- 优化后SELECT * FROM employees WHERE hire_date BETWEEN &#39;2023-01-01&#39; AND &#39;2023-12-31&#39;;SELECT * FROM employees WHERE emp_name = &#39;zhangsan&#39;;  -- 应用层处理大小写-- 2. 隐式类型转换-- 索引失效（如果phone是varchar类型）SELECT * FROM users WHERE phone = 13800138000;-- 优化后SELECT * FROM users WHERE phone = &#39;13800138000&#39;;-- 3. OR条件使用不当-- 索引可能失效SELECT * FROM employees WHERE dept_id = 1 OR emp_name LIKE &#39;张%&#39;;-- 优化后SELECT * FROM employees WHERE dept_id = 1UNIONSELECT * FROM employees WHERE emp_name LIKE &#39;张%&#39;;-- 4. 使用NOT、!=、&lt;&gt; -- 索引通常失效SELECT * FROM employees WHERE dept_id != 1;-- 优化方案：考虑是否真的需要排除，或者使用范围查询SELECT * FROM employees WHERE dept_id &gt; 1 OR dept_id &lt; 1;-- 5. LIKE以通配符开头-- 索引失效SELECT * FROM employees WHERE emp_name LIKE &#39;%张%&#39;;-- 优化方案：使用全文索引或反转存储</code></pre><p><strong>复合索引失效场景：</strong></p><pre><code class="language-sql">-- 创建测试索引CREATE INDEX idx_dept_hire_salary ON employees (dept_id, hire_date, salary);-- 1. 不满足最左前缀原则-- 索引失效SELECT * FROM employees WHERE hire_date &gt; &#39;2023-01-01&#39;;-- 只能使用hire_date的索引，不能使用复合索引-- 2. 跳过中间列-- 部分使用索引（只用到dept_id）SELECT * FROM employees WHERE dept_id = 1 AND salary &gt; 5000;-- 3. 范围查询后的列无法使用索引-- 只使用dept_id和hire_date进行索引查找，salary使用索引过滤SELECT * FROM employees WHERE dept_id = 1   AND hire_date &gt; &#39;2023-01-01&#39;  AND salary &gt; 5000;-- 4. 
索引列顺序影响-- 好的顺序：等值查询列在前，范围查询列在后CREATE INDEX idx_good_order ON employees (dept_id, salary, hire_date);</code></pre><h3 id="%E5%89%8D%E7%BC%80%E7%B4%A2%E5%BC%95%E4%B8%8E%E5%87%BD%E6%95%B0%E7%B4%A2%E5%BC%95" tabindex="-1">前缀索引与函数索引</h3><p><strong>前缀索引：</strong></p><pre><code class="language-sql">-- 文本列的前缀索引-- 计算合适的前缀长度SELECT     COUNT(DISTINCT LEFT(emp_name, 5)) as distinct_5,    COUNT(DISTINCT LEFT(emp_name, 10)) as distinct_10,    COUNT(DISTINCT emp_name) as distinct_full,    COUNT(*) as total_rowsFROM employees;-- 创建前缀索引CREATE INDEX idx_emp_name_prefix ON employees (emp_name(10));-- 前缀索引的使用限制-- 不能用于ORDER BY和GROUP BY完整列-- 不能用于覆盖索引-- 查看索引统计信息ANALYZE TABLE employees;SHOW INDEX FROM employees;</code></pre><p><strong>函数索引（MySQL 8.0+）：</strong></p><pre><code class="language-sql">-- 创建函数索引CREATE TABLE products (    product_id INT PRIMARY KEY,    product_name VARCHAR(200),    product_data JSON,    created_at TIMESTAMP);-- 函数索引示例CREATE INDEX idx_product_name_upper ON products ((UPPER(product_name)));CREATE INDEX idx_product_price ON products ((CAST(JSON_EXTRACT(product_data, &#39;$.price&#39;) AS DECIMAL(10,2))));CREATE INDEX idx_created_date ON products ((DATE(created_at)));-- 使用函数索引查询SELECT * FROM products WHERE UPPER(product_name) = UPPER(&#39;iPhone 14&#39;);SELECT * FROM products WHERE CAST(JSON_EXTRACT(product_data, &#39;$.price&#39;) AS DECIMAL(10,2)) &gt; 1000;SELECT * FROM products WHERE DATE(created_at) = &#39;2023-01-01&#39;;</code></pre><h3 id="%E7%B4%A2%E5%BC%95%E7%BB%B4%E6%8A%A4%E4%B8%8E%E9%87%8D%E5%BB%BA%E7%AD%96%E7%95%A5" tabindex="-1">索引维护与重建策略</h3><p><strong>索引监控：</strong></p><pre><code class="language-sql">-- 监控索引使用情况SELECT     OBJECT_SCHEMA,    OBJECT_NAME,    INDEX_NAME,    COUNT_FETCH,    COUNT_INSERT,    COUNT_UPDATE,    COUNT_DELETEFROM performance_schema.table_io_waits_summary_by_index_usageWHERE OBJECT_SCHEMA = &#39;your_database&#39;ORDER BY COUNT_FETCH DESC;-- 查找未使用的索引SELECT     OBJECT_SCHEMA,    OBJECT_NAME,    INDEX_NAMEFROM 
performance_schema.table_io_waits_summary_by_index_usageWHERE INDEX_NAME IS NOT NULL  AND COUNT_FETCH = 0  AND COUNT_INSERT = 0  AND COUNT_UPDATE = 0  AND COUNT_DELETE = 0;</code></pre><p><strong>索引维护操作：</strong></p><pre><code class="language-sql">-- 索引重建ALTER TABLE employees ENGINE=InnoDB;  -- 重建表，包括所有索引ALTER TABLE employees DROP INDEX idx_old, ADD INDEX idx_new (columns);OPTIMIZE TABLE employees;  -- 重建表，整理碎片-- 在线索引操作（MySQL 5.6+）ALTER TABLE employees ADD INDEX idx_new_column (new_column),ALGORITHM=INPLACE, LOCK=NONE;-- 索引统计信息更新ANALYZE TABLE employees;  -- 更新统计信息-- 监控索引大小-- 注意：INFORMATION_SCHEMA.TABLES 只有表级的 INDEX_LENGTH，没有 INDEX_NAME 列，-- 按索引粒度统计需查询 mysql.innodb_index_statsSELECT     table_name,    index_name,    ROUND(SUM(stat_value * @@innodb_page_size) / 1024 / 1024, 2) as index_size_mbFROM mysql.innodb_index_stats WHERE database_name = &#39;your_database&#39;  AND stat_name = &#39;size&#39;GROUP BY table_name, index_nameORDER BY index_size_mb DESC;</code></pre><h3 id="%E5%85%A8%E6%96%87%E7%B4%A2%E5%BC%95%E4%B8%8E%E7%A9%BA%E9%97%B4%E7%B4%A2%E5%BC%95%E5%BA%94%E7%94%A8" tabindex="-1">全文索引与空间索引应用</h3><p><strong>全文索引：</strong></p><pre><code class="language-sql">-- 创建全文索引ALTER TABLE articles ADD FULLTEXT idx_content_ft (title, content);-- 全文索引查询SELECT     article_id,    title,    MATCH(title, content) AGAINST(&#39;数据库 优化&#39;) as relevance_scoreFROM articlesWHERE MATCH(title, content) AGAINST(&#39;+数据库 +优化&#39; IN BOOLEAN MODE)ORDER BY relevance_score DESC;-- 全文索引配置-- 查看最小词长SHOW VARIABLES LIKE &#39;innodb_ft_min_token_size&#39;;SHOW VARIABLES LIKE &#39;innodb_ft_max_token_size&#39;;-- 停用词配置SHOW VARIABLES LIKE &#39;innodb_ft_enable_stopword&#39;;SHOW VARIABLES LIKE &#39;innodb_ft_server_stopword_table&#39;;-- 重建全文索引ALTER TABLE articles DROP INDEX idx_content_ft;ALTER TABLE articles ADD FULLTEXT idx_content_ft (title, content);</code></pre><p><strong>空间索引：</strong></p><pre><code class="language-sql">-- 创建空间数据表CREATE TABLE locations (    location_id INT PRIMARY KEY AUTO_INCREMENT,    location_name VARCHAR(100),    coordinates POINT NOT NULL,    area POLYGON,    SPATIAL INDEX idx_coordinates (coordinates),    SPATIAL 
INDEX idx_area (area));-- 空间索引查询SELECT     location_name,    ST_AsText(coordinates) as coordsFROM locationsWHERE ST_Contains(    ST_GeomFromText(&#39;POLYGON((116.3 39.8, 116.5 39.8, 116.5 40.0, 116.3 40.0, 116.3 39.8))&#39;),    coordinates);-- 距离查询优化SELECT     l1.location_name as place1,    l2.location_name as place2,    ST_Distance_Sphere(l1.coordinates, l2.coordinates) as distanceFROM locations l1JOIN locations l2 ON l1.location_id &lt; l2.location_idWHERE ST_Distance_Sphere(l1.coordinates, l2.coordinates) &lt; 5000;  -- 5公里内</code></pre><h2 id="3.-%E6%9C%8D%E5%8A%A1%E5%99%A8%E5%8F%82%E6%95%B0%E8%B0%83%E4%BC%98" tabindex="-1">3. 服务器参数调优</h2><h3 id="%E5%86%85%E5%AD%98%E5%8F%82%E6%95%B0%E4%BC%98%E5%8C%96" tabindex="-1">内存参数优化</h3><p><strong>InnoDB缓冲池优化：</strong></p><pre><code class="language-sql">-- 查看当前缓冲池配置SHOW VARIABLES LIKE &#39;innodb_buffer_pool_size&#39;;SHOW VARIABLES LIKE &#39;innodb_buffer_pool_instances&#39;;SHOW VARIABLES LIKE &#39;innodb_buffer_pool_chunk_size&#39;;-- 计算合适的缓冲池大小SELECT     @@innodb_buffer_pool_size / 1024 / 1024 / 1024 as current_buffer_pool_gb,    @@innodb_buffer_pool_instances as buffer_pool_instances;-- 缓冲池使用监控SHOW ENGINE INNODB STATUS\G-- 查看 BUFFER POOL AND MEMORY 部分-- 在线调整缓冲池（MySQL 5.7+）SET GLOBAL innodb_buffer_pool_size = 8589934592;  -- 8GB-- 监控缓冲池命中率SELECT     (1 - (variable_value / (        SELECT variable_value         FROM information_schema.global_status         WHERE variable_name = &#39;innodb_buffer_pool_read_requests&#39;    ))) * 100 as buffer_pool_hit_rateFROM information_schema.global_status WHERE variable_name = &#39;innodb_buffer_pool_reads&#39;;</code></pre><p><strong>内存参数配置建议：</strong></p><pre><code class="language-ini"># my.cnf 内存配置示例[mysqld]# 缓冲池配置（建议为系统内存的70-80%）innodb_buffer_pool_size = 16Ginnodb_buffer_pool_instances = 8innodb_buffer_pool_chunk_size = 128M# 日志缓冲区innodb_log_buffer_size = 256M# 排序缓冲区sort_buffer_size = 2Mread_buffer_size = 2Mread_rnd_buffer_size = 2Mjoin_buffer_size = 2M# 临时表tmp_table_size = 
64Mmax_heap_table_size = 64M# 连接内存thread_cache_size = 100table_open_cache = 4000table_definition_cache = 2000</code></pre><h3 id="i%2Fo%E5%8F%82%E6%95%B0%E8%B0%83%E4%BC%98" tabindex="-1">I/O参数调优</h3><p><strong>InnoDB I/O优化：</strong></p><pre><code class="language-sql">-- 查看I/O相关配置SHOW VARIABLES LIKE &#39;innodb_io_capacity%&#39;;SHOW VARIABLES LIKE &#39;innodb_flush_log_at_trx_commit&#39;;SHOW VARIABLES LIKE &#39;innodb_flush_method&#39;;SHOW VARIABLES LIKE &#39;innodb_read_io_threads&#39;;SHOW VARIABLES LIKE &#39;innodb_write_io_threads&#39;;-- I/O性能监控SHOW STATUS LIKE &#39;innodb%io%&#39;;</code></pre><p><strong>I/O参数配置建议：</strong></p><pre><code class="language-ini"># my.cnf I/O配置示例[mysqld]# I/O容量（根据存储设备性能调整）# SSD: 2000-10000, HDD: 200-500innodb_io_capacity = 2000innodb_io_capacity_max = 4000# 日志刷盘策略# 1: 最高安全性（每次提交刷盘）# 2: 折中（每秒刷盘）# 0: 最高性能（每秒刷盘，崩溃可能丢失1秒数据）innodb_flush_log_at_trx_commit = 1# 刷盘方法（Linux推荐O_DIRECT）innodb_flush_method = O_DIRECT# I/O线程数innodb_read_io_threads = 8innodb_write_io_threads = 8# 预读设置innodb_random_read_ahead = ON# 双写缓冲（SSD可考虑关闭）innodb_doublewrite = ON</code></pre><h3 id="%E8%BF%9E%E6%8E%A5%E6%95%B0%E4%B8%8E%E4%BC%9A%E8%AF%9D%E7%AE%A1%E7%90%86" tabindex="-1">连接数与会话管理</h3><p><strong>连接配置优化：</strong></p><pre><code class="language-sql">-- 查看连接相关配置SHOW VARIABLES LIKE &#39;max_connections&#39;;SHOW VARIABLES LIKE &#39;max_user_connections&#39;;SHOW VARIABLES LIKE &#39;thread_cache_size&#39;;SHOW VARIABLES LIKE &#39;wait_timeout&#39;;SHOW VARIABLES LIKE &#39;interactive_timeout&#39;;-- 监控连接状态SHOW STATUS LIKE &#39;Threads_%&#39;;SHOW PROCESSLIST;-- 连接使用分析SELECT     USER,    HOST,    DB,    COMMAND,    TIME,    STATE,    LEFT(INFO, 100) as query_sampleFROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != &#39;Sleep&#39;ORDER BY TIME DESC;</code></pre><p><strong>连接优化配置：</strong></p><pre><code class="language-ini"># my.cnf 连接配置示例[mysqld]# 最大连接数max_connections = 1000# 用户最大连接数max_user_connections = 500# 线程缓存thread_cache_size = 100# 超时设置wait_timeout = 
600interactive_timeout = 600# 连接限制max_connect_errors = 100000# 反向解析（建议关闭提升性能）skip_name_resolve = 1</code></pre><h3 id="%E5%A4%8D%E5%88%B6%E5%8F%82%E6%95%B0%E9%85%8D%E7%BD%AE%E4%BC%98%E5%8C%96" tabindex="-1">复制参数配置优化</h3><p><strong>主从复制优化：</strong></p><pre><code class="language-sql">-- 查看复制配置SHOW VARIABLES LIKE &#39;binlog_format&#39;;SHOW VARIABLES LIKE &#39;sync_binlog&#39;;SHOW VARIABLES LIKE &#39;innodb_flush_log_at_trx_commit&#39;;-- 监控复制状态SHOW SLAVE STATUS\G-- 复制性能参数SHOW VARIABLES LIKE &#39;slave_parallel_workers&#39;;SHOW VARIABLES LIKE &#39;slave_parallel_type&#39;;</code></pre><p><strong>复制优化配置：</strong></p><pre><code class="language-ini"># my.cnf 复制配置示例[mysqld]# 二进制日志格式binlog_format = ROW# 二进制日志同步sync_binlog = 1# 从库并行复制slave_parallel_workers = 8slave_parallel_type = LOGICAL_CLOCK# 复制延迟控制slave_preserve_commit_order = 1# 二进制日志保留expire_logs_days = 7binlog_expire_logs_seconds = 604800</code></pre><h3 id="%E7%9B%91%E6%8E%A7%E6%8C%87%E6%A0%87%E4%B8%8E%E8%B0%83%E4%BC%98%E4%BE%9D%E6%8D%AE" tabindex="-1">监控指标与调优依据</h3><p><strong>关键性能指标监控：</strong></p><pre><code class="language-sql">-- 创建性能监控表CREATE TABLE performance_metrics (    metric_id INT AUTO_INCREMENT PRIMARY KEY,    metric_name VARCHAR(100) NOT NULL,    metric_value DECIMAL(20,4),    collected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    notes TEXT);-- 收集关键指标INSERT INTO performance_metrics (metric_name, metric_value)SELECT &#39;qps&#39;, VARIABLE_VALUEFROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = &#39;Queries&#39;;INSERT INTO performance_metrics (metric_name, metric_value)SELECT &#39;tps&#39;, VARIABLE_VALUEFROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = &#39;Com_commit&#39;;-- 缓冲池命中率INSERT INTO performance_metrics (metric_name, metric_value)SELECT &#39;buffer_pool_hit_rate&#39;,     (1 - (         SELECT VARIABLE_VALUE         FROM information_schema.GLOBAL_STATUS         WHERE VARIABLE_NAME = &#39;Innodb_buffer_pool_reads&#39;    ) / (        SELECT VARIABLE_VALUE         FROM 
information_schema.GLOBAL_STATUS         WHERE VARIABLE_NAME = &#39;Innodb_buffer_pool_read_requests&#39;    )) * 100;-- 连接使用率INSERT INTO performance_metrics (metric_name, metric_value)SELECT &#39;connection_usage_rate&#39;,    (         SELECT VARIABLE_VALUE         FROM information_schema.GLOBAL_STATUS         WHERE VARIABLE_NAME = &#39;Threads_connected&#39;    ) / (        SELECT VARIABLE_VALUE         FROM information_schema.GLOBAL_VARIABLES         WHERE VARIABLE_NAME = &#39;max_connections&#39;    ) * 100;</code></pre><p><strong>性能调优检查清单：</strong></p><pre><code class="language-sql">-- 系统健康检查查询SELECT     &#39;连接数&#39; as metric,    CONCAT(        (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME = &#39;Threads_connected&#39;),        &#39;/&#39;,        (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_VARIABLES WHERE VARIABLE_NAME = &#39;max_connections&#39;)    ) as valueUNION ALLSELECT     &#39;缓冲池命中率&#39;,    CONCAT(        ROUND((1 - (             SELECT VARIABLE_VALUE             FROM information_schema.GLOBAL_STATUS             WHERE VARIABLE_NAME = &#39;Innodb_buffer_pool_reads&#39;        ) / (            SELECT VARIABLE_VALUE             FROM information_schema.GLOBAL_STATUS             WHERE VARIABLE_NAME = &#39;Innodb_buffer_pool_read_requests&#39;        )) * 100, 2),        &#39;%&#39;    )UNION ALLSELECT     &#39;临时表磁盘使用率&#39;,    CONCAT(        ROUND((            SELECT VARIABLE_VALUE             FROM information_schema.GLOBAL_STATUS             WHERE VARIABLE_NAME = &#39;Created_tmp_disk_tables&#39;        ) / NULLIF((            SELECT VARIABLE_VALUE             FROM information_schema.GLOBAL_STATUS             WHERE VARIABLE_NAME = &#39;Created_tmp_tables&#39;        ), 0) * 100, 2),        &#39;%&#39;    )UNION ALLSELECT     &#39;慢查询比例&#39;,    CONCAT(        ROUND((            SELECT VARIABLE_VALUE             FROM information_schema.GLOBAL_STATUS             WHERE VARIABLE_NAME = &#39;Slow_queries&#39;    
    ) / NULLIF((            SELECT VARIABLE_VALUE             FROM information_schema.GLOBAL_STATUS             WHERE VARIABLE_NAME = &#39;Questions&#39;        ), 0) * 100, 4),        &#39;%&#39;    );</code></pre><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>通过本篇的深入学习，我们掌握了MySQL性能优化的核心技能：</p><ol><li><strong>查询优化</strong>：深入理解执行计划，掌握慢查询分析和查询重写技巧</li><li><strong>索引优化</strong>：合理设计索引，避免索引失效，掌握索引维护策略</li><li><strong>服务器调优</strong>：优化内存、I/O、连接等关键参数配置</li><li><strong>监控体系</strong>：建立完善的性能监控和报警机制</li></ol><p><strong>关键优化原则：</strong></p><ul><li>测量优先：基于实际监控数据进行优化</li><li>渐进优化：每次只调整一个参数，观察效果</li><li>平衡考虑：在性能、安全、可靠性之间找到平衡点</li><li>预防为主：建立常规维护和监控机制</li></ul><p><strong>性能优化层次：</strong></p><ol><li><strong>SQL和索引优化</strong>：效果最明显，成本最低</li><li><strong>数据库配置优化</strong>：需要深入了解MySQL内部机制</li><li><strong>架构优化</strong>：读写分离、分库分表等</li><li><strong>硬件优化</strong>：SSD、更多内存、更好CPU</li></ol><p><strong>动手练习：</strong></p><ol><li>分析你当前系统的慢查询，并实施优化</li><li>检查索引使用情况，删除无用索引，添加必要索引</li><li>根据服务器配置调整MySQL参数</li><li>建立性能监控体系，定期收集关键指标</li><li>实施定期的数据库维护操作</li></ol><p>欢迎在评论区分享你的性能优化经验和遇到的问题！</p>]]>
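补充示例：上文“高效分页方案”中推荐的“方案1：游标分页”，可以用下面的最小可运行示例直观验证。这里用 Python 标准库 sqlite3 模拟，fetch_page 函数与只含主键的 employees 表都是假设的演示代码，实际在 MySQL 中 SQL 写法相同：

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (emp_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO employees VALUES (?)", [(i,) for i in range(1, 101)])

def fetch_page(conn, last_id, page_size=20):
    # 不用 OFFSET, 而是把上一页最后一行的 emp_id 作为游标, 沿主键索引直接定位
    rows = conn.execute(
        "SELECT emp_id FROM employees WHERE emp_id > ? ORDER BY emp_id LIMIT ?",
        (last_id, page_size),
    ).fetchall()
    return [r[0] for r in rows]

page1 = fetch_page(conn, 0)          # 第一页: emp_id 1 到 20
page2 = fetch_page(conn, page1[-1])  # 第二页: 以 20 为游标, 得到 21 到 40
print(page1[-1], page2[0], page2[-1])
# 输出: 20 21 40
```

无论翻到第几页，每次查询都只沿主键扫描 page_size 行，这正是游标分页优于大偏移量 LIMIT 的原因；代价是只能逐页前进，不能直接跳转到任意页码。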
                    </description>
                    <pubDate>Sun, 11 May 2025 10:10:32 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[MySql入门：MySQL数据类型与表设计]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2949</link>
                    <description>
                            <![CDATA[<h1 id="mysql%E6%95%B0%E6%8D%AE%E7%B1%BB%E5%9E%8B%E4%B8%8E%E8%A1%A8%E8%AE%BE%E8%AE%A1" tabindex="-1">MySQL数据类型与表设计</h1><blockquote><p>在数据库设计中，选择合适的数据类型和设计良好的表结构是构建高性能应用的基石。今天，我们将深入探讨MySQL的数据类型选择策略、表设计原则以及索引优化技巧，帮助你构建既高效又可维护的数据库结构。</p></blockquote><h2 id="1.-%E6%95%B0%E6%8D%AE%E7%B1%BB%E5%9E%8B%E6%B7%B1%E5%BA%A6%E8%A7%A3%E6%9E%90" tabindex="-1">1. 数据类型深度解析</h2><h3 id="%E6%95%B0%E5%80%BC%E7%B1%BB%E5%9E%8B%EF%BC%9A%E6%95%B4%E5%9E%8B%E3%80%81%E6%B5%AE%E7%82%B9%E5%9E%8B%E3%80%81%E5%AE%9A%E7%82%B9%E6%95%B0%E7%9A%84%E9%80%89%E6%8B%A9%E7%AD%96%E7%95%A5" tabindex="-1">数值类型：整型、浮点型、定点数的选择策略</h3><p><strong>整型数据类型对比：</strong></p><table><thead><tr><th>类型</th><th>存储空间</th><th>有符号范围</th><th>无符号范围</th><th>适用场景</th></tr></thead><tbody><tr><td>TINYINT</td><td>1字节</td><td>-128 到 127</td><td>0 到 255</td><td>状态标志、年龄、小范围计数</td></tr><tr><td>SMALLINT</td><td>2字节</td><td>-32,768 到 32,767</td><td>0 到 65,535</td><td>端口号、中等范围计数</td></tr><tr><td>MEDIUMINT</td><td>3字节</td><td>-8,388,608 到 8,388,607</td><td>0 到 16,777,215</td><td>用户ID、文章ID</td></tr><tr><td>INT</td><td>4字节</td><td>-2,147,483,648 到 2,147,483,647</td><td>0 到 4,294,967,295</td><td>订单ID、大范围计数</td></tr><tr><td>BIGINT</td><td>8字节</td><td>-2^63 到 2^63-1</td><td>0 到 2^64-1</td><td>分布式ID、极大范围计数</td></tr></tbody></table><p><strong>数值类型选择实战：</strong></p><pre><code class="language-sql">-- 用户表 - 合理的数值类型选择CREATE TABLE users (    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY COMMENT &#39;用户ID&#39;,    age TINYINT UNSIGNED COMMENT &#39;年龄&#39;,    status TINYINT DEFAULT 1 COMMENT &#39;状态:1正常,0禁用&#39;,    login_count INT UNSIGNED DEFAULT 0 COMMENT &#39;登录次数&#39;,    balance DECIMAL(10,2) UNSIGNED DEFAULT 0.00 COMMENT &#39;账户余额&#39;) COMMENT=&#39;用户表&#39;;-- 订单表 - 货币相关使用DECIMALCREATE TABLE orders (    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,    user_id BIGINT UNSIGNED NOT NULL,    total_amount DECIMAL(12,2) NOT NULL COMMENT &#39;订单总金额&#39;,    tax_amount DECIMAL(10,2) NOT NULL COMMENT &#39;税费&#39;,    
discount_amount DECIMAL(8,2) DEFAULT 0.00 COMMENT &#39;折扣金额&#39;,    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP) COMMENT=&#39;订单表&#39;;</code></pre><p><strong>浮点数与定点数选择：</strong></p><pre><code class="language-sql">-- 科学计算 - 使用浮点数CREATE TABLE sensor_data (    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,    temperature FLOAT COMMENT &#39;温度，允许精度损失&#39;,    pressure DOUBLE COMMENT &#39;压力，更高精度&#39;,    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);-- 金融计算 - 必须使用DECIMALCREATE TABLE financial_records (    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,    transaction_amount DECIMAL(15,4) NOT NULL, -- 精确计算    exchange_rate DECIMAL(10,6) NOT NULL,      -- 汇率需要高精度    calculated_amount DECIMAL(15,4) AS (transaction_amount * exchange_rate));</code></pre><h3 id="%E5%AD%97%E7%AC%A6%E4%B8%B2%E7%B1%BB%E5%9E%8B%EF%BC%9Achar%E3%80%81varchar%E3%80%81text%E7%9A%84%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF" tabindex="-1">字符串类型：CHAR、VARCHAR、TEXT的应用场景</h3><p><strong>字符串类型对比分析：</strong></p><table><thead><tr><th>类型</th><th>最大长度</th><th>存储特点</th><th>适用场景</th><th>性能影响</th></tr></thead><tbody><tr><td>CHAR(N)</td><td>255字符</td><td>定长，不足补空格</td><td>固定长度数据(MD5、UUID)</td><td>读取快，可能浪费空间</td></tr><tr><td>VARCHAR(N)</td><td>65,535字节</td><td>变长，额外1-2字节存储长度</td><td>用户名、邮箱、地址</td><td>空间效率高，读取稍慢</td></tr><tr><td>TINYTEXT</td><td>255字节</td><td>变长，不支持默认值</td><td>短文本描述</td><td>类似VARCHAR</td></tr><tr><td>TEXT</td><td>65,535字节</td><td>变长，存储较大文本</td><td>文章内容、评论</td><td>可能产生临时表</td></tr><tr><td>MEDIUMTEXT</td><td>16MB</td><td>变长，大文本存储</td><td>大型文档、日志</td><td>影响查询性能</td></tr><tr><td>LONGTEXT</td><td>4GB</td><td>变长，极大文本存储</td><td>二进制数据、历史记录</td><td>谨慎使用</td></tr></tbody></table><p><strong>字符串类型实战应用：</strong></p><pre><code class="language-sql">-- 用户表 - 合理的字符串类型选择CREATE TABLE user_profiles (    user_id BIGINT UNSIGNED PRIMARY KEY,    username VARCHAR(50) NOT NULL UNIQUE COMMENT &#39;用户名&#39;,    email VARCHAR(255) NOT NULL UNIQUE COMMENT &#39;邮箱&#39;,    phone CHAR(11) COMMENT &#39;手机号，固定11位&#39;,    id_card CHAR(18) 
COMMENT &#39;身份证号，固定18位&#39;,    avatar_url VARCHAR(500) COMMENT &#39;头像URL&#39;,    bio TEXT COMMENT &#39;个人简介，可变长文本&#39;,        -- 索引优化    INDEX idx_username (username),    INDEX idx_email (email),    INDEX idx_phone (phone)) COMMENT=&#39;用户档案表&#39;;-- 文章表 - 大文本处理CREATE TABLE articles (    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,    title VARCHAR(200) NOT NULL COMMENT &#39;文章标题&#39;,    summary VARCHAR(500) COMMENT &#39;文章摘要&#39;,    content LONGTEXT NOT NULL COMMENT &#39;文章内容&#39;,    tags VARCHAR(300) COMMENT &#39;标签，逗号分隔&#39;,    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP COMMENT &#39;创建时间&#39;,        -- 全文索引    FULLTEXT idx_content (title, summary, content),    INDEX idx_created (created_at)) COMMENT=&#39;文章表&#39;;</code></pre><h3 id="%E6%97%A5%E6%9C%9F%E6%97%B6%E9%97%B4%E7%B1%BB%E5%9E%8B%EF%BC%9Adatetime%E3%80%81timestamp%E3%80%81date%E7%9A%84%E5%B7%AE%E5%BC%82" tabindex="-1">日期时间类型：DATETIME、TIMESTAMP、DATE的差异</h3><p><strong>日期时间类型深度对比：</strong></p><table><thead><tr><th>类型</th><th>存储空间</th><th>范围</th><th>时区处理</th><th>自动更新</th><th>适用场景</th></tr></thead><tbody><tr><td>DATE</td><td>3字节</td><td>1000-01-01 到 9999-12-31</td><td>无</td><td>不支持</td><td>生日、日期</td></tr><tr><td>TIME</td><td>3字节</td><td>-838:59:59 到 838:59:59</td><td>无</td><td>不支持</td><td>持续时间</td></tr><tr><td>DATETIME</td><td>5字节（MySQL 5.6.4+，不含小数秒）</td><td>1000-01-01 00:00:00 到 9999-12-31 23:59:59</td><td>无</td><td>不支持</td><td>创建时间、日志时间</td></tr><tr><td>TIMESTAMP</td><td>4字节</td><td>1970-01-01 00:00:01 到 2038-01-19 03:14:07 UTC</td><td>自动转换</td><td>支持</td><td>更新时间、系统时间</td></tr><tr><td>YEAR</td><td>1字节</td><td>1901 到 2155</td><td>无</td><td>不支持</td><td>年份</td></tr></tbody></table><p><strong>日期时间类型实战：</strong></p><pre><code class="language-sql">-- 时间字段设计最佳实践CREATE TABLE time_demo (    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,        -- 创建时间 - 使用DATETIME，不涉及时区转换    created_at DATETIME DEFAULT CURRENT_TIMESTAMP COMMENT &#39;记录创建时间&#39;,        -- 更新时间 - 使用TIMESTAMP，自动更新    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP                  ON UPDATE CURRENT_TIMESTAMP COMMENT 
&#39;最后更新时间&#39;,        -- 业务时间 - 使用DATE    birth_date DATE COMMENT &#39;出生日期&#39;,    event_date DATE COMMENT &#39;事件日期&#39;,        -- 时间范围 - 使用TIME    start_time TIME COMMENT &#39;开始时间&#39;,    end_time TIME COMMENT &#39;结束时间&#39;,        -- 索引优化    INDEX idx_created (created_at),    INDEX idx_updated (updated_at),    INDEX idx_event_date (event_date));-- 时间查询优化示例SELECT * FROM time_demo WHERE created_at &gt;= &#39;2023-01-01 00:00:00&#39;   AND created_at &lt; &#39;2023-02-01 00:00:00&#39;;  SELECT * FROM time_demo WHERE event_date BETWEEN &#39;2023-01-01&#39; AND &#39;2023-01-31&#39;;-- 时间函数使用SELECT     id,    created_at,    DATE(created_at) as create_date,    HOUR(created_at) as create_hour,    DATE_FORMAT(created_at, &#39;%Y-%m-%d %H:%i:%s&#39;) as formatted_timeFROM time_demo;</code></pre><h3 id="json%E6%95%B0%E6%8D%AE%E7%B1%BB%E5%9E%8B%EF%BC%9A%E7%8E%B0%E4%BB%A3%E5%BA%94%E7%94%A8%E7%9A%84%E6%95%B0%E6%8D%AE%E5%AD%98%E5%82%A8%E6%96%B9%E6%A1%88" tabindex="-1">JSON数据类型：现代应用的数据存储方案</h3><p><strong>JSON类型优势与应用场景：</strong></p><pre><code class="language-sql">-- 动态schema数据存储CREATE TABLE product_catalog (    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,    sku VARCHAR(50) NOT NULL UNIQUE,    basic_info JSON NOT NULL COMMENT &#39;基础信息&#39;,    specifications JSON COMMENT &#39;规格参数&#39;,    metadata JSON COMMENT &#39;元数据&#39;,        -- 生成列 + 索引    product_name VARCHAR(200)         GENERATED ALWAYS AS (basic_info-&gt;&gt;&#39;$.name&#39;) VIRTUAL,    price DECIMAL(10,2)        GENERATED ALWAYS AS (JSON_UNQUOTE(basic_info-&gt;&#39;$.price&#39;)) VIRTUAL,        -- 索引    INDEX idx_sku (sku),    INDEX idx_product_name (product_name),    INDEX idx_price (price)) COMMENT=&#39;商品目录表&#39;;-- JSON数据插入示例INSERT INTO product_catalog (sku, basic_info, specifications) VALUES (    &#39;IPHONE14-128-BLACK&#39;,    &#39;{        &quot;name&quot;: &quot;iPhone 14&quot;,        &quot;brand&quot;: &quot;Apple&quot;,        &quot;price&quot;: 5999.00,        &quot;color&quot;: 
&quot;黑色&quot;,        &quot;weight&quot;: 172    }&#39;,    &#39;{        &quot;screen&quot;: {&quot;size&quot;: 6.1, &quot;type&quot;: &quot;OLED&quot;},        &quot;storage&quot;: 128,        &quot;camera&quot;: {&quot;main&quot;: &quot;48MP&quot;, &quot;front&quot;: &quot;12MP&quot;},        &quot;battery&quot;: 3279    }&#39;);-- JSON查询操作SELECT     sku,    basic_info-&gt;&gt;&#39;$.name&#39; as product_name,    JSON_UNQUOTE(basic_info-&gt;&#39;$.brand&#39;) as brand,    specifications-&gt;&#39;$.screen.size&#39; as screen_size,        -- JSON路径查询    JSON_EXTRACT(basic_info, &#39;$.price&#39;) as price,        -- JSON包含检查    JSON_CONTAINS_PATH(basic_info, &#39;one&#39;, &#39;$.color&#39;) as has_color,        -- JSON数组操作    JSON_LENGTH(COALESCE(metadata-&gt;&#39;$.tags&#39;, &#39;[]&#39;)) as tag_count    FROM product_catalogWHERE basic_info-&gt;&gt;&#39;$.brand&#39; = &#39;Apple&#39;  AND specifications-&gt;&#39;$.screen.size&#39; &gt; 6.0;-- JSON更新操作UPDATE product_catalog SET basic_info = JSON_SET(    basic_info,    &#39;$.price&#39;, 5799.00,    &#39;$.discount&#39;, true)WHERE sku = &#39;IPHONE14-128-BLACK&#39;;-- JSON索引优化（MySQL 8.0+）CREATE INDEX idx_brand ON product_catalog((basic_info-&gt;&gt;&#39;$.brand&#39;));-- 屏幕尺寸含小数，需用DECIMAL而非UNSIGNED，否则6.1被截断为6CREATE INDEX idx_screen_size ON product_catalog(    (CAST(specifications-&gt;&#39;$.screen.size&#39; AS DECIMAL(3,1))));</code></pre><h3 id="%E7%A9%BA%E9%97%B4%E6%95%B0%E6%8D%AE%E7%B1%BB%E5%9E%8B%EF%BC%9Agis%E5%BA%94%E7%94%A8%E6%94%AF%E6%8C%81" tabindex="-1">空间数据类型：GIS应用支持</h3><p><strong>空间数据类型应用：</strong></p><pre><code class="language-sql">-- 地理位置数据存储CREATE TABLE locations (    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,    name VARCHAR(100) NOT NULL,        -- 空间数据类型    point_coord POINT NOT NULL COMMENT &#39;点坐标&#39;,    area_boundary POLYGON NOT NULL COMMENT &#39;区域边界&#39;,    route_path LINESTRING COMMENT &#39;路线路径&#39;,        -- 空间索引（空间索引列必须为NOT NULL）    SPATIAL INDEX idx_point (point_coord),    SPATIAL INDEX idx_area (area_boundary),        created_at TIMESTAMP 
DEFAULT CURRENT_TIMESTAMP) COMMENT=&#39;地理位置表&#39;;-- 空间数据插入INSERT INTO locations (name, point_coord, area_boundary) VALUES (    &#39;公司总部&#39;,    ST_GeomFromText(&#39;POINT(116.3974 39.9093)&#39;),    ST_GeomFromText(&#39;POLYGON((116.396 39.908, 116.398 39.908, 116.398 39.910, 116.396 39.910, 116.396 39.908))&#39;));-- 空间查询SELECT     name,    ST_AsText(point_coord) as coordinates,    ST_Distance_Sphere(        point_coord,         ST_GeomFromText(&#39;POINT(116.4074 39.9042)&#39;)    ) as distance_metersFROM locationsWHERE ST_Contains(    area_boundary,     ST_GeomFromText(&#39;POINT(116.3974 39.9093)&#39;));-- 附近查询优化SELECT     name,    ST_Distance_Sphere(point_coord, @user_point) as distanceFROM locationsWHERE ST_Distance_Sphere(point_coord, @user_point) &lt; 5000  -- 5公里内ORDER BY distance ASCLIMIT 10;</code></pre><h2 id="2.-%E8%A1%A8%E8%AE%BE%E8%AE%A1%E4%B8%8E%E8%A7%84%E8%8C%83%E5%8C%96" tabindex="-1">2. 表设计与规范化</h2><h3 id="%E6%95%B0%E6%8D%AE%E5%BA%93%E8%AE%BE%E8%AE%A1%E4%B8%89%E5%A4%A7%E8%8C%83%E5%BC%8F%E5%AE%9E%E6%88%98" tabindex="-1">数据库设计三大范式实战</h3><p><strong>第一范式（1NF） - 原子性：</strong></p><pre><code class="language-sql">-- 违反1NF的设计CREATE TABLE bad_design (    user_id INT PRIMARY KEY,    user_name VARCHAR(100),    phone_numbers VARCHAR(500) -- 存储多个电话号码，用逗号分隔);-- 符合1NF的设计CREATE TABLE users (    user_id INT PRIMARY KEY,    user_name VARCHAR(100) NOT NULL);CREATE TABLE user_phones (    id INT AUTO_INCREMENT PRIMARY KEY,    user_id INT NOT NULL,    phone_type ENUM(&#39;mobile&#39;, &#39;home&#39;, &#39;work&#39;) NOT NULL,    phone_number VARCHAR(20) NOT NULL,    is_primary BOOLEAN DEFAULT FALSE,        FOREIGN KEY (user_id) REFERENCES users(user_id),    UNIQUE KEY unique_user_phone (user_id, phone_number));</code></pre><p><strong>第二范式（2NF） - 完全依赖：</strong></p><pre><code class="language-sql">-- 违反2NF的设计（订单明细包含产品信息）CREATE TABLE order_details_bad (    order_id INT,    product_id INT,    product_name VARCHAR(100),  -- 依赖于product_id，而不是完全依赖于主键    quantity INT,    
unit_price DECIMAL(10,2),    PRIMARY KEY (order_id, product_id));-- 符合2NF的设计CREATE TABLE orders (    order_id INT PRIMARY KEY,    order_date DATETIME,    customer_id INT,    total_amount DECIMAL(12,2));CREATE TABLE products (    product_id INT PRIMARY KEY,    product_name VARCHAR(100) NOT NULL,    category_id INT,    unit_price DECIMAL(10,2));CREATE TABLE order_items (    order_id INT,    product_id INT,    quantity INT NOT NULL,    unit_price DECIMAL(10,2) NOT NULL, -- 下单时的价格    PRIMARY KEY (order_id, product_id),    FOREIGN KEY (order_id) REFERENCES orders(order_id),    FOREIGN KEY (product_id) REFERENCES products(product_id));</code></pre><p><strong>第三范式（3NF） - 无传递依赖：</strong></p><pre><code class="language-sql">-- 违反3NF的设计CREATE TABLE employees_bad (    emp_id INT PRIMARY KEY,    emp_name VARCHAR(100),    dept_id INT,    dept_name VARCHAR(100),  -- 传递依赖于emp_id，通过dept_id    manager_name VARCHAR(100));-- 符合3NF的设计CREATE TABLE departments (    dept_id INT PRIMARY KEY,    dept_name VARCHAR(100) NOT NULL,    manager_id INT);CREATE TABLE employees (    emp_id INT PRIMARY KEY,    emp_name VARCHAR(100) NOT NULL,    dept_id INT,    FOREIGN KEY (dept_id) REFERENCES departments(dept_id));</code></pre><h3 id="%E5%8F%8D%E8%8C%83%E5%BC%8F%E8%AE%BE%E8%AE%A1%E7%9A%84%E9%80%82%E7%94%A8%E5%9C%BA%E6%99%AF" tabindex="-1">反范式设计的适用场景</h3><p><strong>读写分离场景的反范式优化：</strong></p><pre><code class="language-sql">-- 报表查询优化 - 反范式设计CREATE TABLE user_statistics (    user_id INT PRIMARY KEY,    user_name VARCHAR(100),    total_orders INT DEFAULT 0,    total_amount DECIMAL(12,2) DEFAULT 0,    last_order_date DATETIME,    favorite_category VARCHAR(50),        -- 定期更新的统计字段    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,        INDEX idx_total_amount (total_amount),    INDEX idx_last_order (last_order_date)) COMMENT=&#39;用户统计表（反范式设计）&#39;;-- 订单列表查询优化CREATE TABLE order_summary (    order_id INT PRIMARY KEY,    order_number VARCHAR(50),    customer_id INT,    
customer_name VARCHAR(100),  -- 反范式：冗余存储    total_amount DECIMAL(12,2),    status ENUM(&#39;pending&#39;, &#39;paid&#39;, &#39;shipped&#39;, &#39;completed&#39;),    created_at DATETIME,        -- 复合索引支持多种查询    INDEX idx_customer_status (customer_id, status),    INDEX idx_created_status (created_at, status),    INDEX idx_customer_created (customer_id, created_at)) COMMENT=&#39;订单汇总表（反范式设计）&#39;;</code></pre><p><strong>计数器场景的优化：</strong></p><pre><code class="language-sql">-- 高频更新计数器的优化设计CREATE TABLE post_counters (    post_id INT PRIMARY KEY,    view_count INT DEFAULT 0,    like_count INT DEFAULT 0,    comment_count INT DEFAULT 0,    share_count INT DEFAULT 0,        -- 定期同步到主表，减少主表更新压力    last_sync_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);-- 计数更新（高频操作）UPDATE post_counters SET view_count = view_count + 1 WHERE post_id = 1234;-- 定期同步到文章主表UPDATE posts pJOIN post_counters pc ON p.id = pc.post_idSET p.view_count = pc.view_count,    p.like_count = pc.like_countWHERE pc.last_sync_at &lt; NOW() - INTERVAL 1 HOUR;</code></pre><h3 id="%E8%A1%A8%E5%85%B3%E7%B3%BB%E8%AE%BE%E8%AE%A1%EF%BC%9A%E4%B8%80%E5%AF%B9%E4%B8%80%E3%80%81%E4%B8%80%E5%AF%B9%E5%A4%9A%E3%80%81%E5%A4%9A%E5%AF%B9%E5%A4%9A" tabindex="-1">表关系设计：一对一、一对多、多对多</h3><p><strong>一对一关系设计：</strong></p><pre><code class="language-sql">-- 用户基础信息与详细信息的垂直分表CREATE TABLE users (    user_id INT PRIMARY KEY AUTO_INCREMENT,    username VARCHAR(50) UNIQUE NOT NULL,    email VARCHAR(100) UNIQUE NOT NULL,    password_hash VARCHAR(255) NOT NULL,    status TINYINT DEFAULT 1,    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP) COMMENT=&#39;用户基础表&#39;;CREATE TABLE user_profiles (    user_id INT PRIMARY KEY,    full_name VARCHAR(100),    birth_date DATE,    gender ENUM(&#39;M&#39;, &#39;F&#39;, &#39;O&#39;),    avatar_url VARCHAR(500),    bio TEXT,    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,        FOREIGN KEY (user_id) REFERENCES users(user_id) ON DELETE CASCADE) 
COMMENT=&#39;用户详情表（一对一）&#39;;</code></pre><p><strong>一对多关系设计：</strong></p><pre><code class="language-sql">-- 用户与订单的一对多关系CREATE TABLE customers (    customer_id INT PRIMARY KEY AUTO_INCREMENT,    customer_name VARCHAR(100) NOT NULL,    email VARCHAR(100),    phone VARCHAR(20),    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP) COMMENT=&#39;客户表&#39;;CREATE TABLE orders (    order_id INT PRIMARY KEY AUTO_INCREMENT,    order_number VARCHAR(50) UNIQUE NOT NULL,    customer_id INT NOT NULL,    order_date DATETIME NOT NULL,    total_amount DECIMAL(12,2) NOT NULL,    status ENUM(&#39;pending&#39;, &#39;confirmed&#39;, &#39;shipped&#39;, &#39;delivered&#39;, &#39;cancelled&#39;),        -- 外键约束    FOREIGN KEY (customer_id) REFERENCES customers(customer_id),        -- 查询优化索引    INDEX idx_customer_date (customer_id, order_date),    INDEX idx_status_date (status, order_date)) COMMENT=&#39;订单表（一对多）&#39;;</code></pre><p><strong>多对多关系设计：</strong></p><pre><code class="language-sql">-- 文章与标签的多对多关系CREATE TABLE articles (    article_id INT PRIMARY KEY AUTO_INCREMENT,    title VARCHAR(200) NOT NULL,    content TEXT NOT NULL,    author_id INT,    status ENUM(&#39;draft&#39;, &#39;published&#39;, &#39;archived&#39;) DEFAULT &#39;draft&#39;,    published_at DATETIME,    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,        INDEX idx_author_status (author_id, status),    INDEX idx_published (published_at)) COMMENT=&#39;文章表&#39;;CREATE TABLE tags (    tag_id INT PRIMARY KEY AUTO_INCREMENT,    tag_name VARCHAR(50) UNIQUE NOT NULL,    tag_slug VARCHAR(50) UNIQUE NOT NULL,    description VARCHAR(200),    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP) COMMENT=&#39;标签表&#39;;CREATE TABLE article_tags (    article_id INT NOT NULL,    tag_id INT NOT NULL,    assigned_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,        -- 复合主键    PRIMARY KEY (article_id, tag_id),        -- 外键约束    FOREIGN KEY (article_id) REFERENCES articles(article_id) ON DELETE CASCADE,    FOREIGN KEY (tag_id) REFERENCES 
tags(tag_id) ON DELETE CASCADE,        -- 双向查询优化    INDEX idx_tag_article (tag_id, article_id)) COMMENT=&#39;文章标签关联表（多对多）&#39;;</code></pre><h3 id="%E5%AD%97%E6%AE%B5%E9%80%89%E6%8B%A9%E4%B8%8E%E6%95%B0%E6%8D%AE%E7%B1%BB%E5%9E%8B%E4%BC%98%E5%8C%96" tabindex="-1">字段选择与数据类型优化</h3><p><strong>枚举与集合类型的选择：</strong></p><pre><code class="language-sql">-- 使用ENUM替代字符串CREATE TABLE tasks (    task_id INT PRIMARY KEY AUTO_INCREMENT,    title VARCHAR(200) NOT NULL,    priority ENUM(&#39;low&#39;, &#39;medium&#39;, &#39;high&#39;, &#39;critical&#39;) NOT NULL DEFAULT &#39;medium&#39;,    status ENUM(&#39;pending&#39;, &#39;in_progress&#39;, &#39;completed&#39;, &#39;cancelled&#39;) NOT NULL DEFAULT &#39;pending&#39;,        -- ENUM存储为数字，查询效率高    INDEX idx_priority_status (priority, status));-- 使用SET存储多选项CREATE TABLE user_preferences (    user_id INT PRIMARY KEY,    notification_types SET(&#39;email&#39;, &#39;sms&#39;, &#39;push&#39;, &#39;in_app&#39;) DEFAULT &#39;email&#39;,    language SET(&#39;zh_CN&#39;, &#39;en_US&#39;, &#39;ja_JP&#39;) DEFAULT &#39;zh_CN&#39;,        -- 检查约束（MySQL 8.0.16+）：SET按字符串存储，不能用JSON函数判断    CHECK (language &lt;&gt; &#39;&#39;) -- 至少选择一种语言);-- SET查询技巧SELECT * FROM user_preferences WHERE FIND_IN_SET(&#39;sms&#39;, notification_types) &gt; 0;SELECT * FROM user_preferences WHERE notification_types = &#39;email,sms&#39;; -- 精确匹配</code></pre><p><strong>默认值与约束优化：</strong></p><pre><code class="language-sql">CREATE TABLE products (    product_id INT PRIMARY KEY AUTO_INCREMENT,    sku VARCHAR(50) UNIQUE NOT NULL,    name VARCHAR(200) NOT NULL,        -- 合理的默认值    status TINYINT DEFAULT 1,    stock_quantity INT DEFAULT 0,    min_stock_level INT DEFAULT 10,        -- 检查约束（MySQL 8.0.16+）    price DECIMAL(10,2) CHECK (price &gt;= 0),    weight_kg DECIMAL(8,3) CHECK (weight_kg &gt; 0),        -- 时间戳默认值    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,        -- 计算列（MySQL 5.7+）    need_reorder BOOLEAN GENERATED ALWAYS 
AS (stock_quantity &lt;= min_stock_level),        -- 索引优化    INDEX idx_sku_status (sku, status),    INDEX idx_stock (stock_quantity),    INDEX idx_need_reorder (need_reorder));</code></pre><h2 id="3.-%E7%B4%A2%E5%BC%95%E8%AE%BE%E8%AE%A1%E5%8E%9F%E7%90%86%E4%B8%8E%E4%BC%98%E5%8C%96" tabindex="-1">3. 索引设计原理与优化</h2><h3 id="b%2Btree%E7%B4%A2%E5%BC%95%E5%8E%9F%E7%90%86%E6%B7%B1%E5%BA%A6%E8%A7%A3%E6%9E%90" tabindex="-1">B+Tree索引原理深度解析</h3><p><strong>B+Tree结构特点：</strong></p><pre><code class="language-">B+Tree结构：├── 根节点 (Root Node)├── 内部节点 (Internal Nodes)└── 叶子节点 (Leaf Nodes)    ├── 数据页指针    ├── 键值对（有序）    └── 相邻叶子节点指针</code></pre><p><strong>B+Tree优势分析：</strong></p><ul><li><strong>平衡树结构</strong>：所有叶子节点在同一层，查询稳定</li><li><strong>顺序访问</strong>：叶子节点链表支持范围查询</li><li><strong>高扇出</strong>：减少树高度，提高查询效率</li><li><strong>数据集中</strong>：数据只存储在叶子节点</li></ul><h3 id="%E8%81%9A%E7%B0%87%E7%B4%A2%E5%BC%95%E4%B8%8E%E9%9D%9E%E8%81%9A%E7%B0%87%E7%B4%A2%E5%BC%95" tabindex="-1">聚簇索引与非聚簇索引</h3><p><strong>聚簇索引（InnoDB）：</strong></p><pre><code class="language-sql">-- InnoDB表的聚簇索引（通常是主键）CREATE TABLE employees (    emp_id INT PRIMARY KEY,          -- 聚簇索引    emp_name VARCHAR(100),    department_id INT,    salary DECIMAL(10,2),        -- 数据按emp_id物理排序存储    INDEX idx_department (department_id)  -- 非聚簇索引);-- 没有主键时，InnoDB的处理CREATE TABLE logs (    id BIGINT UNSIGNED AUTO_INCREMENT,    log_message TEXT,    created_at TIMESTAMP,        -- 如果没有主键，InnoDB会：    -- 1. 找第一个UNIQUE NOT NULL列    -- 2. 
否则创建隐藏的_rowid列作为聚簇索引    UNIQUE KEY uk_id (id));</code></pre><p><strong>非聚簇索引（二级索引）结构：</strong></p><pre><code class="language-sql">-- 二级索引包含主键值CREATE TABLE orders (    order_id BIGINT PRIMARY KEY,           -- 聚簇索引键    customer_id BIGINT NOT NULL,    order_date DATE NOT NULL,    total_amount DECIMAL(12,2),        -- 二级索引存储(customer_id, order_id)    INDEX idx_customer_date (customer_id, order_date),        -- 覆盖索引示例    INDEX idx_customer_amount (customer_id, total_amount));-- 二级索引查询过程EXPLAIN SELECT * FROM orders WHERE customer_id = 1234 AND order_date = &#39;2023-01-01&#39;;-- 1. 在idx_customer_date找到(order_id)-- 2. 用order_id回表查询完整数据</code></pre><h3 id="%E5%A4%8D%E5%90%88%E7%B4%A2%E5%BC%95%E8%AE%BE%E8%AE%A1%E4%B8%8E%E6%9C%80%E5%B7%A6%E5%89%8D%E7%BC%80%E5%8E%9F%E5%88%99" tabindex="-1">复合索引设计与最左前缀原则</h3><p><strong>复合索引设计原则：</strong></p><pre><code class="language-sql">-- 用户行为日志表 - 复合索引设计CREATE TABLE user_actions (    user_id BIGINT NOT NULL,    action_type VARCHAR(50) NOT NULL,    action_time DATETIME NOT NULL,    device_type ENUM(&#39;web&#39;, &#39;ios&#39;, &#39;android&#39;),    page_url VARCHAR(500),        -- 复合索引设计    PRIMARY KEY (user_id, action_time, action_type),    INDEX idx_time_type (action_time, action_type),    INDEX idx_type_time (action_type, action_time),    INDEX idx_user_type_time (user_id, action_type, action_time));-- 最左前缀原则验证EXPLAIN SELECT * FROM user_actions WHERE user_id = 1234;  -- ✅ 使用索引EXPLAIN SELECT * FROM user_actions WHERE user_id = 1234 AND action_time &gt; &#39;2023-01-01&#39;;  -- ✅ 使用索引EXPLAIN SELECT * FROM user_actions WHERE action_time &gt; &#39;2023-01-01&#39;;  -- ❌ 无法使用主键索引EXPLAIN SELECT * FROM user_actions WHERE user_id = 1234 AND action_type = &#39;login&#39;;  -- ✅ 使用索引</code></pre><p><strong>索引选择性优化：</strong></p><pre><code class="language-sql">-- 计算索引选择性SELECT     COUNT(DISTINCT user_id) as distinct_users,    COUNT(*) as total_records,    ROUND(COUNT(DISTINCT user_id) / COUNT(*), 4) as selectivityFROM user_actions;-- 
低选择性索引示例（不推荐）CREATE TABLE low_selectivity_demo (    gender ENUM(&#39;M&#39;, &#39;F&#39;),           -- 选择性差    status TINYINT DEFAULT 1,        -- 选择性差    created_date DATE,               -- 选择性随时间变差        -- 不推荐的索引    INDEX idx_gender (gender),        -- 推荐的复合索引    INDEX idx_status_date (status, created_date),    INDEX idx_gender_date (gender, created_date));</code></pre><h3 id="%E8%A6%86%E7%9B%96%E7%B4%A2%E5%BC%95%E4%B8%8E%E7%B4%A2%E5%BC%95%E4%B8%8B%E6%8E%A8%E4%BC%98%E5%8C%96" tabindex="-1">覆盖索引与索引下推优化</h3><p><strong>覆盖索引优化：</strong></p><pre><code class="language-sql">-- 覆盖索引设计CREATE TABLE sales (    sale_id BIGINT PRIMARY KEY,    product_id BIGINT NOT NULL,    sale_date DATE NOT NULL,    quantity INT NOT NULL,    unit_price DECIMAL(10,2) NOT NULL,    customer_id BIGINT NOT NULL,        -- 覆盖索引：包含查询所需的所有列    INDEX idx_product_date (product_id, sale_date),    INDEX idx_customer_product (customer_id, product_id, quantity, unit_price),    INDEX idx_date_customer (sale_date, customer_id, product_id, quantity));-- 覆盖索引查询示例EXPLAIN SELECT product_id, SUM(quantity) as total_quantityFROM sales WHERE sale_date BETWEEN &#39;2023-01-01&#39; AND &#39;2023-01-31&#39;GROUP BY product_id;-- ✅ 使用idx_date_customer（索引中已包含quantity），不需要回表EXPLAINSELECT customer_id, product_id, quantity, unit_priceFROM sales WHERE customer_id = 1234 AND product_id IN (1, 2, 3);-- ✅ 使用idx_customer_product，不需要回表</code></pre><p><strong>索引下推（ICP）优化：</strong></p><pre><code class="language-sql">-- 索引下推示例CREATE TABLE orders_icp (    order_id BIGINT PRIMARY KEY,    customer_id BIGINT NOT NULL,    status ENUM(&#39;pending&#39;, &#39;paid&#39;, &#39;shipped&#39;) NOT NULL,    total_amount DECIMAL(12,2),    created_at DATETIME,        INDEX idx_customer_status (customer_id, status));-- 没有ICP的查询（旧版本）SELECT * FROM orders_icp WHERE customer_id = 1234 AND status = &#39;paid&#39;;-- 1. 通过customer_id找到所有记录-- 2. 回表读取完整数据-- 3. 
在Server层过滤status-- 有ICP的查询（MySQL 5.6+）SELECT * FROM orders_icp WHERE customer_id = 1234 AND status = &#39;paid&#39;;-- 1. 在存储引擎层直接过滤customer_id和status-- 2. 只回表符合条件的记录</code></pre><h3 id="%E7%B4%A2%E5%BC%95%E7%BB%B4%E6%8A%A4%E4%B8%8E%E9%87%8D%E5%BB%BA%E7%AD%96%E7%95%A5" tabindex="-1">索引维护与重建策略</h3><p><strong>索引监控与维护：</strong></p><pre><code class="language-sql">-- 索引使用情况监控SELECT     OBJECT_SCHEMA,    OBJECT_NAME,    INDEX_NAME,    COUNT_READ,    COUNT_FETCH,    COUNT_INSERT,    COUNT_UPDATE,    COUNT_DELETEFROM performance_schema.table_io_waits_summary_by_index_usageWHERE OBJECT_SCHEMA = &#39;your_database&#39;ORDER BY COUNT_READ DESC;-- 索引统计信息ANALYZE TABLE sales;  -- 更新统计信息SHOW INDEX FROM sales;-- 关注Cardinality（基数），值越接近记录数越好-- 索引碎片整理OPTIMIZE TABLE sales;  -- 重建表，整理碎片-- 在线DDL（MySQL 5.6+）ALTER TABLE sales DROP INDEX idx_old_index,ADD INDEX idx_new_index (customer_id, sale_date),ALGORITHM=INPLACE, LOCK=NONE;</code></pre><p><strong>索引设计检查清单：</strong></p><pre><code class="language-sql">-- 1. 为WHERE条件中的列创建索引-- 2. 为JOIN条件的列创建索引  -- 3. 为ORDER BY、GROUP BY的列创建索引-- 4. 考虑覆盖索引，避免回表-- 5. 使用复合索引，注意最左前缀-- 6. 避免在索引列上使用函数-- 7. 
定期监控索引使用情况-- 索引使用分析EXPLAIN FORMAT=JSONSELECT o.order_id, c.customer_name, SUM(oi.quantity) as total_itemsFROM orders oJOIN customers c ON o.customer_id = c.customer_idJOIN order_items oi ON o.order_id = oi.order_idWHERE o.created_at &gt;= &#39;2023-01-01&#39;  AND o.status = &#39;completed&#39;GROUP BY o.order_id, c.customer_nameHAVING total_items &gt; 5ORDER BY o.created_at DESC;</code></pre><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>通过本篇的深入学习，我们掌握了MySQL数据类型选择和表设计的核心知识：</p><ol><li><strong>数据类型选择</strong>：根据业务需求选择最合适的类型，平衡存储空间和查询性能</li><li><strong>规范化设计</strong>：理解三大范式，知道何时应该反范式化优化性能</li><li><strong>关系设计</strong>：掌握一对一、一对多、多对多关系的实现方式</li><li><strong>索引原理</strong>：深入理解B+Tree、聚簇索引、覆盖索引的工作原理</li><li><strong>索引优化</strong>：掌握复合索引设计、最左前缀原则、索引下推等高级技巧</li></ol><p><strong>关键实践要点：</strong></p><ul><li>字符串类型：固定长度用CHAR，变长用VARCHAR，大文本用TEXT</li><li>数值类型：根据范围选择最小合适的类型，金融计算用DECIMAL</li><li>时间类型：业务时间用DATETIME，系统时间用TIMESTAMP</li><li>索引设计：遵循最左前缀，考虑覆盖索引，监控索引使用情况</li></ul><p><strong>动手练习：</strong></p><ol><li>为你当前的项目重新设计表结构，应用学到的数据类型和索引原则</li><li>分析现有表的索引使用情况，优化低效索引</li><li>尝试使用JSON类型存储半结构化数据</li><li>设计一个符合范式要求的数据库schema，并考虑性能优化</li></ol><p>欢迎在评论区分享你的表设计经验和遇到的问题！</p>]]>
                    </description>
                    <pubDate>Sat, 03 May 2025 04:33:08 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[MySql入门：MySQL核心概念与架构解析]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2948</link>
                    <description>
                            <![CDATA[<h1 id="mysql%E6%A0%B8%E5%BF%83%E6%A6%82%E5%BF%B5%E4%B8%8E%E6%9E%B6%E6%9E%84%E8%A7%A3%E6%9E%90" tabindex="-1">MySQL核心概念与架构解析</h1><blockquote><p>在现代应用开发中，数据库是系统的核心支柱。而MySQL作为世界上最流行的开源关系型数据库，其重要性不言而喻。今天，我们将深入探讨MySQL的核心概念和架构设计，为你揭开这个强大数据库系统的神秘面纱。</p></blockquote><h2 id="1.-mysql%E6%A6%82%E8%BF%B0%E4%B8%8E%E7%89%88%E6%9C%AC%E6%BC%94%E8%BF%9B" tabindex="-1">1. MySQL概述与版本演进</h2><h3 id="%E4%BB%80%E4%B9%88%E6%98%AFmysql%EF%BC%9F%E5%85%B3%E7%B3%BB%E5%9E%8B%E6%95%B0%E6%8D%AE%E5%BA%93%E7%9A%84%E6%A0%B8%E5%BF%83%E4%BB%B7%E5%80%BC" tabindex="-1">什么是MySQL？关系型数据库的核心价值</h3><p>MySQL是一个开源的关系型数据库管理系统（RDBMS），由瑞典MySQL AB公司开发，目前属于Oracle公司。它采用客户端-服务器模型，使用结构化查询语言（SQL）进行数据管理。</p><p><strong>关系型数据库的核心价值：</strong></p><pre><code class="language-sql">-- ACID特性的具体体现START TRANSACTION;-- 原子性(Atomicity)：要么全部成功，要么全部失败UPDATE accounts SET balance = balance - 100 WHERE user_id = 1;UPDATE accounts SET balance = balance + 100 WHERE user_id = 2;-- 一致性(Consistency)：始终满足业务规则约束-- 隔离性(Isolation)：事务间互不干扰-- 持久性(Durability)：提交后数据永久保存COMMIT;</code></pre><p><strong>MySQL的关键特性：</strong></p><ul><li>开源免费（社区版）</li><li>跨平台支持</li><li>支持多种存储引擎</li><li>强大的复制功能</li><li>丰富的生态系统</li></ul><h3 id="mysql%E5%8F%91%E5%B1%95%E5%8E%86%E7%A8%8B%E4%B8%8E%E9%87%8D%E8%A6%81%E7%89%88%E6%9C%AC%E7%89%B9%E6%80%A7" tabindex="-1">MySQL发展历程与重要版本特性</h3><p><strong>版本演进时间线：</strong></p><table><thead><tr><th>版本</th><th>发布时间</th><th>重要特性</th></tr></thead><tbody><tr><td>MySQL 3.23</td><td>2001年</td><td>引入InnoDB存储引擎</td></tr><tr><td>MySQL 4.0</td><td>2003年</td><td>联合查询、重写解析器</td></tr><tr><td>MySQL 5.0</td><td>2005年</td><td>视图、存储过程、触发器</td></tr><tr><td>MySQL 5.1</td><td>2008年</td><td>分区、事件调度器</td></tr><tr><td>MySQL 5.5</td><td>2010年</td><td>InnoDB成为默认引擎</td></tr><tr><td>MySQL 5.6</td><td>2013年</td><td>全文索引、NoSQL API</td></tr><tr><td>MySQL 5.7</td><td>2015年</td><td>原生JSON支持、多源复制</td></tr><tr><td>MySQL 8.0</td><td>2018年</td><td>窗口函数、CTE、角色管理</td></tr></tbody></table><h3 
id="mysql-5.7-vs-8.0-%E6%A0%B8%E5%BF%83%E5%B7%AE%E5%BC%82%E5%AF%B9%E6%AF%94" tabindex="-1">MySQL 5.7 vs 8.0 核心差异对比</h3><pre><code class="language-sql">-- MySQL 5.7 特性示例SELECT * FROM users WHERE JSON_EXTRACT(profile, &#39;$.age&#39;) &gt; 25;-- MySQL 8.0 新特性示例-- 窗口函数SELECT     name,     salary,    AVG(salary) OVER (PARTITION BY department_id) as avg_dept_salaryFROM employees;-- 公用表表达式(CTE)WITH department_stats AS (    SELECT         department_id,        AVG(salary) as avg_salary    FROM employees     GROUP BY department_id)SELECT * FROM department_stats WHERE avg_salary &gt; 5000;-- 角色管理CREATE ROLE read_only;GRANT SELECT ON company.* TO read_only;GRANT read_only TO &#39;report_user&#39;@&#39;%&#39;;</code></pre><p><strong>性能对比：</strong></p><ul><li>MySQL 8.0 在读写并发性能上提升约30%</li><li>更好的JSON处理性能</li><li>改进的优化器，更准确的成本估算</li></ul><h3 id="mysql%E5%9C%A8%E7%8E%B0%E4%BB%A3%E5%BA%94%E7%94%A8%E6%9E%B6%E6%9E%84%E4%B8%AD%E7%9A%84%E5%AE%9A%E4%BD%8D" tabindex="-1">MySQL在现代应用架构中的定位</h3><p>在现代微服务架构中，MySQL通常作为核心的持久化存储层，与缓存（如Redis）、消息队列等组件协同工作，承担事务性业务数据的可靠存储。</p><h2 id="2.-mysql%E4%BD%93%E7%B3%BB%E6%9E%B6%E6%9E%84%E6%B7%B1%E5%BA%A6%E8%A7%A3%E6%9E%90" tabindex="-1">2. 
MySQL体系架构深度解析</h2><h3 id="%E6%95%B4%E4%BD%93%E6%9E%B6%E6%9E%84%E6%A6%82%E8%A7%88" tabindex="-1">整体架构概览</h3><p>MySQL采用经典的客户端-服务器架构，其核心组件包括：</p><pre><code class="language-">MySQL Architecture:├── 连接层 (Connection Layer)├── SQL层 (SQL Layer)│   ├── 连接池│   ├── 查询解析器│   ├── 查询优化器│   ├── 查询执行器│   └── 缓存└── 存储引擎层 (Storage Engine Layer)    ├── InnoDB (默认)    ├── MyISAM    ├── Memory    └── 其他引擎</code></pre><h3 id="%E8%BF%9E%E6%8E%A5%E5%B1%82%EF%BC%9A%E8%BF%9E%E6%8E%A5%E6%B1%A0%E3%80%81%E8%BA%AB%E4%BB%BD%E9%AA%8C%E8%AF%81%E3%80%81%E7%BA%BF%E7%A8%8B%E7%AE%A1%E7%90%86" tabindex="-1">连接层：连接池、身份验证、线程管理</h3><p><strong>连接处理机制：</strong></p><pre><code class="language-csharp">public class MySQLConnectionPool{    // MySQL使用线程池处理连接    private const int MAX_CONNECTIONS = 151; // 默认最大连接数        public void HandleConnection(ClientConnection client)    {        // 1. 连接验证        if (!Authenticate(client.Username, client.Password))            throw new AuthenticationException();                    // 2. 权限检查        if (!CheckPrivileges(client.Username, client.Database))            throw new AccessDeniedException();                    // 3. 创建会话        var session = CreateSession(client);                // 4. 
线程分配（一对一或线程池）        AssignThreadToSession(session);    }}</code></pre><p><strong>连接状态监控：</strong></p><pre><code class="language-sql">-- 查看当前连接信息SHOW PROCESSLIST;-- 查看连接统计SHOW STATUS LIKE &#39;Threads_%&#39;;-- 输出示例：-- Threads_cached: 10      -- 缓存中的线程数-- Threads_connected: 25   -- 当前连接数-- Threads_created: 1000   -- 已创建线程总数-- Threads_running: 5      -- 活跃线程数</code></pre><h3 id="sql%E5%B1%82%EF%BC%9A%E6%9F%A5%E8%AF%A2%E8%A7%A3%E6%9E%90%E3%80%81%E4%BC%98%E5%8C%96%E5%99%A8%E3%80%81%E6%89%A7%E8%A1%8C%E5%99%A8%E5%B7%A5%E4%BD%9C%E5%8E%9F%E7%90%86" tabindex="-1">SQL层：查询解析、优化器、执行器工作原理</h3><p><strong>SQL查询处理流程：</strong></p><pre><code class="language-sql">-- 示例查询SELECT u.name, COUNT(o.id) as order_countFROM users uJOIN orders o ON u.id = o.user_idWHERE u.created_at &gt; &#39;2023-01-01&#39;GROUP BY u.idHAVING order_count &gt; 5ORDER BY order_count DESCLIMIT 10;</code></pre><p><strong>处理步骤详解：</strong></p><ol><li><p><strong>查询解析（Parser）</strong></p><ul><li>语法分析：检查SQL语法正确性</li><li>词法分析：将SQL分解为标记（tokens）</li><li>生成解析树</li></ul></li><li><p><strong>查询优化（Optimizer）</strong></p><pre><code class="language-sql">-- 使用EXPLAIN查看优化器决策EXPLAIN FORMAT=JSON SELECT u.name, COUNT(o.id) as order_countFROM users uJOIN orders o ON u.id = o.user_idWHERE u.created_at &gt; &#39;2023-01-01&#39;GROUP BY u.idHAVING order_count &gt; 5;</code></pre></li><li><p><strong>查询执行（Executor）</strong></p><ul><li>根据执行计划访问存储引擎</li><li>应用WHERE条件过滤</li><li>执行JOIN操作</li><li>进行GROUP BY和聚合</li><li>应用HAVING条件</li><li>排序和限制结果</li></ul></li></ol><h3 id="%E5%AD%98%E5%82%A8%E5%BC%95%E6%93%8E%E5%B1%82%EF%BC%9A%E6%8F%92%E4%BB%B6%E5%BC%8F%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1" tabindex="-1">存储引擎层：插件式架构设计</h3><p>MySQL的存储引擎采用插件式架构，允许为不同表选择不同存储引擎：</p><pre><code class="language-sql">-- 创建表时指定存储引擎CREATE TABLE users (    id INT PRIMARY KEY AUTO_INCREMENT,    name VARCHAR(100),    email VARCHAR(255),    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP) ENGINE=InnoDB;-- 查看表的存储引擎SHOW TABLE STATUS LIKE 
&#39;users&#39;;</code></pre><p><strong>存储引擎对比：</strong></p><table><thead><tr><th>特性</th><th>InnoDB</th><th>MyISAM</th><th>Memory</th></tr></thead><tbody><tr><td>事务支持</td><td>✅</td><td>❌</td><td>❌</td></tr><tr><td>行级锁</td><td>✅</td><td>❌</td><td>❌（仅表级锁）</td></tr><tr><td>外键支持</td><td>✅</td><td>❌</td><td>❌</td></tr><tr><td>崩溃恢复</td><td>✅</td><td>⚠️</td><td>❌</td></tr><tr><td>全文索引</td><td>✅ (5.6+)</td><td>✅</td><td>❌</td></tr><tr><td>适用场景</td><td>事务型应用</td><td>读密集型</td><td>临时数据</td></tr></tbody></table><h3 id="innodb%E5%AD%98%E5%82%A8%E5%BC%95%E6%93%8E%E6%9E%B6%E6%9E%84%E8%AF%A6%E8%A7%A3" tabindex="-1">InnoDB存储引擎架构详解</h3><p>InnoDB是MySQL的默认存储引擎，其架构设计非常精妙：</p><pre><code class="language-">InnoDB Architecture:├── 内存结构 (In-Memory Structures)│   ├── Buffer Pool (缓冲池)│   ├── Change Buffer (变更缓冲)│   ├── Adaptive Hash Index (自适应哈希索引)│   ├── Log Buffer (日志缓冲)│   └── Additional Memory Pool└── 磁盘结构 (On-Disk Structures)    ├── 表空间 (Tablespaces)    │   ├── 系统表空间    │   ├── 独立表空间    │   ├── 通用表空间    │   └── 临时表空间    ├── 重做日志 (Redo Logs)    ├── 撤销日志 (Undo Logs)    └── 二进制日志 (Binary Logs，属于Server层而非InnoDB组件)</code></pre><p><strong>Buffer Pool工作机制：</strong></p><pre><code class="language-sql">-- 查看Buffer Pool状态SHOW ENGINE INNODB STATUS\G-- Buffer Pool相关配置SELECT @@innodb_buffer_pool_size;      -- 缓冲池大小SELECT @@innodb_buffer_pool_instances; -- 缓冲池实例数-- 监控Buffer Pool命中率（1 - 物理读/逻辑读请求）SELECT     (1 - (Variable_value / (SELECT Variable_value                            FROM performance_schema.global_status                            WHERE Variable_name = &#39;Innodb_buffer_pool_read_requests&#39;))) * 100 as hit_rateFROM performance_schema.global_status WHERE Variable_name = &#39;Innodb_buffer_pool_reads&#39;;</code></pre><h3 id="%E5%86%85%E5%AD%98%E7%BB%93%E6%9E%84%E4%B8%8E%E7%A3%81%E7%9B%98%E5%AD%98%E5%82%A8%E6%9C%BA%E5%88%B6" tabindex="-1">内存结构与磁盘存储机制</h3><p><strong>内存管理：</strong></p><pre><code class="language-csharp">public class InnoDBMemoryManager{    // Buffer Pool - 数据页缓存    private Dictionary&lt;PageId, DataPage&gt; bufferPool;        // 
Change Buffer - 非唯一索引变更缓存    private Dictionary&lt;IndexId, IndexChange&gt; changeBuffer;        // Log Buffer - 重做日志缓冲    private CircularBuffer&lt;RedoLogRecord&gt; logBuffer;        public DataPage ReadPage(PageId pageId)    {        // 1. 检查Buffer Pool        if (bufferPool.ContainsKey(pageId))            return bufferPool[pageId];                    // 2. 从磁盘读取        var page = diskStorage.ReadPage(pageId);                // 3. 使用LRU算法管理缓存        if (bufferPool.Count &gt;= maxSize)            EvictLeastRecentlyUsedPage();                    bufferPool[pageId] = page;        return page;    }}</code></pre><p><strong>磁盘存储结构：</strong></p><pre><code class="language-sql">-- 表空间文件结构-- 系统表空间: ibdata1-- 独立表空间: db/table.ibd-- 查看表空间信息SELECT     table_name,    engine,    table_rows,    avg_row_length,    data_length,    index_length,    data_freeFROM information_schema.tables WHERE table_schema = &#39;your_database&#39;;</code></pre><h2 id="3.-%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2%E4%B8%8E%E9%85%8D%E7%BD%AE%E4%BC%98%E5%8C%96" tabindex="-1">3. 
安装部署与配置优化</h2><h3 id="%E5%A4%9A%E5%B9%B3%E5%8F%B0%E5%AE%89%E8%A3%85%E6%8C%87%E5%8D%97" tabindex="-1">多平台安装指南</h3><p><strong>Linux安装（Ubuntu为例）：</strong></p><pre><code class="language-bash"># 更新包管理器
sudo apt update

# 安装MySQL服务器
sudo apt install mysql-server-8.0

# 安全配置
sudo mysql_secure_installation

# 启动服务
sudo systemctl start mysql
sudo systemctl enable mysql

# 验证安装
mysql --version</code></pre><p><strong>Docker部署：</strong></p><pre><code class="language-yaml"># docker-compose.yml
version: &#39;3.8&#39;
services:
  mysql:
    image: mysql:8.0
    container_name: mysql-server
    environment:
      MYSQL_ROOT_PASSWORD: your_secure_password
      MYSQL_DATABASE: app_db
      MYSQL_USER: app_user
      MYSQL_PASSWORD: app_password
    ports:
      - &quot;3306:3306&quot;
    volumes:
      - mysql_data:/var/lib/mysql
      - ./conf.d:/etc/mysql/conf.d
    command:
      - --default-authentication-plugin=mysql_native_password
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_unicode_ci

volumes:
  mysql_data:</code></pre><h3 id="%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E8%AF%A6%E8%A7%A3%EF%BC%88my.cnf%2Fmy.ini%EF%BC%89" tabindex="-1">配置文件详解（my.cnf/my.ini）</h3><p><strong>生产环境配置示例：</strong></p><pre><code class="language-ini">[mysqld]
# 基础配置
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
port=3306

# 内存配置
innodb_buffer_pool_size=16G           # 建议为系统内存的70-80%
innodb_log_file_size=2G               # 重做日志文件大小
innodb_log_buffer_size=256M           # 日志缓冲区大小

# 连接配置
max_connections=1000                  # 最大连接数
thread_cache_size=100                 # 线程缓存大小
table_open_cache=4000                 # 表缓存大小

# InnoDB配置
innodb_file_per_table=ON              # 每个表独立表空间
innodb_flush_log_at_trx_commit=1      # 事务提交时刷盘
innodb_flush_method=O_DIRECT          # I/O方式
innodb_buffer_pool_instances=8        # 缓冲池实例数

# 复制配置（如果使用主从）
server_id=1
log_bin=mysql-bin
binlog_format=ROW

# 性能配置
# query_cache_type=0                  # 查询缓存已在8.0移除，此参数仅适用于5.7及更早版本，8.0下配置会导致启动失败
sort_buffer_size=2M
read_buffer_size=2M
read_rnd_buffer_size=2M

[mysql]
default-character-set=utf8mb4

[client]
default-character-set=utf8mb4</code></pre><h3 id="%E7%B3%BB%E7%BB%9F%E5%8F%82%E6%95%B0%E8%B0%83%E4%BC%98%E5%AE%9E%E6%88%98" tabindex="-1">系统参数调优实战</h3><p><strong>性能诊断查询：</strong></p><pre><code class="language-sql">-- 查看关键性能指标
SHOW STATUS WHERE &#96;variable_name&#96; IN (
    &#39;Questions&#39;, &#39;Com_select&#39;, &#39;Com_insert&#39;, &#39;Com_update&#39;, &#39;Com_delete&#39;,
    &#39;Innodb_buffer_pool_reads&#39;, &#39;Innodb_buffer_pool_read_requests&#39;,
    &#39;Threads_connected&#39;, &#39;Threads_running&#39;,
    &#39;Key_reads&#39;, &#39;Key_read_requests&#39;
);

-- 计算缓冲池命中率（8.0中状态变量位于performance_schema.global_status）
SELECT 
    ROUND(1 - (variable_value / (
        SELECT variable_value 
        FROM performance_schema.global_status 
        WHERE variable_name = &#39;Innodb_buffer_pool_read_requests&#39;
    )), 4) * 100 as buffer_pool_hit_rate
FROM performance_schema.global_status 
WHERE variable_name = &#39;Innodb_buffer_pool_reads&#39;;

-- 检查慢查询
SHOW VARIABLES LIKE &#39;slow_query_log%&#39;;
SHOW VARIABLES LIKE &#39;long_query_time&#39;;</code></pre><h3 id="%E5%AE%89%E5%85%A8%E9%85%8D%E7%BD%AE%E4%B8%8E%E6%9D%83%E9%99%90%E7%AE%A1%E7%90%86" tabindex="-1">安全配置与权限管理</h3><p><strong>基础安全配置：</strong></p><pre><code class="language-sql">-- 创建应用用户（遵循最小权限原则）
CREATE USER &#39;app_user&#39;@&#39;192.168.1.%&#39; IDENTIFIED BY &#39;secure_password_123&#39;;

-- 授予精确权限
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO &#39;app_user&#39;@&#39;192.168.1.%&#39;;

-- 创建只读用户用于报表
CREATE USER &#39;report_user&#39;@&#39;%&#39; IDENTIFIED BY &#39;readonly_password&#39;;
GRANT SELECT ON app_db.* TO &#39;report_user&#39;@&#39;%&#39;;

-- 查看用户权限
SHOW GRANTS FOR &#39;app_user&#39;@&#39;192.168.1.%&#39;;

-- 密码策略配置
SET GLOBAL validate_password.policy = MEDIUM;
SET GLOBAL validate_password.length = 12;</code></pre><p><strong>网络安全配置：</strong></p><pre><code class="language-sql">-- 限制连接来源
RENAME USER &#39;root&#39;@&#39;%&#39; TO
&#39;root&#39;@&#39;localhost&#39;;

-- 删除测试数据库和匿名用户
DROP DATABASE IF EXISTS test;
DELETE FROM mysql.user WHERE User = &#39;&#39;;

-- 刷新权限
FLUSH PRIVILEGES;</code></pre><h3 id="%E7%9B%91%E6%8E%A7%E5%B7%A5%E5%85%B7%E4%B8%8E%E6%80%A7%E8%83%BD%E5%9F%BA%E7%BA%BF%E5%BB%BA%E7%AB%8B" tabindex="-1">监控工具与性能基线建立</h3><p><strong>系统监控查询：</strong></p><pre><code class="language-sql">-- 性能模式监控（MySQL 5.6+）
SELECT * FROM performance_schema.events_statements_summary_by_digest 
ORDER BY sum_timer_wait DESC LIMIT 10;

-- 查看锁信息（MySQL 8.0；5.7及更早版本使用information_schema.INNODB_LOCKS/INNODB_LOCK_WAITS）
SELECT * FROM performance_schema.data_locks;
SELECT * FROM performance_schema.data_lock_waits;

-- 表统计信息
SELECT 
    table_name,
    table_rows,
    data_length,
    index_length,
    ROUND((data_length + index_length) / 1024 / 1024, 2) as total_size_mb
FROM information_schema.tables 
WHERE table_schema = &#39;your_database&#39;
ORDER BY total_size_mb DESC;</code></pre><p><strong>建立性能基线：</strong></p><pre><code class="language-sql">-- 创建性能基线表
CREATE TABLE performance_baseline (
    id INT AUTO_INCREMENT PRIMARY KEY,
    metric_name VARCHAR(100),
    metric_value DECIMAL(20,4),
    collected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    notes TEXT
);

-- 收集基线数据（8.0中使用performance_schema.global_status）
INSERT INTO performance_baseline (metric_name, metric_value)
SELECT 
    &#39;qps&#39; as metric_name,
    VARIABLE_VALUE as metric_value
FROM performance_schema.global_status 
WHERE VARIABLE_NAME = &#39;Queries&#39;;

-- 定期收集其他关键指标...</code></pre><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>通过本篇的学习，我们深入了解了MySQL的核心概念和架构设计：</p><ol><li><strong>MySQL的演进历程</strong>：从简单的数据库系统发展到功能丰富的企业级解决方案</li><li><strong>分层架构设计</strong>：连接层、SQL层、存储引擎层的明确分工</li><li><strong>InnoDB的核心地位</strong>：作为默认存储引擎的先进特性</li><li><strong>配置优化原则</strong>：根据硬件和工作负载进行针对性调优</li><li><strong>安全最佳实践</strong>：权限最小化和网络安全配置</li></ol><p><strong>关键收获：</strong></p><ul><li>理解MySQL的架构有助于更好地进行性能调优和故障排查</li><li>合理的配置可以显著提升数据库性能和稳定性</li><li>安全配置不是可选项，而是生产部署的必备条件</li></ul><p>在接下来的篇章中，我们将深入探讨MySQL的数据类型、表设计、索引优化等高级主题，帮助你构建高性能的数据库应用。</p><p><strong>思考与实践：</strong></p><ol><li>在你的环境中安装MySQL
8.0，并尝试不同的配置参数</li><li>使用性能模式监控数据库的运行状态</li><li>设计一个符合最小权限原则的用户权限体系</li><li>建立关键性能指标的监控基线</li></ol><p>欢迎在评论区分享你的MySQL配置经验和遇到的问题！</p>]]>
                    </description>
                    <pubDate>Sat, 26 Apr 2025 09:14:44 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Redis配置文件及常用命令详解]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2955</link>
                    <description>
<![CDATA[<h1 id="redis%E5%AE%8C%E5%85%A8%E6%8C%87%E5%8D%97%EF%BC%9A%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E8%AF%A6%E8%A7%A3%E4%B8%8E%E5%B8%B8%E7%94%A8%E5%91%BD%E4%BB%A4%E5%A4%A7%E5%85%A8" tabindex="-1">Redis完全指南：配置文件详解与常用命令大全</h1><blockquote><p>本文深入解析Redis核心配置，并提供全面的命令参考手册，助你彻底掌握Redis使用技巧。</p></blockquote><h2 id="%F0%9F%93%96-%E6%A6%82%E8%BF%B0" tabindex="-1">📖 概述</h2><p>Redis作为高性能的键值数据库，在缓存、消息队列、会话存储等场景中广泛应用。掌握其配置文件和常用命令是每个开发者必备的技能。</p><h2 id="%E2%9A%99%EF%B8%8F-redis%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E6%B7%B1%E5%BA%A6%E8%A7%A3%E6%9E%90" tabindex="-1">⚙️ Redis配置文件深度解析</h2><h3 id="%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E4%BD%8D%E7%BD%AE%E4%B8%8E%E5%8A%A0%E8%BD%BD" tabindex="-1">配置文件位置与加载</h3><pre><code class="language-bash"># 默认配置文件路径
/etc/redis/redis.conf

# 指定配置文件启动
redis-server /path/to/your/redis.conf

# 检查当前配置（引号防止shell通配符展开）
redis-cli config get &#39;*&#39;</code></pre><h3 id="%E6%A0%B8%E5%BF%83%E9%85%8D%E7%BD%AE%E9%A1%B9%E8%AF%A6%E8%A7%A3" tabindex="-1">核心配置项详解</h3><h4 id="%F0%9F%94%92-%E7%BD%91%E7%BB%9C%E4%B8%8E%E5%AE%89%E5%85%A8%E9%85%8D%E7%BD%AE" tabindex="-1">🔒 网络与安全配置</h4><pre><code class="language-bash"># 绑定IP地址（生产环境建议指定）
bind 127.0.0.1 192.168.1.100

# 端口配置
port 6379

# 保护模式（外网访问需关闭）
protected-mode no

# 连接密码
requirepass &quot;your_strong_password_here&quot;

# 最大连接数
maxclients 10000</code></pre><p><strong>💡 生产环境建议</strong>：</p><ul><li>务必设置强密码</li><li>限制绑定IP，避免暴露到公网</li><li>适当调整最大连接数</li></ul><h4 id="%F0%9F%92%BE-%E6%8C%81%E4%B9%85%E5%8C%96%E9%85%8D%E7%BD%AE" tabindex="-1">💾 持久化配置</h4><p><strong>RDB持久化配置</strong>：</p><pre><code class="language-bash"># 自动保存条件
save 900 1      # 15分钟内至少1个变更
save 300 10     # 5分钟内至少10个变更
save 60 10000   # 1分钟内至少10000个变更

# RDB文件配置
dbfilename dump.rdb
dir /var/lib/redis

# 压缩配置
rdbcompression yes
rdbchecksum yes</code></pre><p><strong>AOF持久化配置</strong>：</p><pre><code class="language-bash"># 开启AOF
appendonly yes
appendfilename &quot;appendonly.aof&quot;

# 同步策略
appendfsync everysec    # 推荐配置

# AOF重写配置
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb</code></pre><p><strong>🎯 持久化策略选择</strong>：</p><ul><li><strong>缓存场景</strong>：仅使用RDB</li><li><strong>数据安全要求高</strong>：RDB+AOF</li><li><strong>性能优先</strong>：调整AOF同步策略为everysec</li></ul><h4 id="%F0%9F%A7%A0-%E5%86%85%E5%AD%98%E7%AE%A1%E7%90%86%E9%85%8D%E7%BD%AE" tabindex="-1">🧠 内存管理配置</h4><pre><code class="language-bash"># 内存限制
maxmemory 2gb

# 内存淘汰策略
maxmemory-policy volatile-lru

# 淘汰策略说明：
# volatile-lru    -&gt; 从过期键中淘汰最近最少使用
# allkeys-lru     -&gt; 从所有键中淘汰最近最少使用
# volatile-ttl    -&gt; 从过期键中淘汰存活时间最短
# volatile-random -&gt; 从过期键中随机淘汰
# noeviction      -&gt; 不淘汰，返回错误</code></pre><h3 id="%E6%80%A7%E8%83%BD%E4%BC%98%E5%8C%96%E9%85%8D%E7%BD%AE" tabindex="-1">性能优化配置</h3><pre><code class="language-bash"># 内核参数优化（sysctl系统参数，非redis.conf配置项）
vm.overcommit_memory = 1

# 禁用透明大页
echo never &gt; /sys/kernel/mm/transparent_hugepage/enabled

# 网络优化（redis.conf配置项）
tcp-backlog 511
timeout 0
tcp-keepalive 300</code></pre><h2 id="%E2%8C%A8%EF%B8%8F-redis%E5%B8%B8%E7%94%A8%E5%91%BD%E4%BB%A4%E5%A4%A7%E5%85%A8" tabindex="-1">⌨️ Redis常用命令大全</h2><h3 id="1.-%F0%9F%94%91-%E9%94%AE(key)%E6%93%8D%E4%BD%9C%E5%91%BD%E4%BB%A4" tabindex="-1">1. 🔑 键(Key)操作命令</h3><h4 id="%E5%9F%BA%E7%A1%80%E6%93%8D%E4%BD%9C" tabindex="-1">基础操作</h4><pre><code class="language-bash"># 设置键值（带过期时间）
SET user:1001 &quot;John Doe&quot; EX 3600

# 批量操作
MSET user:1001 &quot;John&quot; user:1002 &quot;Jane&quot; user:1003 &quot;Bob&quot;

# 获取值
GET user:1001

# 删除键
DEL user:1001 user:1002</code></pre><h4 id="%E9%AB%98%E7%BA%A7%E7%89%B9%E6%80%A7" tabindex="-1">高级特性</h4><pre><code class="language-bash"># 设置带过期时间的键（原子操作）
SETEX session:token 1800 &quot;encrypted_data&quot;

# 仅当键不存在时设置（分布式锁基础）
SETNX lock:resource_1 &quot;owner_id&quot;

# 获取并设置（原子操作）
GETSET counter:clicks &quot;100&quot;</code></pre><h3 id="2.-%F0%9F%93%9D-%E5%AD%97%E7%AC%A6%E4%B8%B2(string)%E6%93%8D%E4%BD%9C" tabindex="-1">2.
📝 字符串(String)操作</h3><pre><code class="language-bash"># 数值操作
INCR article:1001:views    # 阅读量+1
INCRBY user:1001:points 10 # 积分+10
DECR inventory:item_001    # 库存-1

# 字符串操作
APPEND user:1001:bio &quot; Additional info&quot;
STRLEN user:1001:name      # 字符串长度
GETRANGE user:1001:bio 0 4 # 获取子串</code></pre><h3 id="3.-%F0%9F%97%82%EF%B8%8F-%E5%93%88%E5%B8%8C(hash)%E6%93%8D%E4%BD%9C" tabindex="-1">3. 🗂️ 哈希(Hash)操作</h3><p><strong>用户信息存储示例</strong>：</p><pre><code class="language-bash"># 设置用户信息
HSET user:1001 name &quot;John&quot; age 30 email &quot;john@example.com&quot;

# 批量设置
HMSET product:1001 name &quot;Laptop&quot; price 999.99 stock 50 category &quot;Electronics&quot;

# 获取信息
HGET user:1001 name
HMGET user:1001 name age email
HGETALL user:1001

# 数值操作
HINCRBY user:1001:stats login_count 1
HINCRBYFLOAT product:1001 price -50.5</code></pre><h3 id="4.-%F0%9F%93%8B-%E5%88%97%E8%A1%A8(list)%E6%93%8D%E4%BD%9C" tabindex="-1">4. 📋 列表(List)操作</h3><p><strong>消息队列实现</strong>：</p><pre><code class="language-bash"># 生产者：推送消息
LPUSH message:queue &quot;task_1&quot;
LPUSH message:queue &quot;task_2&quot;

# 消费者：获取消息
RPOP message:queue

# 阻塞式获取（推荐）
BRPOP message:queue 30

# 查看队列
LRANGE message:queue 0 -1
LLEN message:queue</code></pre><h3 id="5.-%F0%9F%94%84-%E9%9B%86%E5%90%88(set)%E6%93%8D%E4%BD%9C" tabindex="-1">5. 🔄 集合(Set)操作</h3><p><strong>标签系统实现</strong>：</p><pre><code class="language-bash"># 添加标签
SADD article:1001:tags &quot;tech&quot; &quot;programming&quot; &quot;redis&quot;
SADD user:1001:interests &quot;coding&quot; &quot;gaming&quot;

# 查找共同兴趣
SINTER user:1001:interests user:1002:interests

# 推荐相关文章
SUNION article:1001:tags article:1002:tags

# 随机推荐
SRANDMEMBER article:1001:tags 3</code></pre><h3 id="6.-%F0%9F%93%8A-%E6%9C%89%E5%BA%8F%E9%9B%86%E5%90%88(sorted-set)%E6%93%8D%E4%BD%9C" tabindex="-1">6. 📊 有序集合(Sorted Set)操作</h3><p><strong>排行榜实现</strong>：</p><pre><code class="language-bash"># 添加分数
ZADD leaderboard 1500 &quot;player_1&quot;
ZADD leaderboard 3200 &quot;player_2&quot; 2800 &quot;player_3&quot;

# 获取排名
ZREVRANGE leaderboard 0 9 WITHSCORES  # 前10名
ZRANK leaderboard &quot;player_1&quot;          # 升序排名
ZREVRANK leaderboard &quot;player_1&quot;       # 降序排名

# 范围查询
ZRANGEBYSCORE leaderboard 2000 3000 WITHSCORES</code></pre><h2 id="%F0%9F%9B%A0%EF%B8%8F-%E5%AE%9E%E6%88%98%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF" tabindex="-1">🛠️ 实战应用场景</h2><h3 id="%E7%BC%93%E5%AD%98%E7%AD%96%E7%95%A5%E5%AE%9E%E7%8E%B0" tabindex="-1">缓存策略实现</h3><pre><code class="language-bash"># 缓存查询结果
SETEX cache:user:1001:profile 300 &quot;{user_data}&quot;

# 缓存穿透防护（SETNX不支持EX选项，应使用SET的NX/EX组合）
SET cache_mutex:user:9999 1 EX 5 NX</code></pre><h3 id="%E5%88%86%E5%B8%83%E5%BC%8F%E4%BC%9A%E8%AF%9D" tabindex="-1">分布式会话</h3><pre><code class="language-bash"># 存储会话
HSET session:abc123 user_id 1001 last_active 1635789000
EXPIRE session:abc123 1800

# 更新活跃时间
EXPIRE session:abc123 1800</code></pre><h3 id="%E9%99%90%E6%B5%81%E5%99%A8%E5%AE%9E%E7%8E%B0" tabindex="-1">限流器实现</h3><pre><code class="language-bash"># 简单限流
INCR rate_limit:api:1001
EXPIRE rate_limit:api:1001 60

# 复杂限流（使用Lua脚本）
EVAL &quot;local current = redis.call(&#39;incr&#39;, KEYS[1]) if current == 1 then redis.call(&#39;expire&#39;, KEYS[1], ARGV[1]) end return current&quot; 1 rate_limit:complex 60</code></pre><h2 id="%F0%9F%93%88-%E7%9B%91%E6%8E%A7%E4%B8%8E%E7%BB%B4%E6%8A%A4%E5%91%BD%E4%BB%A4" tabindex="-1">📈 监控与维护命令</h2><h3 id="%E7%B3%BB%E7%BB%9F%E7%8A%B6%E6%80%81%E6%A3%80%E6%9F%A5" tabindex="-1">系统状态检查</h3><pre><code class="language-bash"># 基础信息
redis-cli info

# 内存分析
redis-cli info memory

# 持久化状态
redis-cli info persistence

# 查看慢查询
redis-cli slowlog get 10</code></pre><h3 id="%E6%80%A7%E8%83%BD%E7%9B%91%E6%8E%A7" tabindex="-1">性能监控</h3><pre><code class="language-bash"># 实时监控
redis-cli monitor

# 客户端连接管理
redis-cli client list
redis-cli client kill 127.0.0.1:53422

# 内存分析
redis-cli --bigkeys
redis-cli --memkeys</code></pre><h3 id="%E5%A4%87%E4%BB%BD%E4%B8%8E%E6%81%A2%E5%A4%8D" tabindex="-1">备份与恢复</h3><pre><code class="language-bash"># 手动RDB备份
redis-cli bgsave

# AOF重写
redis-cli bgrewriteaof

# 数据迁移
redis-cli --rdb dump.rdb</code></pre><h2 id="%F0%9F%9A%80-%E6%80%A7%E8%83%BD%E4%BC%98%E5%8C%96%E6%8A%80%E5%B7%A7" tabindex="-1">🚀 性能优化技巧</h2><h3 id="1.-%E8%BF%9E%E6%8E%A5%E6%B1%A0%E9%85%8D%E7%BD%AE" tabindex="-1">1. 连接池配置</h3><pre><code class="language-python"># Python示例
import redis

pool = redis.ConnectionPool(
    max_connections=50,
    host=&#39;localhost&#39;,
    port=6379,
    decode_responses=True
)
r = redis.Redis(connection_pool=pool)</code></pre><h3 id="2.-%E7%AE%A1%E9%81%93(pipeline)%E4%BC%98%E5%8C%96" tabindex="-1">2. 管道(Pipeline)优化</h3><pre><code class="language-python"># 批量操作，减少网络往返
pipe = r.pipeline()
for user_id in user_ids:
    pipe.hgetall(f&quot;user:{user_id}&quot;)
results = pipe.execute()</code></pre><h3 id="3.-lua%E8%84%9A%E6%9C%AC%E4%BD%BF%E7%94%A8" tabindex="-1">3. Lua脚本使用</h3><pre><code class="language-bash"># 原子性操作示例
EVAL &quot;local current = redis.call(&#39;get&#39;, KEYS[1]) if current then return redis.call(&#39;incr&#39;, KEYS[1]) else return nil end&quot; 1 counter:test</code></pre><h2 id="%E2%9A%A0%EF%B8%8F-%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98%E6%8E%92%E6%9F%A5" tabindex="-1">⚠️ 常见问题排查</h2><h3 id="%E5%86%85%E5%AD%98%E9%97%AE%E9%A2%98" tabindex="-1">内存问题</h3><pre><code class="language-bash"># 查看内存使用详情
redis-cli info memory | grep used_memory_human

# 查找大Key
redis-cli --bigkeys

# 内存碎片率
redis-cli info memory | grep mem_fragmentation_ratio</code></pre><h3 id="%E8%BF%9E%E6%8E%A5%E9%97%AE%E9%A2%98" tabindex="-1">连接问题</h3><pre><code class="language-bash"># 查看连接数
redis-cli info clients

# 客户端列表
redis-cli client list

# 网络统计
redis-cli info stats | grep -E &quot;(total_connections_received|rejected_connections)&quot;</code></pre><h2 id="%F0%9F%93%9A-%E6%80%BB%E7%BB%93" tabindex="-1">📚 总结</h2><p>通过本文的学习，你应该已经掌握了：</p><ul><li>✅ Redis配置文件的各项参数含义及调优方法</li><li>✅ 各类数据结构的适用场景及操作命令</li><li>✅ 常见业务场景的Redis实现方案</li><li>✅ 性能监控与问题排查技巧</li></ul><p>Redis的强大之处在于其丰富的数据结构和原子操作，合理运用可以极大提升系统性能。建议在实际项目中多实践，逐步深入理解各个特性的使用场景。</p><hr /><p>欢迎在评论区留言交流，如果你觉得这篇文章有帮助，请点赞收藏支持！</p>]]>
                    </description>
                    <pubDate>Mon, 21 Apr 2025 05:47:15 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Redis入门：总结与展望]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2947</link>
                    <description>
<![CDATA[<h1 id="redis%E6%80%BB%E7%BB%93%E4%B8%8E%E5%B1%95%E6%9C%9B%EF%BC%9A%E4%BB%8E%E5%85%A5%E9%97%A8%E5%88%B0%E7%94%9F%E4%BA%A7%E5%AE%9E%E8%B7%B5%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8C%87%E5%8D%97" tabindex="-1">Redis总结与展望：从入门到生产实践的完整指南</h1><blockquote><p>经过前面五篇深入的学习，我们已经完成了从Redis小白到生产级应用开发者的蜕变。在这最后一篇中，让我们回顾整个学习旅程，总结关键知识点，并展望Redis未来的发展方向和生态体系。</p></blockquote><h2 id="%E4%B8%80%E3%80%81%E7%B3%BB%E5%88%97%E5%9B%9E%E9%A1%BE%EF%BC%9A%E6%88%91%E4%BB%AC%E7%9A%84redis%E5%AD%A6%E4%B9%A0%E4%B9%8B%E6%97%85" tabindex="-1">一、系列回顾：我们的Redis学习之旅</h2><p>让我们简要回顾这个系列涵盖的核心内容：</p><h3 id="%E7%AC%AC%E4%B8%80%E7%AF%87%EF%BC%9Aredis%E6%A0%B8%E5%BF%83%E6%A6%82%E5%BF%B5%E4%B8%8E%E5%BF%AB%E9%80%9F%E5%85%A5%E9%97%A8" tabindex="-1"><strong>第一篇：Redis核心概念与快速入门</strong></h3><ul><li>理解了Redis为什么快（内存存储、单线程模型、I/O多路复用）</li><li>学会了使用Docker快速搭建Redis环境</li><li>掌握了基本的键值操作和通用命令</li><li>认识了Redis的典型应用场景</li></ul><h3 id="%E7%AC%AC%E4%BA%8C%E7%AF%87%EF%BC%9A%E7%8E%A9%E8%BD%ACredis%E4%BA%94%E5%A4%A7%E6%A0%B8%E5%BF%83%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84" tabindex="-1"><strong>第二篇：玩转Redis五大核心数据结构</strong></h3><ul><li><strong>String</strong>：不仅仅是文本，支持计数器、位图等高级用法</li><li><strong>Hash</strong>：存储对象的最佳选择，内存效率高</li><li><strong>List</strong>：实现消息队列和最新列表的利器</li><li><strong>Set</strong>：无序唯一集合，强大的集合运算能力</li><li><strong>Sorted Set</strong>：有序集合，排行榜和时间轴的核心</li></ul><h3 id="%E7%AC%AC%E4%B8%89%E7%AF%87%EF%BC%9Aredis%E7%9A%84%E6%8C%81%E4%B9%85%E5%8C%96%E4%B8%8E%E9%AB%98%E5%8F%AF%E7%94%A8" tabindex="-1"><strong>第三篇：Redis的持久化与高可用</strong></h3><ul><li><strong>RDB</strong>：快照式持久化，适合备份和快速恢复</li><li><strong>AOF</strong>：日志式持久化，保证数据安全</li><li><strong>主从复制</strong>：数据冗余和读写分离的基础</li><li><strong>哨兵模式</strong>：实现自动故障转移的高可用方案</li></ul><h3 id="%E7%AC%AC%E5%9B%9B%E7%AF%87%EF%BC%9Aredis%E5%9C%A8asp.net-core%E9%A1%B9%E7%9B%AE%E4%B8%AD%E7%9A%84%E5%AE%9E%E6%88%98%E5%BA%94%E7%94%A8" tabindex="-1"><strong>第四篇：Redis在Asp.Net
Core项目中的实战应用</strong></h3><ul><li>集成StackExchange.Redis客户端</li><li>实现商品信息缓存和缓存策略</li><li>解决缓存穿透、击穿、雪崩问题</li><li>使用分布式锁控制并发访问</li><li>配置分布式Session和消息队列</li></ul><h3 id="%E7%AC%AC%E4%BA%94%E7%AF%87%EF%BC%9A%E8%BF%9B%E9%98%B6%E7%9F%A5%E8%AF%86%E4%B8%8E%E8%BF%90%E7%BB%B4%E7%AE%A1%E7%90%86" tabindex="-1"><strong>第五篇：进阶知识与运维管理</strong></h3><ul><li>内存优化和淘汰策略配置</li><li>Redis Cluster集群搭建和管理</li><li>性能监控、慢查询分析和调优</li><li>备份恢复、安全配置和故障诊断</li></ul><h2 id="%E4%BA%8C%E3%80%81redis%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5%E6%80%BB%E7%BB%93" tabindex="-1">二、Redis最佳实践总结</h2><h3 id="1.-%E9%94%AE%E5%90%8D%E8%AE%BE%E8%AE%A1%E8%A7%84%E8%8C%83" tabindex="-1">1. 键名设计规范</h3><pre><code class="language-csharp">// 好的键名设计
&quot;user:1001:profile&quot;          // 用户信息
&quot;product:2024:hotlist&quot;       // 商品热榜
&quot;order:20240101:123456&quot;      // 订单信息
&quot;session:abc123def456&quot;       // 会话数据

// 避免的键名设计
&quot;user_info_1001&quot;             // 不一致的分隔符
&quot;data&quot;                       // 过于简单，容易冲突
&quot;very_long_key_name_that_is_hard_to_read_and_remember&quot; // 过长</code></pre><p><strong>键名设计原则：</strong></p><ul><li>使用统一的命名空间和分隔符（推荐冒号）</li><li>保持简洁但具有描述性</li><li>避免特殊字符和过长的键名</li></ul><h3 id="2.-%E9%81%BF%E5%85%8D%E5%A4%A7key%E5%92%8C%E7%83%ADkey" tabindex="-1">2. 避免大Key和热Key</h3><p><strong>大Key问题解决方案：</strong></p><pre><code class="language-csharp">// 拆分大Hash
public async Task SetLargeUserDataAsync(int userId, UserLargeData data)
{
    // 拆分为多个Hash
    await _database.HashSetAsync($&quot;user:{userId}:basic&quot;, new[] {
        new HashEntry(&quot;name&quot;, data.Name),
        new HashEntry(&quot;email&quot;, data.Email)
    });

    await _database.HashSetAsync($&quot;user:{userId}:profile&quot;, new[] {
        new HashEntry(&quot;bio&quot;, data.Bio),
        new HashEntry(&quot;avatar&quot;, data.AvatarUrl)
    });
}

// 使用SCAN替代KEYS（游标使用long，SCAN游标是无符号64位整数，int可能溢出）
public async Task&lt;List&lt;string&gt;&gt; ScanKeysAsync(string pattern, int pageSize = 1000)
{
    var keys = new List&lt;string&gt;();
    long cursor = 0;

    do
    {
        var result = await _database.ExecuteAsync(&quot;SCAN&quot;, cursor.ToString(), &quot;MATCH&quot;, pattern, &quot;COUNT&quot;, pageSize.ToString());
        var innerResult = (RedisResult[])result;

        cursor = long.Parse((string)innerResult[0]);
        var pageKeys = (string[])innerResult[1];
        keys.AddRange(pageKeys);

    } while (cursor != 0);

    return keys;
}</code></pre><p><strong>热Key解决方案：</strong></p><pre><code class="language-csharp">// 本地缓存 + Redis多级缓存
public class MultiLevelCacheService
{
    private readonly IMemoryCache _memoryCache;
    private readonly IRedisService _redisService;
    private readonly TimeSpan _localCacheDuration = TimeSpan.FromMinutes(1);

    public async Task&lt;T&gt; GetWithLocalCacheAsync&lt;T&gt;(string key)
    {
        // 先查本地缓存
        if (_memoryCache.TryGetValue(key, out T localValue))
            return localValue;

        // 本地缓存未命中，查询Redis
        var redisValue = await _redisService.GetAsync&lt;T&gt;(key);
        if (redisValue != null)
        {
            // 写入本地缓存
            _memoryCache.Set(key, redisValue, _localCacheDuration);
        }

        return redisValue;
    }
}</code></pre><h3 id="3.-%E8%BF%9E%E6%8E%A5%E6%B1%A0%E4%B8%8E%E8%B5%84%E6%BA%90%E7%AE%A1%E7%90%86" tabindex="-1">3. 连接池与资源管理</h3><pre><code class="language-csharp">public static class RedisConnectionManager
{
    private static Lazy&lt;ConnectionMultiplexer&gt; _lazyConnection;

    static RedisConnectionManager()
    {
        _lazyConnection = new Lazy&lt;ConnectionMultiplexer&gt;(() =&gt;
        {
            var configuration = new ConfigurationOptions
            {
                EndPoints = { &quot;localhost:6379&quot; },
                AbortOnConnectFail = false,
                ConnectRetry = 3,
                ConnectTimeout = 5000,
                KeepAlive = 180,
                SyncTimeout = 5000,
                // 连接池配置
                AllowAdmin = false,
                ClientName = $&quot;{Environment.MachineName}:{Guid.NewGuid()}&quot;
            };

            return ConnectionMultiplexer.Connect(configuration);
        });
    }

    public static ConnectionMultiplexer Connection =&gt; _lazyConnection.Value;

    public static IDatabase GetDatabase()
    {
        return Connection.GetDatabase();
    }
}</code></pre><h3 id="4.-%E7%9B%91%E6%8E%A7%E4%B8%8E%E5%91%8A%E8%AD%A6%E9%85%8D%E7%BD%AE" tabindex="-1">4.
监控与告警配置</h3><p>在Asp.Net Core中实现完整的监控：</p><pre><code class="language-csharp">public class RedisMetricsCollector : BackgroundService
{
    private readonly IConnectionMultiplexer _redis;
    private readonly ILogger&lt;RedisMetricsCollector&gt; _logger;
    private readonly IMetricsPublisher _metricsPublisher;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                var server = _redis.GetServer(_redis.GetEndPoints().First());
                var info = await server.InfoAsync(&quot;all&quot;);

                // 收集关键指标
                var metrics = new RedisMetrics
                {
                    Timestamp = DateTime.UtcNow,
                    ConnectedClients = long.Parse(info.First(x =&gt; x.Key == &quot;Clients&quot;)
                        .First(x =&gt; x.Key == &quot;connected_clients&quot;).Value),
                    UsedMemory = long.Parse(info.First(x =&gt; x.Key == &quot;Memory&quot;)
                        .First(x =&gt; x.Key == &quot;used_memory&quot;).Value),
                    OpsPerSecond = long.Parse(info.First(x =&gt; x.Key == &quot;Stats&quot;)
                        .First(x =&gt; x.Key == &quot;instantaneous_ops_per_sec&quot;).Value),
                    HitRate = CalculateHitRate(info),
                    NetworkInput = long.Parse(info.First(x =&gt; x.Key == &quot;Stats&quot;)
                        .First(x =&gt; x.Key == &quot;total_net_input_bytes&quot;).Value),
                    NetworkOutput = long.Parse(info.First(x =&gt; x.Key == &quot;Stats&quot;)
                        .First(x =&gt; x.Key == &quot;total_net_output_bytes&quot;).Value)
                };

                await _metricsPublisher.PublishAsync(metrics);

                // 检查告警条件
                await CheckAlerts(metrics);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, &quot;收集Redis指标时发生错误&quot;);
            }

            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }

    private double CalculateHitRate(ILookup&lt;string, KeyValuePair&lt;string, string&gt;&gt; info)
    {
        var hits = long.Parse(info.First(x =&gt; x.Key == &quot;Stats&quot;)
            .First(x =&gt; x.Key == &quot;keyspace_hits&quot;).Value);
        var misses = long.Parse(info.First(x =&gt; x.Key == &quot;Stats&quot;)
            .First(x =&gt; x.Key == &quot;keyspace_misses&quot;).Value);

        return hits + misses == 0 ? 0 : (double)hits / (hits + misses);
    }

    private async Task CheckAlerts(RedisMetrics metrics)
    {
        // 内存使用率超过80%（假设1GB内存）
        if (metrics.UsedMemory &gt; 0.8 * 1024 * 1024 * 1024)
        {
            _logger.LogWarning(&quot;Redis内存使用率过高: {UsedMemory} bytes&quot;, metrics.UsedMemory);
        }

        // 命中率低于90%
        if (metrics.HitRate &lt; 0.9)
        {
            _logger.LogWarning(&quot;Redis缓存命中率过低: {HitRate:P2}&quot;, metrics.HitRate);
        }
    }
}</code></pre><h2 id="%E4%B8%89%E3%80%81redis%E7%94%9F%E6%80%81%E4%B8%8E%E5%B7%A5%E5%85%B7%E4%BB%8B%E7%BB%8D" tabindex="-1">三、Redis生态与工具介绍</h2><h3 id="1.-%E5%B8%B8%E7%94%A8redis%E5%8F%AF%E8%A7%86%E5%8C%96%E5%B7%A5%E5%85%B7" tabindex="-1">1. 常用Redis可视化工具</h3><p><strong>RedisInsight（官方推荐）</strong></p><ul><li>功能全面的GUI工具</li><li>支持数据浏览、CLI操作、性能监控</li><li>免费使用，跨平台支持</li></ul><p><strong>Another Redis Desktop Manager</strong></p><ul><li>开源免费的桌面管理器</li><li>直观的界面，支持多种数据类型展示</li><li>跨平台支持</li></ul><p><strong>Redis Commander</strong></p><ul><li>基于Web的管理界面</li><li>适合部署在服务器环境</li><li>轻量级，功能齐全</li></ul><h3 id="2.-%E7%9B%91%E6%8E%A7%E4%B8%8E%E8%BF%90%E7%BB%B4%E5%B9%B3%E5%8F%B0" tabindex="-1">2.
监控与运维平台</h3><p><strong>Prometheus + Grafana</strong></p><pre><code class="language-yaml"># Redis Exporter配置
scrape_configs:
  - job_name: &#39;redis&#39;
    static_configs:
      - targets: [&#39;redis-exporter:9121&#39;]
    metrics_path: /scrape
    params:
      target: [&#39;redis-server:6379&#39;]</code></pre><p><strong>DataDog / New Relic</strong></p><ul><li>商业APM工具</li><li>提供深度性能分析和告警</li><li>企业级功能支持</li></ul><h3 id="3.-redis%E6%A8%A1%E5%9D%97%E7%B3%BB%E7%BB%9F%E7%AE%80%E4%BB%8B" tabindex="-1">3. Redis模块系统简介</h3><p>Redis 4.0引入了模块系统，允许开发者扩展Redis功能：</p><p><strong>RedisJSON</strong></p><ul><li>原生支持JSON文档</li><li>提供JSONPath查询语法</li></ul><pre><code class="language-bash"># 存储和查询JSON文档
127.0.0.1:6379&gt; JSON.SET user:1001 $ &#39;{&quot;name&quot;:&quot;Alice&quot;,&quot;age&quot;:30}&#39;
127.0.0.1:6379&gt; JSON.GET user:1001 $.name</code></pre><p><strong>RedisSearch</strong></p><ul><li>全文搜索功能</li><li>二级索引支持</li></ul><pre><code class="language-bash"># 创建全文搜索索引
127.0.0.1:6379&gt; FT.CREATE productIdx ON HASH PREFIX 1 product: SCHEMA name TEXT WEIGHT 5.0 description TEXT</code></pre><p><strong>RedisBloom</strong></p><ul><li>概率数据结构</li><li>布隆过滤器、基数估算等</li></ul><pre><code class="language-bash"># 使用布隆过滤器
127.0.0.1:6379&gt; BF.ADD visited:users user123
127.0.0.1:6379&gt; BF.EXISTS visited:users user123</code></pre><p><strong>RedisTimeSeries</strong></p><ul><li>时间序列数据处理</li><li>支持聚合和降采样</li></ul><pre><code class="language-bash"># 存储时间序列数据
127.0.0.1:6379&gt; TS.ADD temperature:room1 1620000000 25.5
127.0.0.1:6379&gt; TS.RANGE temperature:room1 1620000000 1620003600</code></pre><h2 id="%E5%9B%9B%E3%80%81redis%E6%9C%AA%E6%9D%A5%E5%8F%91%E5%B1%95%E8%B6%8B%E5%8A%BF" tabindex="-1">四、Redis未来发展趋势</h2><h3 id="1.-redis-7.0%2B-%E6%96%B0%E7%89%B9%E6%80%A7" tabindex="-1">1. Redis 7.0+ 新特性</h3><p><strong>Functions（替代Lua脚本）</strong></p><pre><code class="language-lua">#!lua name=mylib
redis.register_function(&#39;my_hset&#39;, function(keys, args)
    return redis.call(&#39;HSET&#39;, keys[1], args[1], args[2])
end)</code></pre><p><strong>ACL增强</strong></p><ul><li>更细粒度的权限控制</li><li>键模式权限管理</li><li>用户角色管理</li></ul><p><strong>性能优化</strong></p><ul><li>多线程I/O（非数据操作）</li><li>更高效的内存管理</li><li>改进的集群性能</li></ul><h3 id="2.-%E4%BA%91%E5%8E%9F%E7%94%9F%E4%B8%8Ekubernetes%E9%9B%86%E6%88%90" tabindex="-1">2. 云原生与Kubernetes集成</h3><p><strong>Redis Operator</strong></p><ul><li>自动化Redis集群部署</li><li>故障自愈和弹性伸缩</li><li>备份和恢复管理</li></ul><p><strong>服务网格集成</strong></p><ul><li>与Istio、Linkerd的深度集成</li><li>智能流量路由和负载均衡</li><li>可观测性增强</li></ul><h3 id="3.-ai%E4%B8%8E%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E9%9B%86%E6%88%90" tabindex="-1">3. AI与机器学习集成</h3><p><strong>向量搜索</strong></p><pre><code class="language-bash"># 使用Redis作为向量数据库
127.0.0.1:6379&gt; FT.CREATE vec_idx ON HASH PREFIX 1 vec: SCHEMA vector VECTOR
127.0.0.1:6379&gt; HSET vec:1 vector &quot;0.1,0.2,0.3&quot;</code></pre><p><strong>实时特征存储</strong></p><ul><li>机器学习特征工程</li><li>在线推理数据准备</li><li>实时推荐系统</li></ul><h2 id="%E4%BA%94%E3%80%81redis%E7%9A%84%E5%B1%80%E9%99%90%E6%80%A7%E5%8F%8A%E6%9B%BF%E4%BB%A3%E6%96%B9%E6%A1%88" tabindex="-1">五、Redis的局限性及替代方案</h2><p>虽然Redis功能强大，但也有其局限性：</p><h3 id="%E4%B8%8D%E9%80%82%E5%90%88%E4%BD%BF%E7%94%A8redis%E7%9A%84%E5%9C%BA%E6%99%AF" tabindex="-1">不适合使用Redis的场景</h3><p><strong>大量数据存储</strong></p><ul><li>Redis主要依赖内存，成本较高</li><li>替代方案：Cassandra、HBase</li></ul><p><strong>复杂查询和分析</strong></p><ul><li>Redis查询能力相对有限</li><li>替代方案：Elasticsearch、ClickHouse</li></ul><p><strong>强一致性事务</strong></p><ul><li>Redis事务非ACID兼容</li><li>替代方案：关系型数据库</li></ul><h3 id="%E6%96%B0%E5%85%B4%E7%AB%9E%E5%93%81%E5%88%86%E6%9E%90"
tabindex="-1">新兴竞品分析</h3><p><strong>KeyDB</strong></p><ul><li>Redis的多线程版本</li><li>更好的多核CPU利用率</li><li>完全兼容Redis协议</li></ul><p><strong>Dragonfly</strong></p><ul><li>新型高性能内存数据库</li><li>声称比Redis快25倍</li><li>创新的数据结构设计</li></ul><p><strong>AWS ElastiCache for Redis</strong></p><ul><li>托管Redis服务</li><li>自动备份、故障转移</li><li>企业级功能支持</li></ul><h2 id="%E5%85%AD%E3%80%81%E7%BB%93%E8%AF%AD%EF%BC%9A%E6%8C%81%E7%BB%AD%E5%AD%A6%E4%B9%A0%E7%9A%84%E5%BB%BA%E8%AE%AE" tabindex="-1">六、结语：持续学习的建议</h2><p>通过这个系列的学习，你已经建立了坚实的Redis知识体系。但技术的道路永无止境，以下是一些持续学习的建议：</p><h3 id="1.-%E5%AE%9E%E8%B7%B5%E6%98%AF%E6%9C%80%E5%A5%BD%E7%9A%84%E8%80%81%E5%B8%88" tabindex="-1">1. 实践是最好的老师</h3><ul><li>在自己的项目中积极应用Redis</li><li>尝试解决真实世界的性能问题</li><li>参与开源项目，阅读优秀的Redis使用案例</li></ul><h3 id="2.-%E5%85%B3%E6%B3%A8%E7%A4%BE%E5%8C%BA%E5%8A%A8%E6%80%81" tabindex="-1">2. 关注社区动态</h3><ul><li>关注Redis官方博客和GitHub仓库</li><li>参与Redis Conf等技术大会</li><li>加入相关的技术社区和论坛</li></ul><h3 id="3.-%E6%B7%B1%E5%85%A5%E5%8E%9F%E7%90%86%E7%A0%94%E7%A9%B6" tabindex="-1">3. 深入原理研究</h3><ul><li>阅读《Redis设计与实现》</li><li>分析Redis源码，理解内部机制</li><li>尝试自己实现简单的内存数据库</li></ul><h3 id="4.-%E6%8B%93%E5%B1%95%E6%8A%80%E6%9C%AF%E8%A7%86%E9%87%8E" tabindex="-1">4. 拓展技术视野</h3><ul><li>学习其他类型的数据库（关系型、文档型、图数据库等）</li><li>了解分布式系统理论</li><li>掌握云原生技术栈</li></ul><h2 id="%E6%9C%80%E5%90%8E%E7%9A%84%E6%80%9D%E8%80%83" tabindex="-1">最后的思考</h2><p>Redis不仅仅是一个缓存工具，它已经发展成为现代应用架构中的<strong>多功能数据平台</strong>。从简单的键值存储到复杂的数据结构服务，从单机部署到全球分布式集群，Redis一直在演进。</p><p><strong>记住这个核心理念：</strong></p><blockquote><p>“选择合适的工具解决正确的问题，并深入理解你所使用的工具。”</p></blockquote><p>希望这个Redis系列教程能够成为你技术成长道路上有价值的参考资料。无论你是初学者还是经验丰富的开发者，对Redis的深入理解都将为你的职业生涯带来显著的提升。</p><p>感谢你坚持学完这个系列！如果在学习过程中有任何疑问或心得，欢迎在评论区分享交流。技术的道路需要同行者，让我们共同进步！</p><hr /><p><em>“学无止境，实践出真知。愿你在技术的道路上越走越远，不断突破自我！”</em></p><hr /><p>这个完整的Redis系列教程到这里就全部结束了。从基础概念到生产实践，从简单使用到深度优化，希望这个系列能够成为你在Redis学习道路上的得力助手。祝你编程愉快，技术精进！</p>]]>
                    </description>
                    <pubDate>Fri, 18 Apr 2025 06:53:56 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Redis入门：进阶知识与运维管理]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2946</link>
                    <description>
<![CDATA[<h1 id="redis%E8%BF%9B%E9%98%B6%E7%9F%A5%E8%AF%86%E4%B8%8E%E8%BF%90%E7%BB%B4%E7%AE%A1%E7%90%86%EF%BC%9A%E6%9E%84%E5%BB%BA%E7%94%9F%E4%BA%A7%E7%BA%A7%E5%BA%94%E7%94%A8" tabindex="-1">Redis进阶知识与运维管理：构建生产级应用</h1><blockquote><p>在前几篇中，我们已经掌握了Redis的核心概念和基本应用。但当Redis真正走向生产环境时，我们需要面对更复杂的挑战：如何优化内存使用？如何保证集群的高可用？如何监控和调优性能？今天，我们将深入Redis的进阶主题，帮助你构建真正稳定、高效的Redis应用。</p></blockquote><h2 id="%E4%B8%80%E3%80%81redis%E5%86%85%E5%AD%98%E4%BC%98%E5%8C%96%E4%B8%8E%E6%B7%98%E6%B1%B0%E7%AD%96%E7%95%A5" tabindex="-1">一、Redis内存优化与淘汰策略</h2><h3 id="1.-redis%E5%86%85%E5%AD%98%E6%B6%88%E8%80%97%E6%B7%B1%E5%BA%A6%E5%88%86%E6%9E%90" tabindex="-1">1. Redis内存消耗深度分析</h3><p>在生产环境中，理解Redis内存使用情况至关重要。让我们从几个关键指标开始：</p><pre><code class="language-bash"># 查看详细内存信息
127.0.0.1:6379&gt; INFO memory
# Memory
used_memory:1024000
used_memory_human:1000.00K
used_memory_rss:2048000
used_memory_peak:1048576
used_memory_peak_human:1.00M
used_memory_lua:37888
mem_fragmentation_ratio:2.00
mem_allocator:jemalloc-5.1.0</code></pre><p><strong>关键指标解读：</strong></p><ul><li><strong>used_memory</strong>：Redis分配器分配的内存总量（字节）</li><li><strong>used_memory_rss</strong>：从操作系统角度显示Redis进程占用的物理内存</li><li><strong>mem_fragmentation_ratio</strong>：内存碎片率 = used_memory_rss / used_memory<ul><li>1.0 - 1.5：良好状态</li><li>1.5 - 2.0：需要关注</li><li>&gt; 2.0：严重碎片，考虑重启</li></ul></li></ul><h3 id="2.-%E5%86%85%E5%AD%98%E4%BC%98%E5%8C%96%E5%AE%9E%E6%88%98%E6%8A%80%E5%B7%A7" tabindex="-1">2. 内存优化实战技巧</h3><h4 id="a)-%E7%BC%A9%E7%9F%AD%E9%94%AE%E5%80%BC%E5%AF%B9%E9%95%BF%E5%BA%A6" tabindex="-1">a) 缩短键值对长度</h4><pre><code class="language-csharp">// 不推荐 - 键名过长await _database.StringSetAsync(&quot;user:session:1001:shopping:cart:items&quot;, cartData);// 推荐 - 精简键名await _database.StringSetAsync(&quot;u:1001:cart&quot;, cartData);// 对于值，考虑使用压缩public async Task SetCompressedAsync(string key, string value, TimeSpan? 
expiry = null){    var compressedBytes = CompressString(value);    await _database.StringSetAsync(key, compressedBytes, expiry);}public async Task&lt;string&gt; GetCompressedAsync(string key){    var bytes = (byte[]?)await _database.StringGetAsync(key);    return bytes != null ? DecompressString(bytes) : null;}</code></pre><h4 id="b)-%E4%BD%BF%E7%94%A8%E9%80%82%E5%BD%93%E7%9A%84%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E7%BC%96%E7%A0%81" tabindex="-1">b) 使用适当的数据结构编码</h4><p>Redis会自动为小规模数据选择更高效的编码方式：</p><pre><code class="language-bash"># 查看Key的编码方式127.0.0.1:6379&gt; OBJECT ENCODING user:1001&quot;hashtable&quot;127.0.0.1:6379&gt; OBJECT ENCODING small:hash&quot;ziplist&quot;</code></pre><p>优化配置（在redis.conf中）：</p><pre><code class="language-bash"># Hash配置 - 当字段数≤512且所有值≤64字节时使用ziplisthash-max-ziplist-entries 512hash-max-ziplist-value 64# List配置list-max-ziplist-size -2# Set配置 - 当元素都是整数且数量≤512时使用intsetset-max-intset-entries 512# Sorted Set配置zset-max-ziplist-entries 128zset-max-ziplist-value 64</code></pre><h4 id="c)-%E4%BD%BF%E7%94%A8%E4%BD%8D%E5%9B%BE%E5%92%8Chyperloglog" tabindex="-1">c) 使用位图和HyperLogLog</h4><p>对于特定场景，使用特殊数据结构可以大幅节省内存：</p><pre><code class="language-csharp">// 位图 - 用户签到系统public class SignInService{    private readonly IDatabase _database;        public async Task SignInAsync(int userId, DateTime date)    {        var key = $&quot;signin:{userId}:{date:yyyyMM}&quot;;        var offset = date.Day - 1; // 0-30                await _database.StringSetBitAsync(key, offset, true);    }        public async Task&lt;int&gt; GetSignInCountAsync(int userId, int year, int month)    {        var key = $&quot;signin:{userId}:{year:0000}{month:00}&quot;;        // 使用BITCOUNT统计签到天数        return (int)await _database.StringBitCountAsync(key);    }}// HyperLogLog - 统计UV（独立访客）public class VisitorService{    public async Task AddVisitorAsync(string pageId, string visitorId)    {        var key = $&quot;uv:{pageId}&quot;;        await _database.HyperLogLogAddAsync(key, 
visitorId);    }        public async Task&lt;long&gt; GetVisitorCountAsync(string pageId)    {        var key = $&quot;uv:{pageId}&quot;;        return await _database.HyperLogLogLengthAsync(key);    }}</code></pre><h3 id="3.-%E5%86%85%E5%AD%98%E6%B7%98%E6%B1%B0%E7%AD%96%E7%95%A5%E8%AF%A6%E8%A7%A3" tabindex="-1">3. 内存淘汰策略详解</h3><p>当内存达到上限时，Redis提供了8种淘汰策略：</p><pre><code class="language-bash"># redis.conf配置maxmemory 1gbmaxmemory-policy allkeys-lru</code></pre><p><strong>淘汰策略对比：</strong></p><table><thead><tr><th>策略</th><th>作用范围</th><th>淘汰机制</th><th>适用场景</th></tr></thead><tbody><tr><td><strong>noeviction</strong></td><td>-</td><td>不淘汰，返回错误</td><td>数据绝对不能丢失</td></tr><tr><td><strong>allkeys-lru</strong></td><td>所有Key</td><td>最近最少使用</td><td>通用场景</td></tr><tr><td><strong>volatile-lru</strong></td><td>过期Key</td><td>最近最少使用</td><td>部分数据可丢失</td></tr><tr><td><strong>allkeys-random</strong></td><td>所有Key</td><td>随机淘汰</td><td>访问模式随机</td></tr><tr><td><strong>volatile-random</strong></td><td>过期Key</td><td>随机淘汰</td><td>部分数据可丢失</td></tr><tr><td><strong>volatile-ttl</strong></td><td>过期Key</td><td>剩余时间最短</td><td>需要优先淘汰旧数据</td></tr><tr><td><strong>allkeys-lfu</strong></td><td>所有Key</td><td>最不经常使用</td><td>访问频率差异大</td></tr><tr><td><strong>volatile-lfu</strong></td><td>过期Key</td><td>最不经常使用</td><td>部分数据可丢失</td></tr></tbody></table><p><strong>生产环境推荐：</strong></p><pre><code class="language-bash"># 对于缓存场景maxmemory-policy allkeys-lru# 对于混合使用（缓存+持久化数据）maxmemory-policy volatile-lru</code></pre><h2 id="%E4%BA%8C%E3%80%81redis%E9%9B%86%E7%BE%A4%EF%BC%88cluster%EF%BC%89%E6%A8%A1%E5%BC%8F%EF%BC%9A%E8%B5%B0%E5%90%91%E5%88%86%E5%B8%83%E5%BC%8F" tabindex="-1">二、Redis集群（Cluster）模式：走向分布式</h2><h3 id="1.-%E4%B8%BA%E4%BB%80%E4%B9%88%E9%9C%80%E8%A6%81cluster%EF%BC%9F" tabindex="-1">1. 
为什么需要Cluster？</h3><p>当面临以下场景时，单机Redis无法满足需求：</p><ul><li>数据量超过单机内存容量</li><li>写并发超过单机处理能力</li><li>需要更高的可用性保障</li></ul><h3 id="2.-hash-slot%EF%BC%88%E5%93%88%E5%B8%8C%E6%A7%BD%EF%BC%89%E5%88%86%E7%89%87%E5%8E%9F%E7%90%86" tabindex="-1">2. Hash Slot（哈希槽）分片原理</h3><p>Redis Cluster采用虚拟槽分区，共有16384个槽：</p><ul><li>每个Key通过CRC16哈希后对16384取模，得到对应的槽</li><li>每个节点负责一部分槽的范围</li><li>支持动态重新分片</li></ul><pre><code class="language-bash"># 计算Key的槽位置
127.0.0.1:6379&gt; CLUSTER KEYSLOT &quot;user:1001&quot;
(integer) 14982</code></pre><h3 id="3.-%E6%90%AD%E5%BB%BA6%E8%8A%82%E7%82%B9redis-cluster" tabindex="-1">3. 搭建6节点Redis Cluster</h3><h4 id="%E9%9B%86%E7%BE%A4%E8%A7%84%E5%88%92%EF%BC%9A" tabindex="-1">集群规划：</h4><ul><li>3个主节点：7000, 7001, 7002</li><li>3个从节点：7003, 7004, 7005</li></ul><h4 id="%E5%88%9B%E5%BB%BA%E8%8A%82%E7%82%B9%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%EF%BC%9A" tabindex="-1">创建节点配置文件：</h4><p><strong>redis-7000.conf:</strong></p><pre><code class="language-bash">port 7000
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 15000
appendonly yes
appendfilename &quot;appendonly-7000.aof&quot;
dbfilename dump-7000.rdb
logfile &quot;redis-7000.log&quot;</code></pre><p>重复创建7001-7005的配置文件。</p><h4 id="%E5%90%AF%E5%8A%A8%E6%89%80%E6%9C%89%E8%8A%82%E7%82%B9%EF%BC%9A" tabindex="-1">启动所有节点：</h4><pre><code class="language-bash">redis-server redis-7000.conf
redis-server redis-7001.conf
# ... 启动所有6个节点</code></pre><h4 id="%E5%88%9B%E5%BB%BA%E9%9B%86%E7%BE%A4%EF%BC%9A" tabindex="-1">创建集群：</h4><pre><code class="language-bash">redis-cli --cluster create \
  127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
  127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
  --cluster-replicas 1</code></pre><h3 id="4.-%E9%9B%86%E7%BE%A4%E7%AE%A1%E7%90%86%E4%B8%8E%E8%BF%90%E7%BB%B4" tabindex="-1">4. 
集群管理与运维</h3><h4 id="%E6%9F%A5%E7%9C%8B%E9%9B%86%E7%BE%A4%E7%8A%B6%E6%80%81%EF%BC%9A" tabindex="-1">查看集群状态：</h4><pre><code class="language-bash"># 查看集群节点信息
redis-cli -p 7000 cluster nodes
# 查看集群信息
redis-cli -p 7000 cluster info
# 查看槽分配情况
redis-cli -p 7000 cluster slots</code></pre><h4 id="%E8%8A%82%E7%82%B9%E7%AE%A1%E7%90%86%EF%BC%9A" tabindex="-1">节点管理：</h4><pre><code class="language-bash"># 添加新主节点
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000
# 添加新从节点
redis-cli --cluster add-node 127.0.0.1:7007 127.0.0.1:7000 --cluster-slave --cluster-master-id &lt;master-node-id&gt;
# 重新分片
redis-cli --cluster reshard 127.0.0.1:7000
# 修复节点
redis-cli --cluster fix 127.0.0.1:7000</code></pre><h3 id="5.-%E5%9C%A8asp.net-core%E4%B8%AD%E8%BF%9E%E6%8E%A5%E9%9B%86%E7%BE%A4" tabindex="-1">5. 在Asp.Net Core中连接集群</h3><pre><code class="language-csharp">public static class RedisClusterServiceExtensions
{
    public static IServiceCollection AddRedisCluster(this IServiceCollection services, IConfiguration configuration)
    {
        var redisOptions = new ConfigurationOptions
        {
            EndPoints =
            {
                { &quot;127.0.0.1&quot;, 7000 },
                { &quot;127.0.0.1&quot;, 7001 },
                { &quot;127.0.0.1&quot;, 7002 },
                { &quot;127.0.0.1&quot;, 7003 },
                { &quot;127.0.0.1&quot;, 7004 },
                { &quot;127.0.0.1&quot;, 7005 }
            },
            Password = configuration[&quot;Redis:Password&quot;],
            AbortOnConnectFail = false,
            ConnectRetry = 3,
            ConnectTimeout = 5000,
            SyncTimeout = 5000
        };
        services.AddSingleton&lt;IConnectionMultiplexer&gt;(sp =&gt;
            ConnectionMultiplexer.Connect(redisOptions)
        );
        return services;
    }
}</code></pre><h2 id="%E4%B8%89%E3%80%81%E6%80%A7%E8%83%BD%E8%B0%83%E4%BC%98%E4%B8%8E%E7%9B%91%E6%8E%A7" tabindex="-1">三、性能调优与监控</h2><h3 
id="1.-%E4%BD%BF%E7%94%A8info%E5%91%BD%E4%BB%A4%E6%B7%B1%E5%BA%A6%E7%9B%91%E6%8E%A7" tabindex="-1">1. 使用INFO命令深度监控</h3><pre><code class="language-csharp">public class RedisMonitorService{    private readonly IConnectionMultiplexer _redis;        public async Task&lt;RedisMetrics&gt; GetMetricsAsync()    {        var database = _redis.GetDatabase();        var server = _redis.GetServer(_redis.GetEndPoints().First());                var info = await server.InfoAsync(&quot;all&quot;);                return new RedisMetrics        {            ConnectedClients = info.First(x =&gt; x.Key == &quot;Clients&quot;).First(x =&gt; x.Key == &quot;connected_clients&quot;).Value,            UsedMemory = info.First(x =&gt; x.Key == &quot;Memory&quot;).First(x =&gt; x.Key == &quot;used_memory&quot;).Value,            OpsPerSecond = info.First(x =&gt; x.Key == &quot;Stats&quot;).First(x =&gt; x.Key == &quot;instantaneous_ops_per_sec&quot;).Value,            KeyspaceHits = info.First(x =&gt; x.Key == &quot;Stats&quot;).First(x =&gt; x.Key == &quot;keyspace_hits&quot;).Value,            KeyspaceMisses = info.First(x =&gt; x.Key == &quot;Stats&quot;).First(x =&gt; x.Key == &quot;keyspace_misses&quot;).Value,            NetworkInput = info.First(x =&gt; x.Key == &quot;Stats&quot;).First(x =&gt; x.Key == &quot;total_net_input_bytes&quot;).Value,            NetworkOutput = info.First(x =&gt; x.Key == &quot;Stats&quot;).First(x =&gt; x.Key == &quot;total_net_output_bytes&quot;).Value        };    }        public double CalculateHitRate(RedisMetrics metrics)    {        var hits = long.Parse(metrics.KeyspaceHits);        var misses = long.Parse(metrics.KeyspaceMisses);        return hits + misses == 0 ? 
0 : (double)hits / (hits + misses);    }}public record RedisMetrics{    public string ConnectedClients { get; init; }    public string UsedMemory { get; init; }    public string OpsPerSecond { get; init; }    public string KeyspaceHits { get; init; }    public string KeyspaceMisses { get; init; }    public string NetworkInput { get; init; }    public string NetworkOutput { get; init; }}</code></pre><h3 id="2.-slow-log%EF%BC%88%E6%85%A2%E6%9F%A5%E8%AF%A2%E6%97%A5%E5%BF%97%EF%BC%89%E5%88%86%E6%9E%90%E4%B8%8E%E4%BC%98%E5%8C%96" tabindex="-1">2. Slow Log（慢查询日志）分析与优化</h3><h4 id="%E9%85%8D%E7%BD%AE%E6%85%A2%E6%9F%A5%E8%AF%A2%E6%97%A5%E5%BF%97%EF%BC%9A" tabindex="-1">配置慢查询日志：</h4><pre><code class="language-bash"># redis.conf配置
slowlog-log-slower-than 10000  # 超过10毫秒的记录
slowlog-max-len 1000           # 最多保存1000条慢查询</code></pre><h4 id="%E5%88%86%E6%9E%90%E6%85%A2%E6%9F%A5%E8%AF%A2%EF%BC%9A" tabindex="-1">分析慢查询：</h4><pre><code class="language-bash"># 查看慢查询日志
127.0.0.1:6379&gt; SLOWLOG GET 10
1) 1) (integer) 14               # 日志ID
   2) (integer) 1600000000       # 时间戳
   3) (integer) 15000            # 执行时间(微秒)
   4) 1) &quot;KEYS&quot;                  # 命令
      2) &quot;user:*:session&quot;
   5) &quot;127.0.0.1:58234&quot;          # 客户端
   6) &quot;&quot;                         # 客户端名称</code></pre><h4 id="%E4%BC%98%E5%8C%96%E5%BB%BA%E8%AE%AE%EF%BC%9A" tabindex="-1">优化建议：</h4><ul><li>避免使用<code>KEYS</code>命令，使用<code>SCAN</code>替代</li><li>对大集合的操作进行分片</li><li>使用Pipeline减少网络往返</li></ul><h3 id="3.-pipeline%EF%BC%88%E7%AE%A1%E9%81%93%EF%BC%89%E6%8F%90%E5%8D%87%E6%80%A7%E8%83%BD" tabindex="-1">3. 
Pipeline（管道）提升性能</h3><pre><code class="language-csharp">public class RedisPipelineService{    private readonly IDatabase _database;        public async Task&lt;List&lt;object&gt;&gt; BatchGetAsync(List&lt;string&gt; keys)    {        var batch = _database.CreateBatch();                var tasks = new List&lt;Task&lt;RedisValue&gt;&gt;();        foreach (var key in keys)        {            tasks.Add(batch.StringGetAsync(key));        }                batch.Execute();        var results = await Task.WhenAll(tasks);                return results.Select(r =&gt; (object)r).ToList();    }        public async Task BatchSetAsync(Dictionary&lt;string, string&gt; keyValues)    {        var batch = _database.CreateBatch();                var tasks = new List&lt;Task&gt;();        foreach (var kv in keyValues)        {            tasks.Add(batch.StringSetAsync(kv.Key, kv.Value));        }                batch.Execute();        await Task.WhenAll(tasks);    }}</code></pre><h3 id="4.-lua%E8%84%9A%E6%9C%AC%E5%AE%9E%E7%8E%B0%E5%A4%8D%E6%9D%82%E5%8E%9F%E5%AD%90%E6%93%8D%E4%BD%9C" tabindex="-1">4. 
Lua脚本实现复杂原子操作</h3><pre><code class="language-csharp">public class RedisLuaService{    private readonly IDatabase _database;        // 实现分布式限流    public async Task&lt;bool&gt; RateLimitAsync(string key, int maxRequests, TimeSpan window)    {        var luaScript = @&quot;            local key = KEYS[1]            local max_requests = tonumber(ARGV[1])            local window = tonumber(ARGV[2])            local current_time = tonumber(ARGV[3])                        -- 移除时间窗口之外的请求            redis.call(&#39;ZREMRANGEBYSCORE&#39;, key, 0, current_time - window)                        -- 获取当前请求数量            local current_requests = redis.call(&#39;ZCARD&#39;, key)                        if current_requests &gt;= max_requests then                return 0            end                        -- 添加当前请求            redis.call(&#39;ZADD&#39;, key, current_time, current_time)            redis.call(&#39;EXPIRE&#39;, key, window)            return 1        &quot;;                var result = await _database.ScriptEvaluateAsync(            luaScript,             new RedisKey[] { key },             new RedisValue[] { maxRequests, window.TotalSeconds, DateTimeOffset.UtcNow.ToUnixTimeSeconds() }        );                return (int)result == 1;    }        // 实现原子性的库存扣减    public async Task&lt;bool&gt; DeductStockAsync(string stockKey, int quantity)    {        var luaScript = @&quot;            local stock_key = KEYS[1]            local quantity = tonumber(ARGV[1])                        local current_stock = tonumber(redis.call(&#39;GET&#39;, stock_key) or &#39;0&#39;)                        if current_stock &lt; quantity then                return 0            end                        redis.call(&#39;DECRBY&#39;, stock_key, quantity)            return 1        &quot;;                var result = await _database.ScriptEvaluateAsync(            luaScript,            new RedisKey[] { stockKey },            new RedisValue[] { quantity }        );                return 
(int)result == 1;    }}</code></pre><h2 id="%E5%9B%9B%E3%80%81%E5%A4%87%E4%BB%BD%E4%B8%8E%E6%81%A2%E5%A4%8D%E7%AD%96%E7%95%A5" tabindex="-1">四、备份与恢复策略</h2><h3 id="1.-%E8%87%AA%E5%8A%A8%E5%8C%96%E5%A4%87%E4%BB%BD%E6%96%B9%E6%A1%88" tabindex="-1">1. 自动化备份方案</h3><pre><code class="language-csharp">public class RedisBackupService{    private readonly IConnectionMultiplexer _redis;    private readonly ILogger&lt;RedisBackupService&gt; _logger;        public async Task&lt;bool&gt; CreateRdbBackupAsync(string backupPath)    {        try        {            var server = _redis.GetServer(_redis.GetEndPoints().First());                        // 执行BGSAVE            await server.SaveAsync(SaveType.BackgroundSave);                        _logger.LogInformation(&quot;RDB备份创建成功&quot;);            return true;        }        catch (Exception ex)        {            _logger.LogError(ex, &quot;RDB备份创建失败&quot;);            return false;        }    }        public async Task&lt;string&gt; CreateAofBackupAsync()    {        try        {            var server = _redis.GetServer(_redis.GetEndPoints().First());                        // 执行AOF重写            await server.SaveAsync(SaveType.AppendOnlyFileRewrite);                        _logger.LogInformation(&quot;AOF备份创建成功&quot;);            return &quot;success&quot;;        }        catch (Exception ex)        {            _logger.LogError(ex, &quot;AOF备份创建失败&quot;);            return &quot;failed&quot;;        }    }}</code></pre><h3 id="2.-%E5%A4%87%E4%BB%BD%E9%AA%8C%E8%AF%81%E4%B8%8E%E6%81%A2%E5%A4%8D%E6%B5%8B%E8%AF%95" tabindex="-1">2. 
备份验证与恢复测试</h3><p>定期验证备份文件的完整性和可恢复性：</p><pre><code class="language-bash"># 验证RDB文件
redis-check-rdb dump.rdb
# 验证AOF文件
redis-check-aof appendonly.aof
# 修复AOF文件
redis-check-aof --fix appendonly.aof</code></pre><h2 id="%E4%BA%94%E3%80%81%E5%AE%89%E5%85%A8%E9%85%8D%E7%BD%AE%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5" tabindex="-1">五、安全配置最佳实践</h2><h3 id="1.-%E7%BD%91%E7%BB%9C%E5%AE%89%E5%85%A8%E9%85%8D%E7%BD%AE" tabindex="-1">1. 网络安全配置</h3><pre><code class="language-bash"># redis.conf安全配置
# 绑定IP地址
bind 127.0.0.1 10.0.0.1
# 保护模式
protected-mode yes
# 认证密码
requirepass &quot;YourStrongPassword123!&quot;
# 重命名危险命令
rename-command FLUSHDB &quot;&quot;
rename-command FLUSHALL &quot;&quot;
rename-command CONFIG &quot;CONFIG_SECRET&quot;
rename-command SHUTDOWN &quot;SHUTDOWN_SECRET&quot;</code></pre><h3 id="2.-%E5%9C%A8asp.net-core%E4%B8%AD%E5%AE%89%E5%85%A8%E8%BF%9E%E6%8E%A5" tabindex="-1">2. 在Asp.Net Core中安全连接</h3><pre><code class="language-csharp">public static class SecureRedisConfiguration{    public static IServiceCollection AddSecureRedis(this IServiceCollection services, IConfiguration configuration)    {        var redisConfig = new ConfigurationOptions        {            EndPoints = { configuration[&quot;Redis:Endpoint&quot;] },            Password = configuration[&quot;Redis:Password&quot;],            Ssl = bool.Parse(configuration[&quot;Redis:UseSsl&quot;] ?? 
&quot;false&quot;),            AbortOnConnectFail = false,            ConnectRetry = 3,            ConnectTimeout = 5000,            SyncTimeout = 5000        };                // 添加客户端名称便于审计        redisConfig.ClientName = $&quot;{Environment.MachineName}:{Guid.NewGuid()}&quot;;                services.AddSingleton&lt;IConnectionMultiplexer&gt;(sp =&gt;             ConnectionMultiplexer.Connect(redisConfig)        );                return services;    }}</code></pre><h2 id="%E5%85%AD%E3%80%81%E6%95%85%E9%9A%9C%E8%AF%8A%E6%96%AD%E4%B8%8E%E9%97%AE%E9%A2%98%E6%8E%92%E6%9F%A5" tabindex="-1">六、故障诊断与问题排查</h2><h3 id="1.-%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98%E8%AF%8A%E6%96%AD%E5%91%BD%E4%BB%A4" tabindex="-1">1. 常见问题诊断命令</h3><pre><code class="language-bash"># 查看客户端连接127.0.0.1:6379&gt; CLIENT LIST# 查看内存详情127.0.0.1:6379&gt; MEMORY STATS# 查看大Key127.0.0.1:6379&gt; MEMORY USAGE keyname# 监控实时命令127.0.0.1:6379&gt; MONITOR# 查看延迟redis-cli --latency -h host -p port</code></pre><h3 id="2.-%E5%9C%A8asp.net-core%E4%B8%AD%E5%AE%9E%E7%8E%B0%E5%81%A5%E5%BA%B7%E6%A3%80%E6%9F%A5" tabindex="-1">2. 
在Asp.Net Core中实现健康检查</h3><pre><code class="language-csharp">public class RedisHealthCheck : IHealthCheck
{
    private readonly IConnectionMultiplexer _redis;

    public RedisHealthCheck(IConnectionMultiplexer redis)
    {
        _redis = redis;
    }

    public async Task&lt;HealthCheckResult&gt; CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        try
        {
            if (!_redis.IsConnected)
                return HealthCheckResult.Unhealthy(&quot;Redis连接已断开&quot;);

            var database = _redis.GetDatabase();
            var pong = await database.PingAsync();

            if (pong &gt; TimeSpan.FromMilliseconds(1000))
                return HealthCheckResult.Degraded($&quot;Redis响应缓慢: {pong.TotalMilliseconds}ms&quot;);

            return HealthCheckResult.Healthy($&quot;Redis连接正常: {pong.TotalMilliseconds}ms&quot;);
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy(&quot;Redis健康检查失败&quot;, ex);
        }
    }
}

// 注册健康检查
builder.Services.AddHealthChecks()
    .AddCheck&lt;RedisHealthCheck&gt;(&quot;redis&quot;);</code></pre><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>通过本篇的深入学习，我们掌握了构建生产级Redis应用所需的关键知识：</p><ol><li><strong>内存优化</strong>：通过合理的数据结构选择、编码配置和淘汰策略，最大化内存利用率</li><li><strong>集群部署</strong>：理解哈希槽分片原理，掌握集群的搭建、管理和扩展</li><li><strong>性能调优</strong>：利用监控工具、Pipeline和Lua脚本提升系统性能</li><li><strong>备份恢复</strong>：建立可靠的备份策略，确保数据安全</li><li><strong>安全配置</strong>：从网络、认证、命令等多个维度保障Redis安全</li><li><strong>故障诊断</strong>：掌握常见问题的排查方法和健康监控</li></ol><p><strong>关键收获：</strong></p><ul><li>生产环境的Redis需要综合考虑性能、可用性、安全性和可维护性</li><li>监控和预警是保障服务稳定的关键</li><li>合理的数据结构和配置可以大幅提升系统性能</li><li>安全配置不是可选项，而是生产部署的必备条件</li></ul><p>现在，你已经具备了构建和管理生产级Redis应用的全套技能，可以自信地应对各种复杂的业务场景和运维挑战！</p><hr /><p><strong>系列总结</strong></p><p>通过这五篇系列教程，我们从Redis的基础概念开始，逐步深入到了数据结构、持久化、Asp.Net Core集成、高级特性和生产运维。希望这个完整的系列能够帮助你在实际项目中充分发挥Redis的威力，构建出高性能、高可用的应用系统。</p><p>记住，技术的学习永无止境，保持好奇心和实践精神，你将在技术的道路上越走越远！</p><hr /><p><strong>延伸学习建议：</strong></p><ol><li>深入了解Redis模块系统（RedisJSON、RedisSearch等）</li><li>学习Redis Streams实现更复杂的消息处理模式</li><li>探索Redis在微服务架构中的服务发现和配置管理应用</li><li>研究Redis在实时数据分析场景的应用</li></ol><p>欢迎在评论区分享你在生产环境中使用Redis的经验和挑战！</p>]]>
                    </description>
                    <pubDate>Sun, 13 Apr 2025 08:50:13 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Redis入门：Redis在项目中的实战应用]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2945</link>
                    <description>
<![CDATA[<h1 id="redis%E5%9C%A8asp.net-core%E9%A1%B9%E7%9B%AE%E4%B8%AD%E7%9A%84%E5%AE%9E%E6%88%98%E5%BA%94%E7%94%A8%EF%BC%9A%E4%BB%8E%E7%BC%93%E5%AD%98%E5%88%B0%E5%88%86%E5%B8%83%E5%BC%8F%E9%94%81" tabindex="-1">Redis在Asp.Net Core项目中的实战应用：从缓存到分布式锁</h1><blockquote><p>通过前几篇的学习，我们已经掌握了Redis的核心概念和数据持久化。但理论终究要落地到实践。今天，我们将把Redis真正集成到Asp.Net Core项目中，解决真实业务场景中的性能瓶颈和分布式难题。</p></blockquote><p>在现代Web开发中，Redis早已不是可选项，而是构建高性能、高可用系统的必备组件。作为.Net开发者，掌握如何在Asp.Net Core中熟练使用Redis，是你进阶高级开发的必经之路。</p><h2 id="%E4%B8%80%E3%80%81%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87%EF%BC%9A%E5%9C%A8asp.net-core%E4%B8%AD%E9%9B%86%E6%88%90redis" tabindex="-1">一、环境准备：在Asp.Net Core中集成Redis</h2><h3 id="1.-%E5%AE%89%E8%A3%85%E5%BF%85%E8%A6%81%E7%9A%84nuget%E5%8C%85" tabindex="-1">1. 安装必要的NuGet包</h3><p>首先，在你的Asp.Net Core项目中安装最常用的Redis客户端：</p><pre><code class="language-bash"># 使用Package Manager Console
Install-Package StackExchange.Redis
# 或使用.NET CLI
dotnet add package StackExchange.Redis</code></pre><p><strong>为什么选择StackExchange.Redis？</strong></p><ul><li>高性能且线程安全</li><li>支持同步和异步操作</li><li>活跃的社区支持和持续更新</li><li>Microsoft官方推荐</li></ul><h3 id="2.-%E9%85%8D%E7%BD%AEredis%E6%9C%8D%E5%8A%A1" tabindex="-1">2. 
配置Redis服务</h3><p>在<code>appsettings.json</code>中添加Redis连接字符串：</p><pre><code class="language-json">{  &quot;ConnectionStrings&quot;: {    &quot;Redis&quot;: &quot;localhost:6379,password=your_password,abortConnect=false,connectTimeout=30000&quot;  },  // 其他配置...}</code></pre><p>在<code>Program.cs</code>中注册Redis服务：</p><pre><code class="language-csharp">using StackExchange.Redis;var builder = WebApplication.CreateBuilder(args);// 添加Redis服务builder.Services.AddSingleton&lt;IConnectionMultiplexer&gt;(sp =&gt;     ConnectionMultiplexer.Connect(builder.Configuration.GetConnectionString(&quot;Redis&quot;)));// 注册自定义Redis服务（推荐）builder.Services.AddScoped&lt;IRedisService, RedisService&gt;();var app = builder.Build();</code></pre><h3 id="3.-%E5%88%9B%E5%BB%BAredis%E6%9C%8D%E5%8A%A1%E5%B0%81%E8%A3%85" tabindex="-1">3. 创建Redis服务封装</h3><p>为了更好地使用Redis，我们创建一个服务封装类：</p><pre><code class="language-csharp">public interface IRedisService{    Task&lt;T&gt; GetAsync&lt;T&gt;(string key);    Task SetAsync&lt;T&gt;(string key, T value, TimeSpan? expiry = null);    Task&lt;bool&gt; RemoveAsync(string key);    Task&lt;bool&gt; ExistsAsync(string key);}public class RedisService : IRedisService{    private readonly IConnectionMultiplexer _redis;    private readonly IDatabase _database;    public RedisService(IConnectionMultiplexer redis)    {        _redis = redis;        _database = redis.GetDatabase();    }    public async Task&lt;T&gt; GetAsync&lt;T&gt;(string key)    {        var value = await _database.StringGetAsync(key);        if (value.IsNullOrEmpty)            return default;        return JsonSerializer.Deserialize&lt;T&gt;(value);    }    public async Task SetAsync&lt;T&gt;(string key, T value, TimeSpan? 
expiry = null)    {        var serializedValue = JsonSerializer.Serialize(value);        await _database.StringSetAsync(key, serializedValue, expiry);    }    public async Task&lt;bool&gt; RemoveAsync(string key)    {        return await _database.KeyDeleteAsync(key);    }    public async Task&lt;bool&gt; ExistsAsync(string key)    {        return await _database.KeyExistsAsync(key);    }}</code></pre><p>现在，我们的基础环境已经搭建完成，可以开始实战应用了！</p><h2 id="%E4%BA%8C%E3%80%81%E7%BC%93%E5%AD%98%E5%AE%9E%E6%88%98%EF%BC%9A%E6%8F%90%E5%8D%87%E7%B3%BB%E7%BB%9F%E6%80%A7%E8%83%BD%E7%9A%84%E5%88%A9%E5%99%A8" tabindex="-1">二、缓存实战：提升系统性能的利器</h2><p>缓存是Redis最经典的应用场景。让我们来看几个实际的例子。</p><h3 id="1.-%E5%95%86%E5%93%81%E4%BF%A1%E6%81%AF%E7%BC%93%E5%AD%98" tabindex="-1">1. 商品信息缓存</h3><p>假设我们有一个电商系统，商品信息的查询非常频繁：</p><pre><code class="language-csharp">public interface IProductService{    Task&lt;Product&gt; GetProductByIdAsync(int productId);    Task UpdateProductAsync(Product product);}public class ProductService : IProductService{    private readonly IProductRepository _productRepository;    private readonly IRedisService _redisService;    private readonly ILogger&lt;ProductService&gt; _logger;    public ProductService(IProductRepository productRepository,                          IRedisService redisService,                         ILogger&lt;ProductService&gt; logger)    {        _productRepository = productRepository;        _redisService = redisService;        _logger = logger;    }    public async Task&lt;Product&gt; GetProductByIdAsync(int productId)    {        var cacheKey = $&quot;product:{productId}&quot;;                // 1. 先查缓存        var product = await _redisService.GetAsync&lt;Product&gt;(cacheKey);        if (product != null)        {            _logger.LogInformation(&quot;从缓存获取商品 {ProductId}&quot;, productId);            return product;        }        // 2. 
缓存不存在，查询数据库        _logger.LogInformation(&quot;缓存未命中，从数据库查询商品 {ProductId}&quot;, productId);        product = await _productRepository.GetByIdAsync(productId);        if (product == null)            return null;        // 3. 写入缓存，设置30分钟过期        await _redisService.SetAsync(cacheKey, product, TimeSpan.FromMinutes(30));                return product;    }    public async Task UpdateProductAsync(Product product)    {        // 更新数据库        await _productRepository.UpdateAsync(product);                // 删除缓存，保证数据一致性        var cacheKey = $&quot;product:{product.Id}&quot;;        await _redisService.RemoveAsync(cacheKey);                _logger.LogInformation(&quot;更新商品 {ProductId} 并清除缓存&quot;, product.Id);    }}</code></pre><p><strong>缓存策略分析</strong>：</p><ul><li><strong>读取时</strong>：先查缓存，命中则返回；未命中查数据库并回写缓存</li><li><strong>更新时</strong>：先更新数据库，再删除缓存（Cache-Aside模式）</li><li><strong>过期时间</strong>：设置合理的过期时间，防止数据长期不更新</li></ul><h3 id="2.-%E5%9C%A8controller%E4%B8%AD%E4%BD%BF%E7%94%A8%E7%BC%93%E5%AD%98%E6%9C%8D%E5%8A%A1" tabindex="-1">2. 
在Controller中使用缓存服务</h3><pre><code class="language-csharp">[ApiController][Route(&quot;api/[controller]&quot;)]public class ProductsController : ControllerBase{    private readonly IProductService _productService;    public ProductsController(IProductService productService)    {        _productService = productService;    }    [HttpGet(&quot;{id}&quot;)]    public async Task&lt;ActionResult&lt;Product&gt;&gt; GetProduct(int id)    {        var product = await _productService.GetProductByIdAsync(id);        if (product == null)            return NotFound();                    return product;    }    [HttpPut(&quot;{id}&quot;)]    public async Task&lt;IActionResult&gt; UpdateProduct(int id, Product product)    {        if (id != product.Id)            return BadRequest();                    await _productService.UpdateProductAsync(product);        return NoContent();    }}</code></pre><h2 id="%E4%B8%89%E3%80%81%E5%BA%94%E5%AF%B9%E7%BC%93%E5%AD%98%22%E4%B8%89%E5%89%91%E5%AE%A2%22%EF%BC%9A%E7%A9%BF%E9%80%8F%E3%80%81%E5%87%BB%E7%A9%BF%E3%80%81%E9%9B%AA%E5%B4%A9" tabindex="-1">三、应对缓存&quot;三剑客&quot;：穿透、击穿、雪崩</h2><p>在实际生产环境中，仅仅实现基础缓存是不够的，我们还需要应对三个经典问题。</p><h3 id="1.-%E7%BC%93%E5%AD%98%E7%A9%BF%E9%80%8F%EF%BC%9A%E6%9F%A5%E8%AF%A2%E4%B8%8D%E5%AD%98%E5%9C%A8%E7%9A%84%E6%95%B0%E6%8D%AE" tabindex="-1">1. 
缓存穿透：查询不存在的数据</h3><p><strong>问题</strong>：恶意请求查询数据库中不存在的数据，导致请求直接打到数据库。</p><p><strong>解决方案</strong>：缓存空对象</p><pre><code class="language-csharp">public async Task&lt;Product&gt; GetProductByIdWithNullCacheAsync(int productId){    var cacheKey = $&quot;product:{productId}&quot;;        var product = await _redisService.GetAsync&lt;Product&gt;(cacheKey);    if (product != null)    {        // 如果是特殊的空对象标记，返回null        if (product.Id == -1)            return null;                    return product;    }    product = await _productRepository.GetByIdAsync(productId);    if (product == null)    {        // 缓存空对象，设置较短的过期时间        var nullProduct = new Product { Id = -1 }; // 特殊标记        await _redisService.SetAsync(cacheKey, nullProduct, TimeSpan.FromMinutes(5));        return null;    }    await _redisService.SetAsync(cacheKey, product, TimeSpan.FromMinutes(30));    return product;}</code></pre><h3 id="2.-%E7%BC%93%E5%AD%98%E5%87%BB%E7%A9%BF%EF%BC%9A%E7%83%AD%E7%82%B9key%E7%AA%81%E7%84%B6%E5%A4%B1%E6%95%88" tabindex="-1">2. 
缓存击穿：热点Key突然失效</h3><p><strong>问题</strong>：某个热点Key在失效的瞬间，大量请求同时到达数据库。</p><p><strong>解决方案</strong>：使用互斥锁</p><pre><code class="language-csharp">public async Task&lt;Product&gt; GetProductWithMutexAsync(int productId){    var cacheKey = $&quot;product:{productId}&quot;;    var mutexKey = $&quot;mutex:product:{productId}&quot;;        // 尝试获取缓存    var product = await _redisService.GetAsync&lt;Product&gt;(cacheKey);    if (product != null)        return product;    // 使用Redis实现分布式锁    var lockToken = Guid.NewGuid().ToString();    var locked = await _redisService.AcquireLockAsync(mutexKey, lockToken, TimeSpan.FromSeconds(5));        if (!locked)    {        // 获取锁失败，稍后重试        await Task.Delay(100);        return await GetProductWithMutexAsync(productId);    }    try    {        // 双重检查，防止重复查询数据库        product = await _redisService.GetAsync&lt;Product&gt;(cacheKey);        if (product != null)            return product;        // 查询数据库        product = await _productRepository.GetByIdAsync(productId);        if (product != null)        {            await _redisService.SetAsync(cacheKey, product, TimeSpan.FromMinutes(30));        }                return product;    }    finally    {        // 释放锁        await _redisService.ReleaseLockAsync(mutexKey, lockToken);    }}</code></pre><h3 id="3.-%E7%BC%93%E5%AD%98%E9%9B%AA%E5%B4%A9%EF%BC%9A%E5%A4%A7%E9%87%8Fkey%E5%90%8C%E6%97%B6%E5%A4%B1%E6%95%88" tabindex="-1">3. 
缓存雪崩：大量Key同时失效</h3><p><strong>问题</strong>：大量缓存Key在同一时间失效，导致所有请求直接访问数据库。</p><p><strong>解决方案</strong>：为过期时间增加随机抖动，错开各Key的失效时间</p><pre><code class="language-csharp">public async Task SetWithRandomExpiryAsync&lt;T&gt;(string key, T value, TimeSpan baseExpiry)
{
    // 在基础过期时间上增加随机偏差（±10%），避免大量Key同时过期
    var variance = (int)(baseExpiry.TotalMinutes * 0.1); // 10% 偏差
    // Random.Shared 线程安全；Next 的上界是开区间，用 variance + 1 让偏差能取到 +10%
    var offset = Random.Shared.Next(-variance, variance + 1);
    var actualExpiry = baseExpiry.Add(TimeSpan.FromMinutes(offset));

    await _redisService.SetAsync(key, value, actualExpiry);
}</code></pre><h2 id="%E5%9B%9B%E3%80%81%E5%88%86%E5%B8%83%E5%BC%8F%E9%94%81%EF%BC%9A%E6%8E%A7%E5%88%B6%E5%88%86%E5%B8%83%E5%BC%8F%E7%8E%AF%E5%A2%83%E4%B8%8B%E7%9A%84%E8%B5%84%E6%BA%90%E8%AE%BF%E9%97%AE" tabindex="-1">四、分布式锁：控制分布式环境下的资源访问</h2><p>在分布式系统中，我们需要控制多个服务实例对共享资源的访问。</p><h3 id="1.-%E6%89%A9%E5%B1%95redis%E6%9C%8D%E5%8A%A1%E6%94%AF%E6%8C%81%E5%88%86%E5%B8%83%E5%BC%8F%E9%94%81" tabindex="-1">1. 扩展Redis服务支持分布式锁</h3><pre><code class="language-csharp">public interface IRedisService
{
    // ... 其他方法

    Task&lt;bool&gt; AcquireLockAsync(string key, string value, TimeSpan expiry);
    Task&lt;bool&gt; ReleaseLockAsync(string key, string value);
    Task&lt;bool&gt; ExtendLockAsync(string key, string value, TimeSpan expiry);
}

public class RedisService : IRedisService
{
    // ... 
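    // 补充说明（假设性示意，非原文实现）：下面的锁实现依赖本类中注入的 StackExchange.Redis IDatabase，
    // 大致形如：
    //   private readonly IDatabase _database;
    //   public RedisService(IConnectionMultiplexer redis) { _database = redis.GetDatabase(); }
    // 其中 StringSetAsync(key, value, expiry, When.NotExists) 等价于带 NX 与过期时间的 SET 命令，
    // 把“判断key不存在”和“写入值并设置过期时间”合并为一次原子操作，这正是锁不会被并发抢占的关键。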
其他实现    public async Task&lt;bool&gt; AcquireLockAsync(string key, string value, TimeSpan expiry)    {        // 使用SET NX EX命令原子性地获取锁        return await _database.StringSetAsync(key, value, expiry, When.NotExists);    }    public async Task&lt;bool&gt; ReleaseLockAsync(string key, string value)    {        // 使用Lua脚本保证原子性：只有锁的值匹配时才删除        var luaScript = @&quot;            if redis.call(&#39;GET&#39;, KEYS[1]) == ARGV[1] then                return redis.call(&#39;DEL&#39;, KEYS[1])            else                return 0            end&quot;;        var result = await _database.ScriptEvaluateAsync(luaScript, new RedisKey[] { key }, new RedisValue[] { value });        return (int)result == 1;    }    public async Task&lt;bool&gt; ExtendLockAsync(string key, string value, TimeSpan expiry)    {        var luaScript = @&quot;            if redis.call(&#39;GET&#39;, KEYS[1]) == ARGV[1] then                return redis.call(&#39;EXPIRE&#39;, KEYS[1], ARGV[2])            else                return 0            end&quot;;        var result = await _database.ScriptEvaluateAsync(luaScript,             new RedisKey[] { key },             new RedisValue[] { value, (int)expiry.TotalSeconds });                    return (bool)result;    }}</code></pre><h3 id="2.-%E4%BD%BF%E7%94%A8%E5%88%86%E5%B8%83%E5%BC%8F%E9%94%81%E5%AE%9E%E7%8E%B0%E7%A7%92%E6%9D%80%E5%8A%9F%E8%83%BD" tabindex="-1">2. 
使用分布式锁实现秒杀功能</h3><pre><code class="language-csharp">public class SeckillService
{
    private readonly IRedisService _redisService;
    private readonly IOrderRepository _orderRepository;

    public SeckillService(IRedisService redisService, IOrderRepository orderRepository)
    {
        _redisService = redisService;
        _orderRepository = orderRepository;
    }

    public async Task&lt;bool&gt; ProcessSeckillAsync(int productId, int userId)
    {
        var lockKey = $&quot;seckill:lock:{productId}&quot;;
        var lockValue = Guid.NewGuid().ToString();
        var stockKey = $&quot;product:stock:{productId}&quot;;

        // 先获取分布式锁；获取失败直接返回，避免在finally中释放并未持有的锁
        var locked = await _redisService.AcquireLockAsync(lockKey, lockValue, TimeSpan.FromSeconds(10));
        if (!locked)
            return false; // 获取锁失败，稍后重试

        try
        {
            // 检查库存
            var stock = await _redisService.GetAsync&lt;int&gt;(stockKey);
            if (stock &lt;= 0)
                return false;

            // 扣减库存（持有锁期间执行，读-改-写不会被其他实例穿插）
            await _redisService.SetAsync(stockKey, stock - 1);

            // 创建订单
            await _orderRepository.CreateAsync(new Order
            {
                ProductId = productId,
                UserId = userId,
                CreatedAt = DateTime.UtcNow
            });

            return true;
        }
        finally
        {
            // 释放自己持有的锁
            await _redisService.ReleaseLockAsync(lockKey, lockValue);
        }
    }
}</code></pre><h2 id="%E4%BA%94%E3%80%81%E4%BC%9A%E8%AF%9D%E5%AD%98%E5%82%A8%EF%BC%9A%E5%AE%9E%E7%8E%B0%E5%88%86%E5%B8%83%E5%BC%8Fsession" tabindex="-1">五、会话存储：实现分布式Session</h2><p>在微服务架构中，我们需要在多台服务器之间共享用户会话状态。</p><h3 id="1.-%E9%85%8D%E7%BD%AEredis%E4%BD%9C%E4%B8%BA%E5%88%86%E5%B8%83%E5%BC%8Fsession%E5%AD%98%E5%82%A8" tabindex="-1">1. 
配置Redis作为分布式Session存储</h3><p>在<code>Program.cs</code>中：</p><pre><code class="language-csharp">// 添加Redis分布式缓存
builder.Services.AddStackExchangeRedisCache(options =&gt;
{
    options.Configuration = builder.Configuration.GetConnectionString(&quot;Redis&quot;);
    options.InstanceName = &quot;MyApp_&quot;;
});

// 配置Session
builder.Services.AddSession(options =&gt;
{
    options.IdleTimeout = TimeSpan.FromMinutes(30);
    options.Cookie.HttpOnly = true;
    options.Cookie.IsEssential = true;
});

var app = builder.Build();

// 别忘了在中间件管道中启用Session，否则Session不会生效
app.UseSession();</code></pre><p>在Controller中使用：</p><pre><code class="language-csharp">public class AccountController : Controller
{
    [HttpPost]
    public async Task&lt;IActionResult&gt; Login(LoginModel model)
    {
        // 验证用户...
        var user = await AuthenticateUserAsync(model);
        if (user == null)
            return Unauthorized();

        // 存储用户信息到Session
        HttpContext.Session.SetString(&quot;UserId&quot;, user.Id.ToString());
        HttpContext.Session.SetString(&quot;UserName&quot;, user.UserName);
        HttpContext.Session.SetInt32(&quot;UserRole&quot;, (int)user.Role);

        return RedirectToAction(&quot;Index&quot;, &quot;Home&quot;);
    }

    [HttpGet]
    public IActionResult GetUserInfo()
    {
        if (!HttpContext.Session.TryGetValue(&quot;UserId&quot;, out _))
            return Unauthorized();

        var userInfo = new
        {
            UserId = HttpContext.Session.GetString(&quot;UserId&quot;),
            UserName = HttpContext.Session.GetString(&quot;UserName&quot;),
            Role = HttpContext.Session.GetInt32(&quot;UserRole&quot;)
        };

        return Ok(userInfo);
    }
}</code></pre><h2 id="%E5%85%AD%E3%80%81%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97%EF%BC%9A%E5%AE%9E%E7%8E%B0%E5%BC%82%E6%AD%A5%E4%BB%BB%E5%8A%A1%E5%A4%84%E7%90%86" tabindex="-1">六、消息队列：实现异步任务处理</h2><p>虽然Redis不是专业的消息队列，但对于简单的场景非常实用。</p><h3 id="1.-%E5%9F%BA%E4%BA%8Elist%E5%AE%9E%E7%8E%B0%E7%AE%80%E5%8D%95%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97" tabindex="-1">1. 
基于List实现简单消息队列</h3><pre><code class="language-csharp">public interface IMessageQueueService{    Task PublishAsync&lt;T&gt;(string queueName, T message);    Task&lt;T&gt; ConsumeAsync&lt;T&gt;(string queueName, TimeSpan? timeout = null);}public class RedisMessageQueueService : IMessageQueueService{    private readonly IConnectionMultiplexer _redis;    private readonly IDatabase _database;    public RedisMessageQueueService(IConnectionMultiplexer redis)    {        _redis = redis;        _database = redis.GetDatabase();    }    public async Task PublishAsync&lt;T&gt;(string queueName, T message)    {        var serializedMessage = JsonSerializer.Serialize(message);        await _database.ListLeftPushAsync(queueName, serializedMessage);    }    public async Task&lt;T&gt; ConsumeAsync&lt;T&gt;(string queueName, TimeSpan? timeout = null)    {        var value = await _database.ListRightPopAsync(queueName);        if (value.IsNullOrEmpty)        {            if (timeout.HasValue)            {                // 可以实现阻塞版本的消费                // 这里简化处理，返回默认值                await Task.Delay(timeout.Value);            }            return default;        }        return JsonSerializer.Deserialize&lt;T&gt;(value);    }}</code></pre><h3 id="2.-%E4%BD%BF%E7%94%A8%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97%E5%A4%84%E7%90%86%E8%80%97%E6%97%B6%E4%BB%BB%E5%8A%A1" tabindex="-1">2. 
使用消息队列处理耗时任务</h3><pre><code class="language-csharp">public class EmailService{    private readonly IMessageQueueService _messageQueue;    private readonly ILogger&lt;EmailService&gt; _logger;    public EmailService(IMessageQueueService messageQueue, ILogger&lt;EmailService&gt; logger)    {        _messageQueue = messageQueue;        _logger = logger;    }    // 发送邮件到队列（非阻塞）    public async Task SendWelcomeEmailAsync(string email, string userName)    {        var emailMessage = new EmailMessage        {            To = email,            Subject = &quot;欢迎注册&quot;,            Body = $&quot;亲爱的 {userName}，欢迎使用我们的服务！&quot;        };        await _messageQueue.PublishAsync(&quot;email_queue&quot;, emailMessage);        _logger.LogInformation(&quot;欢迎邮件已加入队列，收件人: {Email}&quot;, email);    }}// 后台服务处理队列中的邮件public class EmailBackgroundService : BackgroundService{    private readonly IMessageQueueService _messageQueue;    private readonly IEmailSender _emailSender;    private readonly ILogger&lt;EmailBackgroundService&gt; _logger;    public EmailBackgroundService(IMessageQueueService messageQueue,                                  IEmailSender emailSender,                                 ILogger&lt;EmailBackgroundService&gt; logger)    {        _messageQueue = messageQueue;        _emailSender = emailSender;        _logger = logger;    }    protected override async Task ExecuteAsync(CancellationToken stoppingToken)    {        while (!stoppingToken.IsCancellationRequested)        {            try            {                var message = await _messageQueue.ConsumeAsync&lt;EmailMessage&gt;(&quot;email_queue&quot;);                if (message != null)                {                    await _emailSender.SendEmailAsync(message.To, message.Subject, message.Body);                    _logger.LogInformation(&quot;邮件发送成功: {To}&quot;, message.To);                }                else                {                    // 队列为空，等待一段时间                    await Task.Delay(1000, 
stoppingToken);                }            }            catch (Exception ex)            {                _logger.LogError(ex, &quot;处理邮件队列时发生错误&quot;);                await Task.Delay(5000, stoppingToken); // 错误时等待更长时间            }        }    }}</code></pre><h2 id="%E4%B8%83%E3%80%81%E6%80%A7%E8%83%BD%E4%BC%98%E5%8C%96%E4%B8%8E%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5" tabindex="-1">七、性能优化与最佳实践</h2><h3 id="1.-%E8%BF%9E%E6%8E%A5%E5%A4%8D%E7%94%A8" tabindex="-1">1. 连接复用</h3><p>确保在整个应用程序中复用<code>IConnectionMultiplexer</code>实例：</p><pre><code class="language-csharp">// 在Program.cs中注册为Singletonbuilder.Services.AddSingleton&lt;IConnectionMultiplexer&gt;(sp =&gt;     ConnectionMultiplexer.Connect(builder.Configuration.GetConnectionString(&quot;Redis&quot;)));</code></pre><h3 id="2.-%E4%BD%BF%E7%94%A8pipeline%E6%89%B9%E9%87%8F%E6%93%8D%E4%BD%9C" tabindex="-1">2. 使用Pipeline批量操作</h3><pre><code class="language-csharp">public async Task&lt;bool&gt; SetMultipleAsync(Dictionary&lt;string, object&gt; keyValuePairs, TimeSpan? expiry = null){    var batch = _database.CreateBatch();        var tasks = new List&lt;Task&gt;();    foreach (var kvp in keyValuePairs)    {        var serializedValue = JsonSerializer.Serialize(kvp.Value);        tasks.Add(batch.StringSetAsync(kvp.Key, serializedValue, expiry));    }        batch.Execute();    await Task.WhenAll(tasks);        return tasks.All(t =&gt; t.IsCompletedSuccessfully);}</code></pre><h3 id="3.-%E5%90%88%E7%90%86%E7%9A%84%E5%BA%8F%E5%88%97%E5%8C%96%E9%80%89%E6%8B%A9" tabindex="-1">3. 合理的序列化选择</h3><pre><code class="language-csharp">// 对于简单类型，考虑使用更高效的序列化方式public async Task SetStringAsync(string key, string value, TimeSpan? 
expiry = null){    // 对于字符串，直接存储，避免JSON序列化开销    await _database.StringSetAsync(key, value, expiry);}public async Task&lt;string&gt; GetStringAsync(string key){    return await _database.StringGetAsync(key);}</code></pre><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>通过本篇的实战演练，我们掌握了在Asp.Net Core项目中集成和使用Redis的完整方案：</p><ol><li><strong>环境搭建</strong>：配置StackExchange.Redis客户端</li><li><strong>缓存应用</strong>：商品信息缓存及缓存策略</li><li><strong>问题解决</strong>：应对缓存穿透、击穿、雪崩的完整方案</li><li><strong>分布式锁</strong>：实现秒杀等并发控制场景</li><li><strong>会话存储</strong>：配置分布式Session</li><li><strong>消息队列</strong>：实现异步任务处理</li><li><strong>性能优化</strong>：连接复用、批量操作等最佳实践</li></ol><p><strong>关键收获</strong>：</p><ul><li>Redis在Asp.Net Core中的集成非常简单直接</li><li>合理使用缓存可以大幅提升系统性能</li><li>分布式锁是解决并发问题的利器</li><li>选择合适的序列化方式对性能有重要影响</li></ul><p>现在，你可以自信地在你的Asp.Net Core项目中使用Redis来解决实际的性能瓶颈和分布式协调问题了！</p><p>欢迎在评论区分享你在集成过程中遇到的问题和解决方案！</p>]]>
                    </description>
                    <pubDate>Sat, 05 Apr 2025 08:43:42 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Redis入门：Redis的持久化与高可用]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2944</link>
                    <description>
                            <![CDATA[<h1 id="redis%E7%9A%84%E6%8C%81%E4%B9%85%E5%8C%96%E4%B8%8E%E9%AB%98%E5%8F%AF%E7%94%A8%EF%BC%9A%E5%A6%82%E4%BD%95%E9%81%BF%E5%85%8D%22%E5%86%85%E5%AD%98%E5%A4%B1%E5%BF%86%22%EF%BC%9F" tabindex="-1">Redis的持久化与高可用：如何避免&quot;内存失忆&quot;？</h1><blockquote><p>在之前的文章中，我们领略了Redis基于内存的极速性能。但这也引出了一个关键问题：<strong>如果服务器重启或宕机，内存中的数据岂不是会全部丢失？</strong> 今天，我们就来深入探讨Redis如何解决这个&quot;阿喀琉斯之踵&quot;，以及如何构建高可用的Redis架构。</p></blockquote><h2 id="%E4%B8%80%E3%80%81%E6%95%B0%E6%8D%AE%E6%8C%81%E4%B9%85%E5%8C%96%EF%BC%9A%E4%B8%BA%E4%BB%80%E4%B9%88%E9%9C%80%E8%A6%81%22%E8%AE%B0%E5%BF%86%E5%A4%87%E4%BB%BD%22%EF%BC%9F" tabindex="-1">一、数据持久化：为什么需要&quot;记忆备份&quot;？</h2><p>想象一下，Redis就像一个拥有&quot;超强记忆力&quot;的天才，但它的记忆只存在于脑海中（内存）。一旦受到冲击（重启/宕机），所有记忆都会消失。为了避免这种&quot;失忆&quot;悲剧，Redis提供了两种&quot;记忆备份&quot;机制：<strong>RDB</strong>和<strong>AOF</strong>。</p><p><strong>持久化的本质</strong>：将内存中的数据以某种形式保存到磁盘中，确保在服务重启后能够恢复数据。</p><h2 id="%E4%BA%8C%E3%80%81rdb%E6%8C%81%E4%B9%85%E5%8C%96%EF%BC%9A%E7%BB%99%E6%95%B0%E6%8D%AE%E6%8B%8D%22%E5%BF%AB%E7%85%A7%22" tabindex="-1">二、RDB持久化：给数据拍&quot;快照&quot;</h2><h3 id="1.-%E5%B7%A5%E4%BD%9C%E5%8E%9F%E7%90%86" tabindex="-1">1. 工作原理</h3><p>RDB（Redis DataBase）的机制很简单：<strong>在特定时间点，将内存中所有数据生成一个快照文件</strong>保存到磁盘。这个文件通常以<code>.rdb</code>为后缀。</p><p>你可以把它理解为<strong>给整个数据库拍一张全家福</strong>，照片记录了那个瞬间的所有数据状态。</p><h3 id="2.-%E8%A7%A6%E5%8F%91%E6%9C%BA%E5%88%B6" tabindex="-1">2. 触发机制</h3><p>RDB有三种主要的触发方式：</p><h4 id="%E8%87%AA%E5%8A%A8%E8%A7%A6%E5%8F%91%EF%BC%88%E9%85%8D%E7%BD%AE%E7%AD%96%E7%95%A5%EF%BC%89" tabindex="-1">自动触发（配置策略）</h4><p>在<code>redis.conf</code>配置文件中，我们可以设置自动触发快照的条件：</p><pre><code class="language-bash"># 在900秒（15分钟）内，如果至少有1个key发生变化，则触发bgsavesave 900 1# 在300秒（5分钟）内，如果至少有10个key发生变化，则触发bgsave  save 300 10# 在60秒内，如果至少有10000个key发生变化，则触发bgsavesave 60 10000</code></pre><h4 id="%E6%89%8B%E5%8A%A8%E8%A7%A6%E5%8F%91" tabindex="-1">手动触发</h4><pre><code class="language-bash"># 1. SAVE命令（同步）- 阻塞主进程，直到快照完成，期间不处理任何请求127.0.0.1:6379&gt; SAVEOK# 2. 
BGSAVE命令（异步）- 后台执行快照，主进程继续处理请求127.0.0.1:6379&gt; BGSAVEBackground saving started</code></pre><h4 id="%E5%85%B6%E4%BB%96%E6%83%85%E5%86%B5" tabindex="-1">其他情况</h4><ul><li>执行<code>SHUTDOWN</code>命令关闭Redis时，如果没有开启AOF，会自动执行RDB快照</li><li>主从复制时，主节点会向从节点发送RDB文件进行全量同步</li></ul><h3 id="3.-rdb%E7%9A%84%E5%B7%A5%E4%BD%9C%E6%B5%81%E7%A8%8B%EF%BC%88bgsave%EF%BC%89" tabindex="-1">3. RDB的工作流程（BGSAVE）</h3><p>当我们执行<code>BGSAVE</code>时，Redis会：</p><ol><li><strong>Fork子进程</strong>：主进程创建一个子进程（copy-on-write机制，内存占用不会翻倍）</li><li><strong>子进程写盘</strong>：子进程将内存数据写入临时RDB文件</li><li><strong>替换旧文件</strong>：写入完成后，用新的RDB文件替换旧的</li><li><strong>清理工作</strong>：子进程退出，通知主进程完成</li></ol><h3 id="4.-%E4%BC%98%E7%BC%BA%E7%82%B9%E5%88%86%E6%9E%90" tabindex="-1">4. 优缺点分析</h3><p><strong>优点：</strong></p><ul><li>✅ <strong>性能影响小</strong>：BGSAVE通过子进程操作，主进程几乎不受影响</li><li>✅ <strong>文件紧凑</strong>：二进制格式，文件较小，适合备份和传输</li><li>✅ <strong>恢复速度快</strong>：恢复大数据集时比AOF快很多</li></ul><p><strong>缺点：</strong></p><ul><li>❌ <strong>可能丢失数据</strong>：两次快照之间的数据修改会丢失</li><li>❌ <strong>Fork可能阻塞</strong>：数据集很大时，fork操作本身可能耗时较长</li></ul><h2 id="%E4%B8%89%E3%80%81aof%E6%8C%81%E4%B9%85%E5%8C%96%EF%BC%9A%E8%AE%B0%E5%BD%95%E6%AF%8F%E4%B8%80%E4%B8%AA%22%E6%88%90%E9%95%BF%E7%9E%AC%E9%97%B4%22" tabindex="-1">三、AOF持久化：记录每一个&quot;成长瞬间&quot;</h2><h3 id="1.-%E5%B7%A5%E4%BD%9C%E5%8E%9F%E7%90%86-1" tabindex="-1">1. 工作原理</h3><p>AOF（Append Only File）采用了一种完全不同的思路：<strong>记录每一个写操作命令</strong>，以日志的形式追加到文件末尾。</p><p>这就像是写<strong>日记</strong>，记录下每一天发生的事情，而不是只拍几张照片。</p><h3 id="2.-%E9%85%8D%E7%BD%AE%E8%AF%A6%E8%A7%A3" tabindex="-1">2. 配置详解</h3><p>要开启AOF，需要在配置文件中设置：</p><pre><code class="language-bash"># 开启AOF持久化appendonly yes# AOF文件名appendfilename &quot;appendonly.aof&quot;# 同步策略appendfsync everysec</code></pre><h3 id="3.-aof%E7%9A%84%E4%B8%89%E7%A7%8D%E5%90%8C%E6%AD%A5%E7%AD%96%E7%95%A5" tabindex="-1">3. 
AOF的三种同步策略</h3><table><thead><tr><th>策略</th><th>机制</th><th>数据安全性</th><th>性能影响</th></tr></thead><tbody><tr><td><strong>always</strong></td><td>每个写命令都同步到磁盘</td><td><strong>最高</strong>，最多丢失一个命令</td><td><strong>最差</strong>，每次写都要磁盘IO</td></tr><tr><td><strong>everysec</strong></td><td>每秒同步一次（默认）</td><td><strong>平衡</strong>，最多丢失一秒数据</td><td><strong>良好</strong>，性能与安全的折中</td></tr><tr><td><strong>no</strong></td><td>由操作系统决定同步时机</td><td><strong>最低</strong>，可能丢失较多数据</td><td><strong>最好</strong>，完全异步</td></tr></tbody></table><p><strong>生产环境推荐使用<code>everysec</code></strong>，在性能和数据安全之间取得最佳平衡。</p><h3 id="4.-aof%E9%87%8D%E5%86%99%E6%9C%BA%E5%88%B6" tabindex="-1">4. AOF重写机制</h3><p>随着运行时间增长，AOF文件会越来越大，而且包含很多已经过期的命令（比如对同一个key的多次set）。为了解决这个问题，Redis提供了<strong>AOF重写</strong>机制。</p><p><strong>重写的本质</strong>：基于当前内存数据，生成一个新的、更精简的AOF文件，只包含恢复当前数据所需的最小命令集合。</p><pre><code class="language-bash"># 手动触发AOF重写127.0.0.1:6379&gt; BGREWRITEAOFBackground append only file rewriting started</code></pre><p><strong>自动重写配置</strong>：</p><pre><code class="language-bash"># 当AOF文件体积比上次重写后体积增长100%时，自动触发重写auto-aof-rewrite-percentage 100# AOF文件体积至少达到64MB时才会触发重写auto-aof-rewrite-min-size 64mb</code></pre><h3 id="5.-%E4%BC%98%E7%BC%BA%E7%82%B9%E5%88%86%E6%9E%90" tabindex="-1">5. 
优缺点分析</h3><p><strong>优点：</strong></p><ul><li>✅ <strong>数据安全</strong>：配置合理时最多丢失1秒数据</li><li>✅ <strong>可读性强</strong>：AOF文件是文本格式，可以人工阅读和修改</li><li>✅ <strong>容错性好</strong>：即使文件尾部有损坏，也可以用<code>redis-check-aof</code>工具修复</li></ul><p><strong>缺点：</strong></p><ul><li>❌ <strong>文件较大</strong>：通常比RDB文件大</li><li>❌ <strong>恢复速度慢</strong>：需要重新执行所有命令，恢复大数据集时较慢</li><li>❌ <strong>性能影响</strong>：在高负载下，AOF可能比RDB稍慢</li></ul><h2 id="%E5%9B%9B%E3%80%81rdb-vs-aof%EF%BC%9A%E5%A6%82%E4%BD%95%E9%80%89%E6%8B%A9%EF%BC%9F" tabindex="-1">四、RDB vs AOF：如何选择？</h2><table><thead><tr><th>特性</th><th>RDB</th><th>AOF</th></tr></thead><tbody><tr><td><strong>数据安全性</strong></td><td>可能丢失几分钟数据</td><td>最多丢失1秒数据</td></tr><tr><td><strong>恢复速度</strong></td><td><strong>快</strong></td><td>慢</td></tr><tr><td><strong>文件大小</strong></td><td><strong>小</strong>（压缩的二进制）</td><td>大（文本命令）</td></tr><tr><td><strong>性能影响</strong></td><td>BGSAVE时影响小</td><td>写入时有一定开销</td></tr><tr><td><strong>灾难恢复</strong></td><td>适合</td><td>更适合</td></tr><tr><td><strong>可读性</strong></td><td>不可读</td><td><strong>可读</strong></td></tr></tbody></table><h3 id="%E7%94%9F%E4%BA%A7%E7%8E%AF%E5%A2%83%E6%8E%A8%E8%8D%90%E7%AD%96%E7%95%A5" tabindex="-1">生产环境推荐策略</h3><p><strong>两者结合使用，发挥各自优势：</strong></p><pre><code class="language-bash"># 在redis.conf中同时开启RDB和AOFsave 900 1save 300 10save 60 10000appendonly yesappendfsync everysecauto-aof-rewrite-percentage 100auto-aof-rewrite-min-size 64mb</code></pre><p><strong>这种组合的优势：</strong></p><ul><li>AOF保证数据安全性，最多丢失1秒数据</li><li>RDB用于冷备份、快速重启和主从同步</li><li>重启时优先使用AOF恢复（数据更完整），其次使用RDB</li></ul><h2 id="%E4%BA%94%E3%80%81%E4%B8%BB%E4%BB%8E%E5%A4%8D%E5%88%B6%EF%BC%9A%E6%95%B0%E6%8D%AE%E5%A4%87%E4%BB%BD%E4%B8%8E%E8%AF%BB%E5%86%99%E5%88%86%E7%A6%BB" tabindex="-1">五、主从复制：数据备份与读写分离</h2><p>单机Redis存在单点故障风险，主从复制是构建高可用架构的第一步。</p><h3 id="1.-%E4%BB%80%E4%B9%88%E6%98%AF%E4%B8%BB%E4%BB%8E%E5%A4%8D%E5%88%B6%EF%BC%9F" tabindex="-1">1. 
什么是主从复制？</h3><ul><li><strong>主节点（Master）</strong>：负责写操作，将数据变化同步给从节点</li><li><strong>从节点（Slave/Replica）</strong>：复制主节点数据，负责读操作</li></ul><h3 id="2.-%E4%B8%BB%E4%BB%8E%E5%A4%8D%E5%88%B6%E7%9A%84%E5%B7%A5%E4%BD%9C%E5%8E%9F%E7%90%86" tabindex="-1">2. 主从复制的工作原理</h3><ol><li><strong>建立连接</strong>：从节点连接到主节点，发送<code>PSYNC</code>命令（Redis 2.8之前使用<code>SYNC</code>）</li><li><strong>全量同步</strong>：主节点执行BGSAVE生成RDB文件，发送给从节点</li><li><strong>增量同步</strong>：主节点将期间的写命令缓存起来，RDB传输完成后发送给从节点</li><li><strong>命令传播</strong>：之后主节点每收到写命令，就异步发送给从节点</li></ol><h3 id="3.-%E5%A6%82%E4%BD%95%E9%85%8D%E7%BD%AE%E4%B8%BB%E4%BB%8E%E5%A4%8D%E5%88%B6%EF%BC%9F" tabindex="-1">3. 如何配置主从复制？</h3><p>假设我们有：</p><ul><li>主节点：127.0.0.1:6379</li><li>从节点：127.0.0.1:6380</li></ul><p><strong>方法一：配置文件</strong><br />在从节点的<code>redis.conf</code>中添加：</p><pre><code class="language-bash">replicaof 127.0.0.1 6379
# 或者老版本使用：slaveof 127.0.0.1 6379</code></pre><p><strong>方法二：运行时命令</strong></p><pre><code class="language-bash"># 在从节点上执行
127.0.0.1:6380&gt; REPLICAOF 127.0.0.1 6379
OK</code></pre><h3 id="4.-%E9%AA%8C%E8%AF%81%E4%B8%BB%E4%BB%8E%E7%8A%B6%E6%80%81" tabindex="-1">4. 验证主从状态</h3><pre><code class="language-bash"># 在主节点查看复制信息
127.0.0.1:6379&gt; INFO replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6380,state=online,offset=1234,lag=0

# 在从节点查看复制信息
127.0.0.1:6380&gt; INFO replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up</code></pre><h3 id="5.-%E4%B8%BB%E4%BB%8E%E6%9E%B6%E6%9E%84%E7%9A%84%E4%BC%98%E5%8A%BF" tabindex="-1">5. 
主从架构的优势</h3><ol><li><strong>数据冗余</strong>：从节点是主节点的完整备份</li><li><strong>读写分离</strong>：主节点负责写，从节点负责读，提升读性能</li><li><strong>故障恢复基础</strong>：为自动故障转移做准备</li></ol><h2 id="%E5%85%AD%E3%80%81%E5%93%A8%E5%85%B5%EF%BC%88sentinel%EF%BC%89%E6%A8%A1%E5%BC%8F%EF%BC%9A%E5%AE%9E%E7%8E%B0%E8%87%AA%E5%8A%A8%E6%95%85%E9%9A%9C%E8%BD%AC%E7%A7%BB" tabindex="-1">六、哨兵（Sentinel）模式：实现自动故障转移</h2><p>主从复制解决了数据备份问题，但如果主节点宕机，需要手动切换，这期间服务会不可用。哨兵模式就是为了解决这个问题。</p><h3 id="1.-%E5%93%A8%E5%85%B5%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F" tabindex="-1">1. 哨兵是什么？</h3><p>Redis Sentinel是一个<strong>分布式系统</strong>，用于管理多个Redis实例，主要功能包括：</p><ul><li><strong>监控</strong>：持续检查主从节点是否正常运行</li><li><strong>通知</strong>：当被监控的Redis实例出现问题时，向管理员发送告警</li><li><strong>自动故障转移</strong>：主节点故障时，自动将一个从节点提升为新主节点，并让其他从节点复制新主节点</li><li><strong>配置提供者</strong>：客户端连接哨兵获取当前的主节点地址</li></ul><h3 id="2.-%E5%93%A8%E5%85%B5%E9%9B%86%E7%BE%A4%E6%9E%B6%E6%9E%84" tabindex="-1">2. 哨兵集群架构</h3><p>通常我们会部署<strong>奇数个哨兵实例</strong>（如3个或5个），通过投票机制来决定是否进行故障转移，避免误判。</p><h3 id="3.-%E6%90%AD%E5%BB%BA%E5%93%A8%E5%85%B5%E6%A8%A1%E5%BC%8F" tabindex="-1">3. 
搭建哨兵模式</h3><p>假设我们有：</p><ul><li>Redis主节点：127.0.0.1:6379</li><li>Redis从节点：127.0.0.1:6380、127.0.0.1:6381</li><li>哨兵节点：127.0.0.1:26379、127.0.0.1:26380、127.0.0.1:26381</li></ul><p><strong>创建哨兵配置文件</strong> <code>sentinel-26379.conf</code>：</p><pre><code class="language-bash">port 26379sentinel monitor mymaster 127.0.0.1 6379 2sentinel down-after-milliseconds mymaster 5000sentinel failover-timeout mymaster 60000sentinel parallel-syncs mymaster 1</code></pre><p><strong>参数解释：</strong></p><ul><li><code>sentinel monitor mymaster 127.0.0.1 6379 2</code>：监控名为mymaster的主节点，至少需要2个哨兵同意才判定主观下线</li><li><code>down-after-milliseconds</code>：5000毫秒无响应认为节点主观下线</li><li><code>failover-timeout</code>：故障转移超时时间</li><li><code>parallel-syncs</code>：故障转移后，同时向新主节点同步的从节点数量</li></ul><p><strong>启动哨兵：</strong></p><pre><code class="language-bash">redis-sentinel sentinel-26379.confredis-sentinel sentinel-26380.conf  redis-sentinel sentinel-26381.conf</code></pre><h3 id="4.-%E6%95%85%E9%9A%9C%E8%BD%AC%E7%A7%BB%E8%BF%87%E7%A8%8B" tabindex="-1">4. 故障转移过程</h3><ol><li><strong>主观下线</strong>：某个哨兵认为主节点不可用</li><li><strong>客观下线</strong>：多个哨兵（达到quorum数量）都认为主节点不可用</li><li><strong>选举领导者</strong>：哨兵之间选举一个领导者来执行故障转移</li><li><strong>故障转移</strong>：领导者哨兵选择一个合适的从节点提升为新主节点</li><li><strong>切换配置</strong>：通知其他从节点复制新主节点，更新客户端配置</li></ol><h3 id="5.-%E5%AE%A2%E6%88%B7%E7%AB%AF%E5%A6%82%E4%BD%95%E8%BF%9E%E6%8E%A5%E5%93%A8%E5%85%B5%EF%BC%9F" tabindex="-1">5. 
客户端如何连接哨兵？</h3><p>客户端不再直接连接Redis节点，而是连接哨兵集群来获取当前的主节点地址。</p><p><strong>Java客户端示例（使用Jedis）：</strong></p><pre><code class="language-java">Set&lt;String&gt; sentinels = new HashSet&lt;&gt;();sentinels.add(&quot;127.0.0.1:26379&quot;);sentinels.add(&quot;127.0.0.1:26380&quot;); sentinels.add(&quot;127.0.0.1:26381&quot;);JedisSentinelPool pool = new JedisSentinelPool(&quot;mymaster&quot;, sentinels);try (Jedis jedis = pool.getResource()) {    // 现在操作的是当前的主节点    jedis.set(&quot;key&quot;, &quot;value&quot;);}</code></pre><h2 id="%E4%B8%83%E3%80%81%E6%8C%81%E4%B9%85%E5%8C%96%E4%B8%8E%E9%AB%98%E5%8F%AF%E7%94%A8%E9%85%8D%E7%BD%AE%E6%80%BB%E7%BB%93" tabindex="-1">七、持久化与高可用配置总结</h2><table><thead><tr><th>方案</th><th>数据安全</th><th>可用性</th><th>复杂度</th><th>适用场景</th></tr></thead><tbody><tr><td><strong>单机+持久化</strong></td><td>中</td><td>低</td><td>低</td><td>开发测试、非核心业务</td></tr><tr><td><strong>主从复制</strong></td><td>高</td><td>中</td><td>中</td><td>读多写少，需要备份</td></tr><tr><td><strong>哨兵模式</strong></td><td>高</td><td><strong>高</strong></td><td>中</td><td>生产环境通用方案</td></tr><tr><td><strong>Redis集群</strong></td><td>高</td><td><strong>极高</strong></td><td>高</td><td>海量数据、高并发</td></tr></tbody></table><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>通过本篇的学习，我们掌握了构建可靠Redis系统的核心技术：</p><ol><li><strong>持久化是基础</strong>：RDB提供快照备份，AOF保证命令不丢失，两者结合使用最稳妥</li><li><strong>主从复制提供冗余</strong>：数据多副本，读写分离提升性能</li><li><strong>哨兵实现高可用</strong>：自动故障转移，服务不中断</li></ol><p><strong>记住这个演进路径：</strong><br />单机Redis → 开启持久化 → 搭建主从复制 → 部署哨兵集群</p><p>现在，你的Redis已经不再是那个&quot;内存失忆&quot;的脆弱系统，而是一个具备数据持久化和自动故障恢复能力的高可用服务！</p><hr /><p><strong>思考与实践：</strong></p><ol><li>在你的开发环境中尝试配置RDB和AOF，观察文件生成情况</li><li>搭建一个一主二从的Redis环境，并验证数据同步</li><li>部署三节点哨兵集群，模拟主节点宕机，观察自动故障转移过程</li></ol><p>欢迎在评论区分享你在配置过程中遇到的问题和解决方案！</p>]]>
                    </description>
                    <pubDate>Thu, 27 Mar 2025 08:39:17 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Redis入门：玩转Redis五大核心数据结构]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2943</link>
                    <description>
                            <![CDATA[<h1 id="%E7%8E%A9%E8%BD%ACredis%E4%BA%94%E5%A4%A7%E6%A0%B8%E5%BF%83%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%EF%BC%9A%E4%BB%8E%E8%AE%A1%E6%95%B0%E5%99%A8%E5%88%B0%E6%8E%92%E8%A1%8C%E6%A6%9C" tabindex="-1">玩转Redis五大核心数据结构：从计数器到排行榜</h1><blockquote><p>在上一篇中，我们知道了Redis为什么这么快，以及如何搭建环境。但Redis真正的威力，来自于它丰富的数据结构。如果说简单的键值对是Redis的&quot;骨架&quot;，那么这些数据结构就是它的&quot;肌肉&quot;，让Redis能够优雅地解决各种复杂的业务场景。</p></blockquote><p>Redis最吸引人的地方在于，它不仅仅是一个简单的键值存储，而是一个<strong>数据结构服务器</strong>。今天，我们将深入探索Redis的五大核心数据结构：<strong>String（字符串）</strong>、<strong>Hash（哈希）</strong>、<strong>List（列表）</strong>、<strong>Set（集合）</strong> 和 <strong>Sorted Set（有序集合）</strong>。</p><h2 id="%E4%B8%80%E3%80%81string%EF%BC%88%E5%AD%97%E7%AC%A6%E4%B8%B2%EF%BC%89%EF%BC%9A%E4%B8%8D%E6%AD%A2%E6%98%AF%E6%96%87%E6%9C%AC" tabindex="-1">一、String（字符串）：不止是文本</h2><p>String是Redis最基本的数据类型，一个Key对应一个Value。但别被它的名字骗了——它不仅可以存储文本，还可以存储数字（整数或浮点数）甚至是二进制数据（如图片或序列化对象）。</p><h3 id="%E6%A0%B8%E5%BF%83%E5%91%BD%E4%BB%A4%E4%B8%8E%E7%89%B9%E6%80%A7" tabindex="-1">核心命令与特性</h3><pre><code class="language-bash"># 1. 基础设置与获取127.0.0.1:6379&gt; SET username &quot;redis_learner&quot;OK127.0.0.1:6379&gt; GET username&quot;redis_learner&quot;# 2. 数字操作 - Redis知道你是数字时会允许数学运算127.0.0.1:6379&gt; SET page_views 100OK127.0.0.1:6379&gt; INCR page_views        # 增加1(integer) 101127.0.0.1:6379&gt; INCRBY page_views 10   # 增加指定数值(integer) 111127.0.0.1:6379&gt; DECR page_views        # 减少1(integer) 110# 3. 批量操作 - 提升效率127.0.0.1:6379&gt; MSET user:1001:name &quot;Alice&quot; user:1001:age 25 user:1001:city &quot;Beijing&quot;OK127.0.0.1:6379&gt; MGET user:1001:name user:1001:age user:1001:city1) &quot;Alice&quot;2) &quot;25&quot;3) &quot;Beijing&quot;# 4. 
条件设置 - 实现分布式锁的基础127.0.0.1:6379&gt; SETNX lock:order_123 &quot;client_1&quot;  # 只有当key不存在时才设置(integer) 1  # 设置成功127.0.0.1:6379&gt; SETNX lock:order_123 &quot;client_2&quot;(integer) 0  # 设置失败，因为key已存在</code></pre><h3 id="%E5%AE%9E%E6%88%98%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF" tabindex="-1">实战应用场景</h3><ol><li><strong>缓存</strong>：存储序列化的用户信息、页面片段等</li><li><strong>计数器</strong>：文章阅读量、用户点赞数、网站访问量</li><li><strong>分布式锁</strong>：通过<code>SETNX</code>实现简单的互斥锁</li><li><strong>会话存储</strong>：存储用户Session数据</li></ol><p><strong>性能提示</strong>：对于多个相关的键值对，使用<code>MSET</code>/<code>MGET</code>比多次<code>SET</code>/<code>GET</code>更高效，因为它减少了网络往返次数。</p><hr /><h2 id="%E4%BA%8C%E3%80%81hash%EF%BC%88%E5%93%88%E5%B8%8C%E8%A1%A8%EF%BC%89%EF%BC%9A%E5%AD%98%E5%82%A8%E5%AF%B9%E8%B1%A1%E7%9A%84%E6%9C%80%E4%BD%B3%E9%80%89%E6%8B%A9" tabindex="-1">二、Hash（哈希表）：存储对象的最佳选择</h2><p>如果你需要存储一个对象（如用户信息、商品信息），Hash是你的最佳选择。它类似于编程语言中的字典或Map，适合存储字段-值对的集合。</p><h3 id="%E4%B8%BA%E4%BB%80%E4%B9%88%E7%94%A8hash%E8%80%8C%E4%B8%8D%E7%94%A8%E5%A4%9A%E4%B8%AAstring%EF%BC%9F" tabindex="-1">为什么用Hash而不用多个String？</h3><p>假设我们要存储用户信息，有两种方案：</p><ul><li><p><strong>方案一（多个String）</strong>：</p><pre><code class="language-bash">SET user:1001:name &quot;Bob&quot;SET user:1001:age 30SET user:1001:email &quot;bob@example.com&quot;</code></pre></li><li><p><strong>方案二（一个Hash）</strong>：</p><pre><code class="language-bash">HSET user:1001 name &quot;Bob&quot; age 30 email &quot;bob@example.com&quot;</code></pre></li></ul><p><strong>Hash的优势</strong>：</p><ul><li><strong>内存效率更高</strong>：Redis对Hash有特殊优化，特别是字段较少时</li><li><strong>原子性操作</strong>：可以一次性获取或修改整个对象</li><li><strong>更少的Key</strong>：避免键空间膨胀，管理更方便</li></ul><h3 id="%E6%A0%B8%E5%BF%83%E5%91%BD%E4%BB%A4%E8%AF%A6%E8%A7%A3" tabindex="-1">核心命令详解</h3><pre><code class="language-bash"># 1. 
设置和获取字段127.0.0.1:6379&gt; HSET product:1001 name &quot;iPhone 15&quot; price 5999 stock 100(integer) 3  # 返回设置的字段数量127.0.0.1:6379&gt; HGET product:1001 name&quot;iPhone 15&quot;127.0.0.1:6379&gt; HGETALL product:1001  # 获取所有字段和值1) &quot;name&quot;2) &quot;iPhone 15&quot;3) &quot;price&quot;4) &quot;5999&quot;5) &quot;stock&quot;6) &quot;100&quot;# 2. 批量操作127.0.0.1:6379&gt; HMGET product:1001 name price  # 获取多个字段1) &quot;iPhone 15&quot;2) &quot;5999&quot;# 3. 数字运算127.0.0.1:6379&gt; HINCRBY product:1001 stock -1  # 库存减1（售出一件）(integer) 99# 4. 检查字段127.0.0.1:6379&gt; HEXISTS product:1001 name(integer) 1127.0.0.1:6379&gt; HKEYS product:1001  # 获取所有字段名1) &quot;name&quot;2) &quot;price&quot;3) &quot;stock&quot;</code></pre><h3 id="%E5%AE%9E%E6%88%98%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF-1" tabindex="-1">实战应用场景</h3><ol><li><strong>用户信息存储</strong>：将用户的所有属性存储在一个Hash中</li><li><strong>商品信息</strong>：商品的名称、价格、库存等信息</li><li><strong>配置信息</strong>：系统的各种配置参数</li><li><strong>购物车</strong>：用户ID作为Key，商品ID和数量作为字段-值对</li></ol><hr /><h2 id="%E4%B8%89%E3%80%81list%EF%BC%88%E5%88%97%E8%A1%A8%EF%BC%89%EF%BC%9A%E5%AE%9E%E7%8E%B0%E7%AE%80%E5%8D%95%E7%9A%84%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97" tabindex="-1">三、List（列表）：实现简单的消息队列</h2><p>List是一个按插入顺序排序的字符串元素集合，你可以在列表的头部（左边）或尾部（右边）添加元素。<strong>Redis的List底层实现是双向链表</strong>，这意味着在头部和尾部添加元素的速度极快，但通过索引访问中间元素相对较慢。</p><h3 id="%E6%A0%B8%E5%BF%83%E5%91%BD%E4%BB%A4%E8%AF%A6%E8%A7%A3-1" tabindex="-1">核心命令详解</h3><pre><code class="language-bash"># 1. 从两端添加元素127.0.0.1:6379&gt; LPUSH tasks &quot;send_email&quot;      # 从左边添加(integer) 1127.0.0.1:6379&gt; LPUSH tasks &quot;process_image&quot;(integer) 2127.0.0.1:6379&gt; RPUSH tasks &quot;generate_report&quot; # 从右边添加(integer) 3# 此时列表：[&quot;process_image&quot;, &quot;send_email&quot;, &quot;generate_report&quot;]#               头(左)   &lt;---------&gt;   尾(右)# 2. 从两端弹出元素127.0.0.1:6379&gt; LPOP tasks  # 从左边弹出&quot;process_image&quot;127.0.0.1:6379&gt; RPOP tasks  # 从右边弹出&quot;generate_report&quot;# 3. 
查看列表范围（不会弹出元素）127.0.0.1:6379&gt; LRANGE tasks 0 -1  # 查看所有元素，0表示开始，-1表示末尾1) &quot;send_email&quot;# 4. 阻塞操作 - 消息队列的核心# 从一个空列表中阻塞地等待元素，最多等待10秒127.0.0.1:6379&gt; BLPOP message_queue 10(nil)  # 10秒内没有元素，返回nil# 在另一个客户端执行：LPUSH message_queue &quot;new_message&quot;# 此时BLPOP会立即返回：&quot;message_queue&quot; &quot;new_message&quot;</code></pre><h3 id="%E5%AE%9E%E6%88%98%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF-2" tabindex="-1">实战应用场景</h3><ol><li><strong>消息队列</strong>：使用<code>LPUSH</code>添加任务，<code>BRPOP</code>阻塞获取任务</li><li><strong>最新消息列表</strong>：使用<code>LPUSH</code>添加新消息，<code>LRANGE 0 9</code>获取最新的10条</li><li><strong>历史记录</strong>：用户浏览历史、搜索历史</li><li><strong>文章评论列表</strong>：文章的评论按时间顺序排列</li></ol><p><strong>重要特性</strong>：List的阻塞操作(<code>BLPOP</code>, <code>BRPOP</code>)使其成为实现简单消息队列的理想选择，消费者可以在队列为空时等待，而不需要轮询。</p><hr /><h2 id="%E5%9B%9B%E3%80%81set%EF%BC%88%E9%9B%86%E5%90%88%EF%BC%89%EF%BC%9A%E6%97%A0%E5%BA%8F%E4%B8%8E%E5%94%AF%E4%B8%80%E6%80%A7%E7%9A%84%E5%8A%9B%E9%87%8F" tabindex="-1">四、Set（集合）：无序与唯一性的力量</h2><p>Set是String类型的无序集合，它最大的特点是：<strong>元素唯一且无序</strong>。底层通过哈希表实现，添加、删除、查找的时间复杂度都是O(1)。</p><h3 id="%E6%A0%B8%E5%BF%83%E5%91%BD%E4%BB%A4%E4%B8%8E%E9%9B%86%E5%90%88%E8%BF%90%E7%AE%97" tabindex="-1">核心命令与集合运算</h3><pre><code class="language-bash"># 1. 基本操作127.0.0.1:6379&gt; SADD tags &quot;java&quot; &quot;python&quot; &quot;redis&quot; &quot;java&quot;(integer) 3  # &quot;java&quot;重复，只添加了3个元素127.0.0.1:6379&gt; SMEMBERS tags  # 获取所有元素（顺序不确定）1) &quot;redis&quot;2) &quot;python&quot;3) &quot;java&quot;127.0.0.1:6379&gt; SISMEMBER tags &quot;python&quot;  # 检查元素是否存在(integer) 1# 2. 
集合运算 - Set的精华所在127.0.0.1:6379&gt; SADD user:1001:follows &quot;user:1002&quot; &quot;user:1003&quot; &quot;user:1004&quot;(integer) 3127.0.0.1:6379&gt; SADD user:1002:follows &quot;user:1003&quot; &quot;user:1005&quot;(integer) 2# 交集 - 共同关注127.0.0.1:6379&gt; SINTER user:1001:follows user:1002:follows1) &quot;user:1003&quot;# 并集 - 所有的关注127.0.0.1:6379&gt; SUNION user:1001:follows user:1002:follows1) &quot;user:1002&quot;2) &quot;user:1003&quot;3) &quot;user:1004&quot;4) &quot;user:1005&quot;# 差集 - A有但B没有的127.0.0.1:6379&gt; SDIFF user:1001:follows user:1002:follows1) &quot;user:1002&quot;2) &quot;user:1004&quot;# 3. 随机元素 - 抽奖功能127.0.0.1:6379&gt; SADD lottery_users &quot;user1&quot; &quot;user2&quot; &quot;user3&quot; &quot;user4&quot; &quot;user5&quot;(integer) 5127.0.0.1:6379&gt; SRANDMEMBER lottery_users 2  # 随机返回2个元素，不删除1) &quot;user3&quot;2) &quot;user5&quot;127.0.0.1:6379&gt; SPOP lottery_users 1         # 随机弹出1个元素并删除1) &quot;user2&quot;</code></pre><h3 id="%E5%AE%9E%E6%88%98%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF-3" tabindex="-1">实战应用场景</h3><ol><li><strong>标签系统</strong>：给文章、用户打标签</li><li><strong>社交关系</strong>：共同好友、共同关注</li><li><strong>数据去重</strong>：防止重复提交、重复处理</li><li><strong>随机抽奖</strong>：从参与用户中随机抽取中奖者</li><li><strong>黑白名单</strong>：IP白名单、用户黑名单</li></ol><hr /><h2 id="%E4%BA%94%E3%80%81sorted-set%EF%BC%88%E6%9C%89%E5%BA%8F%E9%9B%86%E5%90%88%EF%BC%89%EF%BC%9A%E6%8E%92%E8%A1%8C%E6%A6%9C%E7%9A%84%E7%81%B5%E9%AD%82" tabindex="-1">五、Sorted Set（有序集合）：排行榜的灵魂</h2><p>Sorted Set是Set的增强版，它在保证元素唯一性的基础上，为每个元素关联了一个<strong>分数（Score）</strong>，元素按照分数进行排序。这是Redis中最复杂但也最强大的数据结构之一。</p><h3 id="%E5%BA%95%E5%B1%82%E5%AE%9E%E7%8E%B0%EF%BC%9A%E8%B7%B3%E8%A1%A8%EF%BC%88skip-list%EF%BC%89" tabindex="-1">底层实现：跳表（Skip List）</h3><p>Sorted Set使用跳表（一种类似链表但有多级索引的数据结构）实现，可以在O(logN)时间内完成插入、删除和按分数范围查找，兼具了链表和二分查找的优点。</p><h3 id="%E6%A0%B8%E5%BF%83%E5%91%BD%E4%BB%A4%E8%AF%A6%E8%A7%A3-2" tabindex="-1">核心命令详解</h3><pre><code class="language-bash"># 1. 
添加元素（带分数）127.0.0.1:6379&gt; ZADD leaderboard 2500 &quot;Alice&quot; 1800 &quot;Bob&quot; 3200 &quot;Charlie&quot; 1500 &quot;David&quot;(integer) 4# 2. 按分数范围查询（升序）127.0.0.1:6379&gt; ZRANGE leaderboard 0 -1 WITHSCORES  # 获取所有元素（分数从低到高）1) &quot;David&quot;2) &quot;1500&quot;3) &quot;Bob&quot;4) &quot;1800&quot;5) &quot;Alice&quot;6) &quot;2500&quot;7) &quot;Charlie&quot;8) &quot;3200&quot;# 3. 按分数范围查询（降序 - 排行榜常用）127.0.0.1:6379&gt; ZREVRANGE leaderboard 0 2 WITHSCORES  # 获取前三名1) &quot;Charlie&quot;2) &quot;3200&quot;3) &quot;Alice&quot;4) &quot;2500&quot;5) &quot;Bob&quot;6) &quot;1800&quot;# 4. 按分数范围查询127.0.0.1:6379&gt; ZRANGEBYSCORE leaderboard 2000 3000 WITHSCORES  # 分数在2000-3000之间的玩家1) &quot;Alice&quot;2) &quot;2500&quot;# 5. 获取排名和分数127.0.0.1:6379&gt; ZRANK leaderboard &quot;Alice&quot;    # 获取升序排名（从0开始）(integer) 2127.0.0.1:6379&gt; ZREVRANK leaderboard &quot;Alice&quot; # 获取降序排名（排行榜名次）(integer) 1127.0.0.1:6379&gt; ZSCORE leaderboard &quot;Alice&quot;   # 获取分数&quot;2500&quot;# 6. 分数操作127.0.0.1:6379&gt; ZINCRBY leaderboard 500 &quot;Alice&quot;  # Alice增加500分&quot;3000&quot;</code></pre><h3 id="%E5%AE%9E%E6%88%98%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF-4" tabindex="-1">实战应用场景</h3><ol><li><strong>排行榜</strong>：游戏积分榜、销量排行榜、热搜榜</li><li><strong>带权重的队列</strong>：优先级任务调度</li><li><strong>时间轴</strong>：按时间排序的消息列表（时间戳作为Score）</li><li><strong>范围查询</strong>：查找分数/价格在某个区间的数据</li></ol><p><strong>性能提示</strong>：Sorted Set的范围查询（<code>ZRANGEBYSCORE</code>）非常高效，特别适合需要按范围检索数据的场景。</p><hr /><h2 id="%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E9%80%89%E6%8B%A9%E6%8C%87%E5%8D%97" tabindex="-1">数据结构选择指南</h2><p>面对具体业务场景时，如何选择合适的数据结构？这里有一个快速参考：</p><table><thead><tr><th>需求场景</th><th>推荐数据结构</th><th>理由</th></tr></thead><tbody><tr><td>缓存简单数据</td><td>String</td><td>简单直接，性能最佳</td></tr><tr><td>存储对象</td><td>Hash</td><td>内存效率高，支持部分更新</td></tr><tr><td>消息队列</td><td>List</td><td>支持阻塞操作，顺序保证</td></tr><tr><td>最新N条记录</td><td>List</td><td>LPUSH + LTRIM 
实现固定长度列表</td></tr><tr><td>去重、标签、共同好友</td><td>Set</td><td>天然去重，支持集合运算</td></tr><tr><td>排行榜、范围查询</td><td>Sorted Set</td><td>按分数排序，范围查询高效</td></tr><tr><td>时间序列数据</td><td>Sorted Set</td><td>时间戳作为Score，天然排序</td></tr></tbody></table><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>通过本篇的学习，我们已经掌握了Redis五大核心数据结构的特性和应用：</p><ul><li><strong>String</strong>：简单但强大，支持数字操作</li><li><strong>Hash</strong>：存储对象的理想选择，内存效率高</li><li><strong>List</strong>：顺序数据结构，适合消息队列和时间线</li><li><strong>Set</strong>：无序唯一集合，强大的集合运算能力</li><li><strong>Sorted Set</strong>：有序唯一集合，排行榜和范围查询的利器</li></ul><p><strong>Redis的哲学是：将复杂的数据操作下推到存储层</strong>，而不是在应用层处理。理解每种数据结构的特性和适用场景，能够让你在设计系统时做出更优雅、更高效的决策。</p><p>现在，当你需要实现计数器时，不会选择在应用层读取-计算-保存，而是直接使用<code>INCR</code>命令；当你需要排行榜时，不会在数据库中排序，而是使用Sorted Set。这就是Redis数据结构的威力所在！</p><hr /><p><strong>动手练习</strong>：尝试用你学到的数据结构实现以下功能：</p><ol><li>使用String实现一个文章阅读量计数器</li><li>使用Hash存储你的个人简历信息</li><li>使用List实现一个简单的待办事项列表</li><li>使用Set找出你和你朋友共同喜欢的电影</li><li>使用Sorted Set创建一个游戏分数排行榜</li></ol><p>欢迎在评论区分享你的实现代码和心得体会！</p>]]>
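附：如果手边暂时没有 Redis 环境，可以用下面这段纯 Python 草图模拟 Sorted Set 排行榜的核心语义（ZADD、ZINCRBY、ZREVRANGE），体会"成员唯一、按分数排序"这一模型。注意这只是示意：真实 Redis 底层是跳表，复杂度为 O(logN)，而这里每次查询都重新排序；Leaderboard 这个类名也是本文虚构的，并非 redis-py 的 API。

```python
# 纯 Python 模拟 Sorted Set（有序集合）的核心语义，仅作教学示意
class Leaderboard:
    def __init__(self):
        self.scores = {}  # member 到 score 的映射，成员天然唯一

    def zadd(self, member, score):
        # 对应 ZADD：重复添加同一成员只会更新分数
        self.scores[member] = float(score)

    def zincrby(self, member, delta):
        # 对应 ZINCRBY：给成员加分，成员不存在时从 0 开始
        self.scores[member] = self.scores.get(member, 0.0) + delta
        return self.scores[member]

    def zrevrange(self, start, stop):
        # 对应 ZREVRANGE：按分数从高到低排序后取闭区间 [start, stop]
        ordered = sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)
        return ordered[start:stop + 1]


lb = Leaderboard()
for name, score in [("Alice", 2500), ("Bob", 1800), ("Charlie", 3200), ("David", 1500)]:
    lb.zadd(name, score)
lb.zincrby("Alice", 500)       # Alice 加 500 分，变为 3000
print(lb.zrevrange(0, 2))      # 前三名：Charlie、Alice、Bob
```

对照上文的 redis-cli 示例运行一遍，可以看到同样的排名结果。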
                    </description>
                    <pubDate>Thu, 20 Mar 2025 08:30:07 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Redis入门：Redis核心概念与快速入门]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2942</link>
                    <description>
                            <![CDATA[<h1 id="redis%E6%A0%B8%E5%BF%83%E6%A6%82%E5%BF%B5%E4%B8%8E%E5%BF%AB%E9%80%9F%E5%85%A5%E9%97%A8%EF%BC%9A%E4%B8%BA%E4%BB%80%E4%B9%88%E5%AE%83%E5%A0%AA%E7%A7%B0%22%E7%A8%8B%E5%BA%8F%E5%91%98%E5%BF%85%E4%BF%AE%E8%AF%BE%22%EF%BC%9F" tabindex="-1">Redis核心概念与快速入门：为什么它堪称&quot;程序员必修课&quot;？</h1><blockquote><p>在当今这个数据爆炸、高并发无处不在的时代，你是否曾好奇，像淘宝双十一、微博热搜、微信朋友圈这样的亿级流量应用，是如何在瞬间处理海量请求的？背后的秘密武器之一，就是今天我们要深入探讨的——<strong>Redis</strong>。</p></blockquote><h2 id="%E4%B8%80%E3%80%81%E5%BC%95%E8%A8%80%EF%BC%9A%E4%B8%BA%E4%BB%80%E4%B9%88redis%E6%98%AF%E5%BC%80%E5%8F%91%E8%80%85%E5%BF%85%E5%A4%87%E7%9A%84%E6%8A%80%E8%83%BD%EF%BC%9F" tabindex="-1">一、引言：为什么Redis是开发者必备的技能？</h2><p>想象这样一个场景：<strong>“限量球鞋抢购”</strong>。</p><ul><li>晚上8点整，10万用户同时点击&quot;立即购买&quot;。</li><li>系统需要检查库存、生成订单、扣减库存…</li><li>如果所有请求都直接涌向数据库，数据库很可能在瞬间被击垮，页面卡死，用户体验极差。</li></ul><p><strong>Redis的使命，就是成为那个站在数据库前面的&quot;超级英雄&quot;</strong>。它用内存存储热点数据，能在微秒级别内完成读写，轻松应对这种高并发冲击。无论是缓存、计数器还是分布式锁，Redis都提供了优雅的解决方案。</p><p>因此，无论你是后端开发者、架构师还是运维工程师，深入理解Redis都已成为职业生涯中不可或缺的一环。它不仅仅是缓存，更是一个高性能的数据结构服务器。</p><h2 id="%E4%BA%8C%E3%80%81redis%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F%E5%AE%83%E8%A7%A3%E5%86%B3%E4%BA%86%E4%BB%80%E4%B9%88%E6%A0%B8%E5%BF%83%E7%97%9B%E7%82%B9%EF%BC%9F" tabindex="-1">二、Redis是什么？它解决了什么核心痛点？</h2><h3 id="1.-%E5%AE%98%E6%96%B9%E5%AE%9A%E4%B9%89" tabindex="-1">1. 官方定义</h3><p>Redis（<strong>Re</strong>mote <strong>Di</strong>ctionary <strong>S</strong>erver），即远程字典服务。顾名思义，你可以把它理解成一个通过网络提供访问的、超级快的&quot;大字典&quot;。</p><h3 id="2.-%E6%A0%B8%E5%BF%83%E7%89%B9%E6%80%A7" tabindex="-1">2. 核心特性</h3><ul><li><strong>基于内存</strong>：数据主要存储在内存（RAM）中，读写速度极快（读可达10万+/秒，写可达8万+/秒）。</li><li><strong>键值存储</strong>：使用简单的Key-Value模式存储数据。</li><li><strong>丰富的数据结构</strong>：Value不仅仅是字符串，还支持列表、哈希、集合等复杂类型。</li><li><strong>持久化</strong>：可以将内存中的数据异步保存到磁盘，防止数据丢失。</li><li><strong>单线程架构</strong>：核心操作采用单线程，避免了多线程的竞争和锁问题，简化设计且保证原子性。</li></ul><h3 id="3.-%E8%A7%A3%E5%86%B3%E7%9A%84%E6%A0%B8%E5%BF%83%E7%97%9B%E7%82%B9" tabindex="-1">3. 
解决的核心痛点</h3><ul><li><strong>性能瓶颈</strong>：缓解后端数据库（如MySQL）的读/写压力。</li><li><strong>高并发</strong>：轻松应对瞬时大量并发请求。</li><li><strong>复杂操作</strong>：提供原子性的自增、集合运算等，简化业务代码。</li></ul><h2 id="%E4%B8%89%E3%80%81redis-vs.-mysql%EF%BC%9A%E5%AE%9A%E4%BD%8D%E4%B8%8E%E5%B7%AE%E5%BC%82" tabindex="-1">三、Redis vs. MySQL：定位与差异</h2><p>很多初学者会困惑：“既然有了MySQL，为什么还要用Redis？” 这是一个非常好的问题。它们不是替代关系，而是<strong>互补关系</strong>。</p><table><thead><tr><th style="text-align:left">特性</th><th style="text-align:left">Redis</th><th style="text-align:left">MySQL</th></tr></thead><tbody><tr><td style="text-align:left"><strong>数据存储</strong></td><td style="text-align:left">主要存储在<strong>内存</strong></td><td style="text-align:left">存储在<strong>硬盘</strong></td></tr><tr><td style="text-align:left"><strong>数据结构</strong></td><td style="text-align:left">支持String, Hash, List, Set等</td><td style="text-align:left">主要是表结构，行列固定</td></tr><tr><td style="text-align:left"><strong>性能</strong></td><td style="text-align:left"><strong>极高</strong>，微秒级响应</td><td style="text-align:left">相对较慢，毫秒级响应</td></tr><tr><td style="text-align:left"><strong>使用场景</strong></td><td style="text-align:left"><strong>缓存</strong>、会话、排行榜、消息队列等</td><td style="text-align:left"><strong>持久化存储</strong>、复杂关系数据、事务</td></tr><tr><td style="text-align:left"><strong>数据容量</strong></td><td style="text-align:left">受物理内存限制</td><td style="text-align:left">受硬盘空间限制，远大于内存</td></tr></tbody></table><p><strong>一个形象的比喻：</strong></p><ul><li><strong>MySQL</strong> 像是家里的<strong>保险柜</strong>，安全、可靠，用于存放最重要的财物（核心数据）。</li><li><strong>Redis</strong> 像是你的<strong>书桌桌面</strong>，存取极其方便，用来放你当前正在使用的书籍和文具（热点数据）。</li></ul><h2 id="%E5%9B%9B%E3%80%81redis%E7%9A%84%E5%85%B8%E5%9E%8B%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF%E5%85%A8%E6%99%AF%E5%9B%BE" 
tabindex="-1">四、Redis的典型应用场景全景图</h2><p>Redis的应用远不止缓存，它几乎无处不在：</p><ol><li><strong>缓存</strong>：<strong>最核心的用途</strong>。缓存热点数据（如用户信息、商品详情），减轻数据库压力。</li><li><strong>会话存储（Session）</strong>：在分布式系统中，将用户登录状态集中存储在Redis中，实现多台应用服务器的会话共享。</li><li><strong>排行榜</strong>：利用有序集合（Sorted Set）轻松实现实时更新的积分榜、热搜榜。</li><li><strong>消息队列</strong>：利用列表（List）的阻塞操作实现简单的异步任务队列。</li><li><strong>计数器/速率限制</strong>：利用字符串（String）的<code>INCR</code>命令实现文章阅读量、API调用频率限制。</li><li><strong>分布式锁</strong>：在分布式系统中，控制多个服务实例对同一资源的访问。</li><li><strong>社交功能</strong>：利用集合（Set）实现共同关注、好友推荐。</li></ol><h2 id="%E4%BA%94%E3%80%81%E6%89%8B%E6%8A%8A%E6%89%8B%E6%90%AD%E5%BB%BAredis%E7%8E%AF%E5%A2%83%EF%BC%88docker%E7%AF%87%EF%BC%89" tabindex="-1">五、手把手搭建Redis环境（Docker篇）</h2><p>为了快速开始，我们使用<strong>Docker</strong>来安装Redis，这是最便捷、跨平台的方式。</p><h3 id="%E6%AD%A5%E9%AA%A41%EF%BC%9A%E5%AE%89%E8%A3%85docker" tabindex="-1">步骤1：安装Docker</h3><p>请访问 <a href="https://www.docker.com/products/docker-desktop" target="_blank">Docker官网</a> 下载并安装对应你操作系统的Docker Desktop。</p><h3 id="%E6%AD%A5%E9%AA%A42%EF%BC%9A%E6%8B%89%E5%8F%96%E5%B9%B6%E8%BF%90%E8%A1%8Credis%E9%95%9C%E5%83%8F" tabindex="-1">步骤2：拉取并运行Redis镜像</h3><p>打开你的终端（Terminal / Command Prompt / PowerShell），执行以下命令：</p><pre><code class="language-bash"># 拉取最新的Redis官方镜像docker pull redis:latest# 运行Redis容器，并将容器的6379端口映射到本机的6379端口docker run --name my-redis -p 6379:6379 -d redis</code></pre><ul><li><code>--name my-redis</code>：给你的容器起一个名字，方便管理。</li><li><code>-p 6379:6379</code>：端口映射（主机端口:容器端口）。Redis默认服务端口是6379。</li><li><code>-d</code>：在后台运行容器。</li></ul><p>执行 <code>docker ps</code> 命令，如果看到名为 <code>my-redis</code> 的容器正在运行，说明安装成功！</p><h2 id="%E5%85%AD%E3%80%81redis-server-%E4%B8%8E-redis-cli%EF%BC%9A%E4%BD%A0%E7%9A%84%E7%AC%AC%E4%B8%80%E4%B8%AAredis%E6%9C%8D%E5%8A%A1%E4%B8%8E%E5%AE%A2%E6%88%B7%E7%AB%AF" tabindex="-1">六、<code>redis-server</code> 与 
<code>redis-cli</code>：你的第一个Redis服务与客户端</h2><p>现在，Redis服务已经在你的机器上运行起来了。我们如何与它交互呢？</p><ul><li><strong><code>redis-server</code></strong>：这是Redis的<strong>服务器</strong>。它负责监听端口，存储和管理数据。我们刚才通过Docker已经启动了它。</li><li><strong><code>redis-cli</code></strong>：这是Redis的<strong>命令行客户端</strong>。我们通过它来向<code>redis-server</code>发送命令。</li></ul><p>让我们进入容器内部，使用<code>redis-cli</code>：</p><pre><code class="language-bash"># 进入正在运行的my-redis容器docker exec -it my-redis redis-cli</code></pre><p>看到提示符变成 <code>127.0.0.1:6379&gt;</code> 了吗？恭喜，你已经成功连接到了Redis服务器！现在可以开始&quot;玩&quot;数据了。</p><h2 id="%E4%B8%83%E3%80%81%E4%BD%BF%E7%94%A8-redis-cli-%E8%BF%9B%E8%A1%8C%E5%9F%BA%E6%9C%AC%E6%93%8D%E4%BD%9C%E5%92%8C%E6%B5%8B%E8%AF%95" tabindex="-1">七、使用 <code>redis-cli</code> 进行基本操作和测试</h2><p>让我们像学习编程语言的&quot;Hello, World!&quot;一样，完成Redis的第一次对话。</p><pre><code class="language-bash"># 1. 设置一个键值对：key是&quot;greeting&quot;，value是&quot;Hello, Redis!&quot;127.0.0.1:6379&gt; SET greeting &quot;Hello, Redis!&quot;OK# 2. 获取key为&quot;greeting&quot;的值127.0.0.1:6379&gt; GET greeting&quot;Hello, Redis!&quot;# 3. 尝试获取一个不存在的key127.0.0.1:6379&gt; GET non_existing_key(nil)# 4. 
让我们试一下计数器功能（文章阅读量）127.0.0.1:6379&gt; INCR article:readcount:1001(integer) 1127.0.0.1:6379&gt; INCR article:readcount:1001(integer) 2127.0.0.1:6379&gt; GET article:readcount:1001&quot;2&quot;</code></pre><p>是不是非常简单直观？你已经掌握了Redis最基础的两个命令：<code>SET</code>和<code>GET</code>，以及一个高级命令<code>INCR</code>。</p><h2 id="%E5%85%AB%E3%80%81%E7%90%86%E8%A7%A3redis%E7%9A%84%22%E7%81%B5%E9%AD%82%22%EF%BC%9A%E6%A0%B8%E5%BF%83%E6%95%B0%E6%8D%AE%E6%A8%A1%E5%9E%8B" tabindex="-1">八、理解Redis的&quot;灵魂&quot;：核心数据模型</h2><p>Redis的整个世界都围绕着 <strong>Key-Value</strong> 模型。</p><ul><li><strong>Key（键）</strong>：一个字符串，用于唯一标识一条数据。好的Key设计是使用Redis的最佳实践之一。例如：<code>user:1001:profile</code>, <code>article:2024:hotlist</code>。</li><li><strong>Value（值）</strong>：可以是我们在第二章提到的多种数据结构，如字符串、哈希、列表等。</li></ul><p><strong>重要概念：Redis万物皆字节。</strong> 无论你存入的是什么类型的数据，Redis最终都是以二进制字节流的形式安全地存储它们。</p><h2 id="%E4%B9%9D%E3%80%81%E9%80%9A%E7%94%A8%E5%91%BD%E4%BB%A4%EF%BC%9A%E6%93%8D%E4%BD%9Credis%E7%9A%84%22%E7%91%9E%E5%A3%AB%E5%86%9B%E5%88%80%22" tabindex="-1">九、通用命令：操作Redis的&quot;瑞士军刀&quot;</h2><p>在深入学习各种数据结构之前，有一些命令是通用的，适用于所有的Key。</p><pre><code class="language-bash"># 1. KEYS pattern：查找所有符合给定模式pattern的key（生产环境慎用，可能阻塞服务）127.0.0.1:6379&gt; KEYS article*1) &quot;article:readcount:1001&quot;127.0.0.1:6379&gt; KEYS *1) &quot;greeting&quot;2) &quot;article:readcount:1001&quot;# 2. EXISTS key：检查某个key是否存在127.0.0.1:6379&gt; EXISTS greeting(integer) 1  # 存在返回1127.0.0.1:6379&gt; EXISTS no_this_key(integer) 0  # 不存在返回0# 3. DEL key [key ...]：删除一个或多个key127.0.0.1:6379&gt; DEL greeting(integer) 1  # 成功删除1个# 4. EXPIRE key seconds：为key设置过期时间（秒），超时后自动删除127.0.0.1:6379&gt; SET temporary_data &quot;I will disappear in 10 seconds&quot;OK127.0.0.1:6379&gt; EXPIRE temporary_data 10(integer) 1# 5. 
TTL key：查看key剩余的生存时间（Time To Live）127.0.0.1:6379&gt; TTL temporary_data(integer) 6  # 还剩6秒127.0.0.1:6379&gt; TTL temporary_data(integer) -2 # -2表示key已经不存在了# 也可以在SET的时候直接设置过期时间127.0.0.1:6379&gt; SET session_id &quot;abc123&quot; EX 3600 # EX后跟秒数OK</code></pre><p><strong>注意</strong>：<code>KEYS *</code>命令在Key数量巨大时可能会阻塞服务器，在生产环境中应使用<code>SCAN</code>命令代替。</p><h2 id="%E6%80%BB%E7%BB%93" tabindex="-1">总结</h2><p>至此，你已经踏入了Redis的神奇世界。我们了解了：</p><ul><li><strong>为什么需要Redis</strong>：解决高并发、高性能场景下的数据读写瓶颈。</li><li><strong>Redis是什么</strong>：一个基于内存的、支持多种数据结构键值对存储系统。</li><li><strong>如何安装和运行Redis</strong>：通过Docker，我们快速搭建了实验环境。</li><li><strong>最基本的操作</strong>：使用<code>redis-cli</code>进行<code>SET</code>, <code>GET</code>, <code>INCR</code>等操作。</li><li><strong>通用命令</strong>：<code>KEYS</code>, <code>EXISTS</code>, <code>DEL</code>, <code>EXPIRE</code>, <code>TTL</code>。</li></ul><p>这仅仅是Redis强大能力的冰山一角。在下一篇教程中，我们将深入探索Redis的五大核心数据结构（String, Hash, List, Set, Sorted Set），解锁Redis真正的威力。</p><p><strong>思考题</strong>：根据你今天学到的知识，想想你当前参与的项目中，有哪些场景可以引入Redis来提升性能或简化逻辑？欢迎在评论区留言讨论！</p><p>希望这篇详细的入门教程能帮助你建立起对Redis的清晰认知。如果有任何疑问，欢迎随时交流。</p>]]>
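附：上文提到 Redis 核心操作采用单线程，因此 INCR 这类"读-改-写"命令天然是原子的。下面用纯 Python 加锁来模拟这种不可分割性（Counter 类为本文虚构的示意代码，并非任何 Redis 客户端的 API）：在应用层自己做"读取-加一-写回"时如果不加锁，多线程并发就会丢失更新，而 Redis 把这一步下推到了服务端。

```python
# 用锁模拟 Redis INCR 的原子自增语义（示意性草图）
import threading

class Counter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def incr(self, delta=1):
        # 锁保证"读-改-写"三步不可分割，效果上等价于单线程执行
        with self._lock:
            self._value += delta
            return self._value


c = Counter()
# 8 个线程各自增 1000 次，有锁保护时结果必然是 8000
threads = [threading.Thread(target=lambda: [c.incr() for _ in range(1000)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.incr(0))  # 8000
```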
                    </description>
                    <pubDate>Thu, 13 Mar 2025 08:26:59 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Lucene.Net 分布式索引实现方案]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2909</link>
                    <description>
                            <![CDATA[<p><a href="http://Lucene.Net" target="_blank">Lucene.Net</a> 本身是一个单机版全文搜索引擎库，<strong>不直接支持分布式索引</strong>，但通过合理的架构设计，可以实现分布式索引与搜索。以下是常见的分布式索引实现方案及其优缺点：</p><h3 id="%E4%B8%BB%E4%BB%8E%E5%A4%8D%E5%88%B6" tabindex="-1"><strong>主从复制</strong></h3><h4 id="%E5%8E%9F%E7%90%86" tabindex="-1"><strong>原理</strong></h4><ul><li><strong>主节点（Master）</strong>：负责写入索引，定期将索引快照同步到从节点。</li><li><strong>从节点（Slave）</strong>：只读副本，处理查询请求，提升查询吞吐量和可用性。</li></ul><h4 id="%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4" tabindex="-1"><strong>实现步骤</strong></h4><ol><li><strong>主节点写入</strong>：所有增删改操作由主节点处理。</li><li><strong>索引同步</strong>：<ul><li>方案1：主节点定期将索引文件复制到从节点（如通过 Rsync）。</li><li>方案2：通过消息队列（如 RabbitMQ）广播增量变更，从节点实时更新。</li></ul></li><li><strong>查询负载均衡</strong>：查询请求通过负载均衡器分发到多个从节点。</li></ol><h4 id="%E4%BC%98%E7%82%B9" tabindex="-1"><strong>优点</strong></h4><ul><li>提升读取性能和可用性（从节点可故障转移）。</li><li>实现相对简单，无需修改 <a href="http://Lucene.Net" target="_blank">Lucene.Net</a> 核心逻辑。</li></ul><h4 id="%E7%BC%BA%E7%82%B9" tabindex="-1"><strong>缺点</strong></h4><ul><li>写入能力受限于单主节点，可能成为瓶颈。</li><li>同步延迟导致短暂数据不一致。</li></ul><h4 id="%E9%80%82%E7%94%A8%E5%9C%BA%E6%99%AF" tabindex="-1"><strong>适用场景</strong></h4><ul><li>读多写少的场景（如新闻网站、知识库）。</li></ul><h4 id="%E6%96%B9%E6%A1%882-%E5%AE%9E%E7%8E%B0" tabindex="-1">方案2 实现</h4><h5 id="1.-rabbitmq-fanout-%E8%B7%AF%E7%94%B1%E6%A8%A1%E5%BC%8F%E4%BB%8B%E7%BB%8D" tabindex="-1">1. RabbitMQ Fanout 路由模式介绍</h5><h6 id="1.-%E6%A0%B8%E5%BF%83%E5%8E%9F%E7%90%86" tabindex="-1">1. 核心原理</h6><ul><li><strong>交换器类型</strong>：<code>fanout</code> 类型的交换器。</li><li><strong>行为规则</strong>：<br />生产者发送到 <code>fanout</code> 交换器的消息，会被<strong>复制并分发到所有绑定到该交换器的队列</strong>，无论队列绑定时是否指定了路由键。</li><li><strong>关键特点</strong>：<ul><li><strong>广播机制</strong>：消息无差别发送到所有队列，类似“发布-订阅”模式。</li><li><strong>路由键无效</strong>：消息的路由键（如 <code>user.created</code>）会被忽略，仅交换器类型决定分发行为。</li></ul></li></ul><hr /><h6 id="2.-%E9%80%82%E7%94%A8%E5%9C%BA%E6%99%AF" tabindex="-1">2. 
适用场景</h6><ul><li><p><strong>广播通知</strong>：</p><ul><li>用户注册成功后，同时发送邮件、短信、站内信（多个消费者独立处理）。</li><li>系统配置更新时，通知所有微服务刷新本地缓存。</li></ul></li><li><p><strong>日志收集</strong>：<br />将日志消息广播到多个处理队列，分别用于实时监控、持久化存储、错误告警等。</p></li><li><p><strong>事件驱动架构</strong>：<br />解耦事件发布者与订阅者，新增订阅者只需绑定队列，无需修改生产者代码。</p></li></ul><hr /><h6 id="3.-%E5%AF%B9%E6%AF%94%E5%85%B6%E4%BB%96%E8%B7%AF%E7%94%B1%E6%A8%A1%E5%BC%8F" tabindex="-1">3. 对比其他路由模式</h6><table><thead><tr><th><strong>模式</strong></th><th><strong>行为</strong></th><th><strong>典型场景</strong></th></tr></thead><tbody><tr><td><strong>Fanout</strong></td><td>广播到所有绑定队列</td><td>日志分发、多通知渠道</td></tr><tr><td><strong>Direct</strong></td><td>按精确匹配路由键发送到指定队列</td><td>订单状态更新、任务分类处理</td></tr><tr><td><strong>Topic</strong></td><td>按通配符匹配路由键（如 <code>user.*</code>）</td><td>复杂事件路由、分类订阅</td></tr><tr><td><strong>Headers</strong></td><td>根据消息头键值对匹配</td><td>高级路由逻辑（较少使用）</td></tr></tbody></table><hr /><h6 id="4.-%E6%80%BB%E7%BB%93" tabindex="-1">4. 总结</h6><p><strong>Fanout 模式</strong>是 RabbitMQ 中最简单的广播机制，适合需要将<strong>同一消息分发给多个消费者</strong>的场景。其优势在于快速实现解耦和扩展，但需注意无差别广播可能带来的资源浪费。对于需要精细化路由控制的场景，应选择 <code>direct</code> 或 <code>topic</code> 模式。</p><h5 id="2.-%E6%8A%80%E6%9C%AF%E6%A0%88%E7%BB%84%E6%88%90" tabindex="-1">2. 技术栈组成</h5><table><thead><tr><th>组件</th><th>作用</th></tr></thead><tbody><tr><td><a href="http://Lucene.Net" target="_blank">Lucene.Net</a></td><td>单机版全文搜索核心</td></tr><tr><td>RabbitMQ</td><td>消息广播中间件</td></tr><tr><td>MassTransit</td><td>.NET消息总线框架</td></tr><tr><td>Docker</td><td>容器化部署基础</td></tr></tbody></table><h5 id="3.-%E7%A4%BA%E6%84%8F%E5%9B%BE" tabindex="-1">3. 示意图</h5><p><img src="/upload/2025/03/image.png" alt="image" /></p><h5 id="4.-%E5%AE%89%E8%A3%85masstransit.rabbitmq%E5%8C%85" tabindex="-1">4. 安装<code>MassTransit.RabbitMQ</code>包</h5><h5 id="5.-docker%E5%90%AF%E5%8A%A8rabbitmq%E6%9C%8D%E5%8A%A1" tabindex="-1">5. 
docker启动RabbitMq服务</h5><pre><code class="language-bash"># 启动带管理界面的RabbitMQdocker run -d --name rabbitmq \  -p 5672:5672 -p 15672:15672 \  rabbitmq:3-management</code></pre><h5 id="6.-%E7%A4%BA%E4%BE%8B" tabindex="-1">6. 示例</h5><pre><code class="language-csharp">services.AddMassTransit(x =&gt;        {            x.AddConsumer&lt;SyncIndexEventConsumer&gt;();            x.UsingRabbitMq((context, cfg) =&gt;            {                cfg.Host(&quot;127.0.0.1&quot;, $&quot;/&quot;,h =&gt;                {                    h.Username(&quot;guest&quot;);                    h.Password(&quot;guest&quot;);                });                cfg.Publish&lt;SyncIndexEventConsumer&gt;(x =&gt;                {                    x.ExchangeType = &quot;fanout&quot;;                });                // 配置接收端点并绑定到 Fanout 交换机                cfg.ReceiveEndpoint(Guid.NewGuid().ToString(), e =&gt;                {                    e.Bind&lt;SyncIndexEvent&gt;(p =&gt;                    {                        p.ExchangeType = &quot;fanout&quot;;                        p.RoutingKey = &quot;&quot;;                                         });                    e.ConfigureConsumer&lt;SyncIndexEventConsumer&gt;(context);                    e.PrefetchCount = 50; // 控制消费速率                });            });        });</code></pre><blockquote><p>SyncIndexEventConsumer.cs</p></blockquote><pre><code class="language-csharp">public class SyncIndexEventConsumer: IConsumer&lt;SyncIndexEvent&gt;{    public async Task Consume(ConsumeContext&lt;SyncIndexEvent&gt; context)    {        var contextInput = context.Message;        Console.WriteLine(&quot;触发索引同步&quot;);        await Task.CompletedTask;    }}</code></pre><blockquote><p>SyncIndexEvent.cs</p></blockquote><pre><code class="language-csharp">public class SyncIndexEvent{        /// &lt;summary&gt;    /// 表名    /// &lt;/summary&gt;    public string TableName { get; set; }        /// &lt;summary&gt;    /// 实际实体对象    /// &lt;/summary&gt;    
public object Data { get; set; }}</code></pre><blockquote><p>RabbitMQController</p></blockquote><pre><code class="language-csharp">[Route(&quot;api/[controller]/[action]&quot;)][ApiController]public class RabbitMqController : BaseApiController{    private readonly IPublishEndpoint _publishEndpoint;    public RabbitMqController(IPublishEndpoint publishEndpoint)    {        _publishEndpoint = publishEndpoint;    }    /// &lt;summary&gt;    /// 触发全局同步索引    /// &lt;/summary&gt;    [HttpGet]    public async Task SyncIndex()    {        await _publishEndpoint.Publish&lt;SyncIndexEvent&gt;(new SyncIndexEvent(), x =&gt;        {            x.Durable = true; // 持久化存储            x.Mandatory = true; // 强制路由            x.SetPriority(1); // 消息优先级        });    }}</code></pre><h4 id="%E6%96%B9%E6%A1%88%E5%AF%B9%E6%AF%94%E5%88%86%E6%9E%90" tabindex="-1">方案对比分析</h4><h5 id="1.-%E5%90%8C%E6%AD%A5%E6%9C%BA%E5%88%B6%E5%AF%B9%E6%AF%94" tabindex="-1">1. 同步机制对比</h5><table><thead><tr><th>特性</th><th>文件复制方案</th><th>RabbitMQ Fanout方案</th></tr></thead><tbody><tr><td>实时性</td><td>分钟级延迟</td><td>毫秒级延迟</td></tr><tr><td>扩展性</td><td>需手动调整同步脚本</td><td>动态增删从节点</td></tr><tr><td>可靠性</td><td>依赖文件系统完整性</td><td>消息持久化+确认机制</td></tr><tr><td>资源消耗</td><td>高（全量复制）</td><td>低（增量传播）</td></tr></tbody></table><h5 id="2.-%E6%80%A7%E8%83%BD%E5%8E%8B%E6%B5%8B%E6%95%B0%E6%8D%AE" tabindex="-1">2. 性能压测数据</h5><pre><code class="language-ini"># 测试环境：4核8G服务器集群[写入吞吐量]单主节点：1,200 docs/sec从节点扩展：10节点时 8,000 docs/sec[同步延迟]99%请求 &lt; 50ms最大延迟 120ms</code></pre><hr />]]>
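附：为了直观理解上文 RabbitMQ fanout 模式"消息被复制并分发到所有绑定队列、路由键被忽略"的行为，下面用 Python 标准库 queue 写一个最小示意模型。FanoutExchange 类是本文虚构的，不涉及真实的 RabbitMQ 连接；bind 方法为每个消费者生成独立队列，对应示例中 ReceiveEndpoint 使用随机队列名（Guid.NewGuid）的做法。

```python
# 纯 Python 模拟 fanout 交换器的广播语义（示意性草图）
import queue

class FanoutExchange:
    def __init__(self):
        self.queues = []

    def bind(self):
        # 每个从节点绑定一个独立队列，fanout 下每个队列都会收到完整副本
        q = queue.Queue()
        self.queues.append(q)
        return q

    def publish(self, message, routing_key=""):
        # fanout 模式下 routing_key 被忽略，消息复制到所有绑定队列
        for q in self.queues:
            q.put(message)


exchange = FanoutExchange()
replica_a = exchange.bind()   # 模拟从节点 A
replica_b = exchange.bind()   # 模拟从节点 B
exchange.publish({"TableName": "Article", "Data": {"Id": 1}})
print(replica_a.get(), replica_b.get())  # 两个从节点各收到一份副本
```

由此也能看出该模式的扩展性：新增从节点只需再 bind 一个队列，生产者代码完全不用改。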
                    </description>
                    <pubDate>Tue, 04 Mar 2025 21:43:56 EST</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Lucene.Net 入门和简单使用]]>
                    </title>
                    <link>https://wangyou233.wang/archives/2897</link>
                    <description>
                            <![CDATA[<p>以下是润色优化后的技术博客内容，采用专业Markdown格式呈现：</p><hr /><h1 id="lucene.net-%E5%85%A8%E6%96%87%E6%90%9C%E7%B4%A2%E5%BC%95%E6%93%8E%E5%9C%A8asp.net-core%E4%B8%AD%E7%9A%84%E6%B7%B1%E5%BA%A6%E5%AE%9E%E8%B7%B5" tabindex="-1"><a href="http://Lucene.Net" target="_blank">Lucene.Net</a> <a href="http://xn--ASP-dw1ey6x2xmnrizmbr8bv65j.NET" target="_blank">全文搜索引擎在ASP.NET</a> Core中的深度实践</h1><p><img src="/upload/2025/03/image-1741155091019.png" alt="image-1741155091019" /></p><h2 id="%E7%9B%AE%E5%BD%95" tabindex="-1">目录</h2><p><div class="table-of-contents"><ul><li><a href="#lucene.net-%E5%85%A8%E6%96%87%E6%90%9C%E7%B4%A2%E5%BC%95%E6%93%8E%E5%9C%A8asp.net-core%E4%B8%AD%E7%9A%84%E6%B7%B1%E5%BA%A6%E5%AE%9E%E8%B7%B5"><a href="http://Lucene.Net" target="_blank">Lucene.Net</a> <a href="http://xn--ASP-dw1ey6x2xmnrizmbr8bv65j.NET" target="_blank">全文搜索引擎在ASP.NET</a> Core中的深度实践</a><ul><li><a href="#%E7%9B%AE%E5%BD%95">目录</a></li><li><a href="#%E6%A0%B8%E5%BF%83%E7%89%B9%E6%80%A7%E8%A7%A3%E6%9E%90">核心特性解析</a></li><li><a href="#%E5%85%B8%E5%9E%8B%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF">典型应用场景</a></li><li><a href="#%E7%8E%AF%E5%A2%83%E9%85%8D%E7%BD%AE%E6%8C%87%E5%8D%97">环境配置指南</a></li><li><a href="#%E5%AD%97%E6%AE%B5%E7%B1%BB%E5%9E%8B%E8%AF%A6%E8%A7%A3">字段类型详解</a></li><li><a href="#%E6%9F%A5%E8%AF%A2%E6%A8%A1%E5%BC%8F%E5%85%A8%E8%A7%A3%E6%9E%90">查询模式全解析</a><ul><li><a href="#1.-%E5%A4%8D%E5%90%88%E5%B8%83%E5%B0%94%E6%9F%A5%E8%AF%A2">1. 复合布尔查询</a></li><li><a href="#2.-%E7%9F%AD%E8%AF%AD%E8%BF%91%E4%BC%BC%E6%90%9C%E7%B4%A2">2. 短语近似搜索</a></li><li><a href="#3.-%E6%AD%A3%E5%88%99%E8%A1%A8%E8%BE%BE%E5%BC%8F%E6%9F%A5%E8%AF%A2">3. 正则表达式查询</a></li><li><a href="#4.-%E7%A9%BA%E9%97%B4%E4%BD%8D%E7%BD%AE%E6%90%9C%E7%B4%A2">4. 空间位置搜索</a></li></ul></li><li><a href="#%E9%AB%98%E9%98%B6%E5%BA%94%E7%94%A8%E6%8A%80%E5%B7%A7">高阶应用技巧</a><ul><li><a href="#1.-%E7%B4%A2%E5%BC%95%E4%BC%98%E5%8C%96%E7%AD%96%E7%95%A5">1. 
索引优化策略</a></li><li><a href="#2.-%E8%87%AA%E5%AE%9A%E4%B9%89%E7%9B%B8%E4%BC%BC%E5%BA%A6%E7%AE%97%E6%B3%95">2. 自定义相似度算法</a></li><li><a href="#3.-%E6%90%9C%E7%B4%A2%E7%83%AD%E8%AF%8D%E7%BB%9F%E8%AE%A1">3. 搜索热词统计</a></li></ul></li><li><a href="#%E5%AE%9E%E6%88%98%E4%BB%A3%E7%A0%81%E7%A4%BA%E4%BE%8B">实战代码示例</a></li><li><a href="#%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5%E5%BB%BA%E8%AE%AE">最佳实践建议</a></li><li><a href="#%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98%E6%8E%92%E6%9F%A5">常见问题排查</a></li></ul></li></ul></div></p><h2 id="%E6%A0%B8%E5%BF%83%E7%89%B9%E6%80%A7%E8%A7%A3%E6%9E%90" tabindex="-1">核心特性解析</h2><p><a href="http://Lucene.Net" target="_blank">Lucene.Net</a>（v4.8.0）作为Apache顶级项目的.NET实现，具备以下核心能力：</p><ol><li><p><strong>多语言文本处理</strong><br />集成SmartCN、ICU等分析器，支持中/英/日等20+语言处理</p></li><li><p><strong>混合索引架构</strong><br />支持内存(MMapDirectory)与磁盘(FSDirectory)混合存储模式</p></li><li><p><strong>实时搜索优化</strong><br />NRT（Near Real-Time）机制实现秒级索引可见性</p></li><li><p><strong>分布式扩展</strong><br />通过Sharding支持水平扩展，处理PB级数据</p></li><li><p><strong>相关性算法</strong><br />TF-IDF/BM25算法保证结果相关性排序</p></li></ol><h2 id="%E5%85%B8%E5%9E%8B%E5%BA%94%E7%94%A8%E5%9C%BA%E6%99%AF" tabindex="-1">典型应用场景</h2><table><thead><tr><th>场景类型</th><th>实现要点</th><th>性能指标</th></tr></thead><tbody><tr><td>电商商品搜索</td><td>多字段加权+Facet过滤</td><td>QPS 10k+</td></tr><tr><td>日志分析系统</td><td>时间范围索引+快速滚动查询</td><td>百万级/秒</td></tr><tr><td>内容管理系统</td><td>中文分词+同义词扩展</td><td>毫秒响应</td></tr><tr><td>大数据分析</td><td>Hadoop集成+MapReduce索引构建</td><td>亿级文档处理</td></tr></tbody></table><h2 id="%E7%8E%AF%E5%A2%83%E9%85%8D%E7%BD%AE%E6%8C%87%E5%8D%97" tabindex="-1">环境配置指南</h2><p>推荐使用最新稳定版本（2023-Q4）：</p><pre><code class="language-bash">dotnet add package Lucene.Net --version 4.8.0dotnet add package Lucene.Net.Analysis.SmartCn --version 
4.8.0</code></pre><p>重要依赖说明：</p><ul><li><code>Lucene.Net.Analysis.Common</code>：基础分析器</li><li><code>Lucene.Net.QueryParser</code>：查询语法解析</li><li><code>Lucene.Net.Spatial</code>：地理空间搜索</li><li><code>Lucene.Net.Highlighter</code>：搜索结果高亮</li></ul><h2 id="%E5%AD%97%E6%AE%B5%E7%B1%BB%E5%9E%8B%E8%AF%A6%E8%A7%A3" tabindex="-1">字段类型详解</h2><table><thead><tr><th>字段类型</th><th>索引方式</th><th>存储策略</th><th>典型应用场景</th></tr></thead><tbody><tr><td>TextField</td><td>分词索引</td><td>存储原文</td><td>正文内容搜索</td></tr><tr><td>StringField</td><td>精确索引</td><td>仅存储</td><td>ID/状态码匹配</td></tr><tr><td>Int32Field</td><td>范围索引</td><td>数值存储</td><td>价格区间过滤</td></tr><tr><td>SortedDocValuesField</td><td>排序索引</td><td>独立存储</td><td>结果排序依据</td></tr><tr><td>StoredField</td><td>不索引</td><td>原始存储</td><td>结果字段回显</td></tr></tbody></table><h2 id="%E6%9F%A5%E8%AF%A2%E6%A8%A1%E5%BC%8F%E5%85%A8%E8%A7%A3%E6%9E%90" tabindex="-1">查询模式全解析</h2><h3 id="1.-%E5%A4%8D%E5%90%88%E5%B8%83%E5%B0%94%E6%9F%A5%E8%AF%A2" tabindex="-1">1. 复合布尔查询</h3><pre><code class="language-csharp">var boolQuery = new BooleanQuery {    { new TermQuery(new Term(&quot;title&quot;, &quot;ASP.NET&quot;)), Occur.MUST },    { NumericRangeQuery.NewInt32Range(&quot;views&quot;, 1000, 5000, true, true), Occur.SHOULD },    { new PrefixQuery(new Term(&quot;category&quot;, &quot;/tech/&quot;)), Occur.FILTER }};</code></pre><h3 id="2.-%E7%9F%AD%E8%AF%AD%E8%BF%91%E4%BC%BC%E6%90%9C%E7%B4%A2" tabindex="-1">2. 短语近似搜索</h3><pre><code class="language-csharp">var phraseQuery = new PhraseQuery {    new Term(&quot;content&quot;, &quot;云原生&quot;),    new Term(&quot;content&quot;, &quot;架构&quot;)};phraseQuery.Slop = 3; // 允许中间间隔3个词</code></pre><h3 id="3.-%E6%AD%A3%E5%88%99%E8%A1%A8%E8%BE%BE%E5%BC%8F%E6%9F%A5%E8%AF%A2" tabindex="-1">3. 正则表达式查询</h3><pre><code class="language-csharp">var regexQuery = new RegexQuery(new Term(&quot;email&quot;, @&quot;^user\d+@domain\.com$&quot;));</code></pre><h3 id="4.-%E7%A9%BA%E9%97%B4%E4%BD%8D%E7%BD%AE%E6%90%9C%E7%B4%A2" tabindex="-1">4. 
空间位置搜索</h3><pre><code class="language-csharp">var ctx = SpatialContext.Geo;var strategy = new RecursivePrefixTreeStrategy(new QuadPrefixTree(ctx), &quot;geo&quot;);var query = strategy.MakeQuery(new SpatialArgs(    SpatialOperation.Intersects,    ctx.MakeCircle(116.4074, 39.9042, DistanceUtils.Degrees2Dist(10, DistanceUtils.EARTH_MEAN_RADIUS_KM)));</code></pre><h2 id="%E9%AB%98%E9%98%B6%E5%BA%94%E7%94%A8%E6%8A%80%E5%B7%A7" tabindex="-1">高阶应用技巧</h2><h3 id="1.-%E7%B4%A2%E5%BC%95%E4%BC%98%E5%8C%96%E7%AD%96%E7%95%A5" tabindex="-1">1. 索引优化策略</h3><pre><code class="language-csharp">var config = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer){    UseCompoundFile = false, // 提升IO性能    RAMBufferSizeMB = 512,   // 内存缓冲区    MergePolicy = new TieredMergePolicy {        SegmentsPerTier = 10,        MaxMergeAtOnce = 5    }};</code></pre><h3 id="2.-%E8%87%AA%E5%AE%9A%E4%B9%89%E7%9B%B8%E4%BC%BC%E5%BA%A6%E7%AE%97%E6%B3%95" tabindex="-1">2. 自定义相似度算法</h3><pre><code class="language-csharp">public class CustomSimilarity : BM25Similarity{    protected override float Idf(long docFreq, long numDocs)    {        return (float)(Math.Log(numDocs / (docFreq + 1)) + 1.0);    }}searcher.Similarity = new CustomSimilarity();</code></pre><h3 id="3.-%E6%90%9C%E7%B4%A2%E7%83%AD%E8%AF%8D%E7%BB%9F%E8%AE%A1" tabindex="-1">3. 
Hot search term statistics</h3><pre><code class="language-csharp">using var reader = DirectoryReader.Open(directory);

// HighFreqTerms takes a single field name per call
var result = HighFreqTerms.GetHighFreqTerms(
    reader,
    10,
    &quot;content&quot;,
    new HighFreqTerms.TotalTermFreqComparator());
</code></pre><h2 id="%E5%AE%9E%E6%88%98%E4%BB%A3%E7%A0%81%E7%A4%BA%E4%BE%8B" tabindex="-1"><a id="实战代码示例"></a>Practical code example</h2><pre><code class="language-csharp">[ApiController]
[Route(&quot;api/search&quot;)]
public class SearchController : ControllerBase
{
    private const LuceneVersion AppLuceneVersion = LuceneVersion.LUCENE_48;
    private static readonly string IndexPath = Path.Combine(Environment.CurrentDirectory, &quot;search_index&quot;);

    // Thread-safe IndexWriter singleton
    private static readonly Lazy&lt;IndexWriter&gt; LazyWriter = new(() =&gt;
    {
        var analyzer = new SmartChineseAnalyzer(AppLuceneVersion);
        var config = new IndexWriterConfig(AppLuceneVersion, analyzer)
        {
            OpenMode = OpenMode.CREATE_OR_APPEND,
            CommitOnClose = true
        };
        return new IndexWriter(FSDirectory.Open(IndexPath), config);
    });

    [HttpPost(&quot;index&quot;)]
    public IActionResult IndexDocument([FromBody] SearchDocument doc)
    {
        var writer = LazyWriter.Value;
        var document = new Document
        {
            // the id field must be indexed so UpdateDocument&#39;s Term can match it
            new StringField(&quot;id&quot;, doc.Id.ToString(), Field.Store.YES),
            new TextField(&quot;title&quot;, doc.Title, Field.Store.YES),
            new TextField(&quot;content&quot;, doc.Content, Field.Store.NO),
            new Int32Field(&quot;views&quot;, doc.Views, Field.Store.YES),
            // numeric doc values are the simplest way to support sorting
            new NumericDocValuesField(&quot;sort_order&quot;, doc.Score)
        };
        writer.UpdateDocument(new Term(&quot;id&quot;, doc.Id.ToString()), document);
        return Ok();
    }

    [HttpGet(&quot;query&quot;)]
    public IActionResult Search(string q, int page = 1, int size = 10)
    {
        using var reader = LazyWriter.Value.GetReader(applyAllDeletes: true);
        var searcher = new IndexSearcher(reader);

        // field boosts are passed as a dictionary, not embedded as &quot;title^2&quot;
        var boosts = new Dictionary&lt;string, float&gt; { [&quot;title&quot;] = 2f };
        var parser = new MultiFieldQueryParser(
            AppLuceneVersion,
            new[] { &quot;title&quot;, &quot;content&quot; },
            new SmartChineseAnalyzer(AppLuceneVersion),
            boosts);

        var query = parser.Parse(QueryParserBase.Escape(q));

        var collector = TopScoreDocCollector.Create(size * page, true);
        searcher.Search(query, collector);

        var results = collector.GetTopDocs((page - 1) * size, size)
            .ScoreDocs.Select(d =&gt;
            {
                var doc = searcher.Doc(d.Doc);
                return new SearchResult
                {
                    Title = doc.Get(&quot;title&quot;),
                    Score = d.Score
                };
            });

        return Ok(new PagedResult(results, page, size));
    }
}
</code></pre><h2 id="%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5%E5%BB%BA%E8%AE%AE" tabindex="-1"><a id="最佳实践建议"></a>Best-practice recommendations</h2><ol><li><p><strong>Index optimization</strong></p><ul><li>Periodically run <code>ForceMerge(1)</code> to merge segments (it is expensive, so schedule it off-peak)</li><li>Use <code>MMapDirectory</code> to speed up reads of large index files</li><li>Set a suitable <code>RAMBufferSizeMB</code> (raise it well above the 16 MB default for bulk indexing)</li></ul></li><li><p><strong>Query optimization</strong></p><ul><li>Avoid leading-wildcard queries (such as <code>*term</code>)</li><li>For numeric range queries prefer <code>NumericRangeQuery</code> (the 4.8 equivalent of the later <code>PointRangeQuery</code>)</li><li>Cache frequently used filter conditions, e.g. with <code>CachingWrapperFilter</code></li></ul></li><li><p><strong>Security</strong></p><pre><code class="language-csharp">// Defend against query injection
public static string SanitizeQuery(string input)
{
    return QueryParserBase.Escape(input)
        .Replace(&quot;&#39;&quot;, &quot;&quot;)
        .Replace(&quot;\&quot;&quot;, &quot;&quot;);
}
</code></pre></li></ol><h2 id="%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98%E6%8E%92%E6%9F%A5" tabindex="-1"><a id="常见问题排查"></a>Troubleshooting common problems</h2><p><strong>Q1: Index files locked?</strong></p><pre><code class="language-csharp">// Force-release a stale write lock
// (only safe when no other process is writing to the index)
var dir = FSDirectory.Open(IndexPath);
if (IndexWriter.IsLocked(dir))
{
    IndexWriter.Unlock(dir);
}
</code></pre><p><strong>Q2: Inaccurate Chinese word segmentation?</strong></p><p>A recommended combination:</p><pre><code class="language-csharp">// AnalyzerWrapper itself is abstract; SynonymAnalyzer and ChineseSynonyms here
// are placeholders for a custom Analyzer that chains a SynonymFilter after
// SmartChineseAnalyzer&#39;s token stream
Analyzer analyzer = new SynonymAnalyzer(
    new SmartChineseAnalyzer(AppLuceneVersion),
    new ChineseSynonyms());
</code></pre><p><strong>Q3: Low search relevance?</strong></p><p>Tune the BM25 parameters:</p><pre><code class="language-csharp">searcher.Similarity = new BM25Similarity(k1: 1.2f, b: 0.75f);
</code></pre><p><strong>Q4: Memory keeps growing?</strong></p><p>Check that:</p><ol><li>Every <code>IndexReader</code> is properly disposed</li><li>The <code>IndexWriter</code> cache size is bounded</li><li>Temporary <code>Directory</code> instances are not created repeatedly</li></ol><hr />]]>
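<![CDATA[<p>To make the index-optimization advice above concrete, a minimal writer setup could look like the sketch below. The path and the 256 MB buffer are illustrative values, and while <code>MMapDirectory</code>, <code>RAMBufferSizeMB</code>, and <code>ForceMerge</code> are standard Lucene.NET 4.8 APIs, the tuning should be validated against your own workload rather than taken as given:</p><pre><code class="language-csharp">var analyzer = new SmartChineseAnalyzer(LuceneVersion.LUCENE_48);

// Memory-mapped directory for faster reads of large index files
var directory = new MMapDirectory(new DirectoryInfo(&quot;search_index&quot;));

var config = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer)
{
    RAMBufferSizeMB = 256 // flush threshold; the default is 16 MB
};

using var writer = new IndexWriter(directory, config);
// ... index documents here ...
writer.Commit();

// Merge down to one segment during off-peak hours; this is an expensive operation
writer.ForceMerge(1);
</code></pre>]]>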
                    </description>
                    <pubDate>Wed, 20 Dec 2023 15:38:44 EST</pubDate>
                </item>
                <item>
                    <title>
<![CDATA[JavaScript permission pitfalls: GPS, audio, and video]]>
                    </title>
                    <link>https://wangyou233.wang/archives/184</link>
                    <description>
<![CDATA[<h1 id="%E6%8E%88%E6%9D%83" tabindex="-1">Permissions</h1><h2 id="gps-%E6%8E%88%E6%9D%83" tabindex="-1">GPS permission</h2><pre><code class="language-javascript">function getLocation() {
    if (navigator.geolocation) {
        navigator.geolocation.getCurrentPosition(showPosition, showError);
    } else {
        console.log(&quot;Geolocation is not supported by this browser.&quot;);
    }
}

function showPosition(position) {
    console.log(&quot;Latitude: &quot; + position.coords.latitude +
        &quot; Longitude: &quot; + position.coords.longitude);
}

function showError(error) {
    console.log(error);
}
</code></pre><h2 id="%E9%9F%B3%E9%A2%91%E6%8E%88%E6%9D%83" tabindex="-1">Audio permission</h2><pre><code class="language-javascript">if (navigator.mediaDevices.getUserMedia) {
  const constraints = { audio: true };
  navigator.mediaDevices.getUserMedia(constraints).then(
    stream =&gt; {
      console.log(&quot;Permission granted!&quot;);
    },
    () =&gt; {
      console.error(&quot;Permission denied!&quot;);
    }
  );
} else {
  console.error(&quot;The browser does not support getUserMedia&quot;);
}
</code></pre><h2 id="%E6%91%84%E5%83%8F%E5%A4%B4%E8%A7%86%E9%A2%91%E6%8E%88%E6%9D%83" tabindex="-1">Camera (video) permission</h2><pre><code class="language-javascript">const constraints = {
  video: true // request a video stream
};
const videoElement = document.getElementById(&#39;video&#39;);
navigator.mediaDevices.getUserMedia(constraints)
  .then(stream =&gt; {
    videoElement.srcObject = stream;
  })
  .catch(error =&gt; {
    console.error(&#39;Failed to get camera permission:&#39;, error);
  });
</code></pre><h1 id="%E9%81%87%E5%88%B0%E9%97%AE%E9%A2%98" tabindex="-1">Problems encountered</h1><h2 id="%E6%B5%8F%E8%A7%88%E5%99%A8%E5%AE%89%E5%85%A8%E6%9C%BA%E5%88%B6" tabindex="-1">Browser security mechanism</h2><p>Following the steps above should, in theory, be enough. In practice it is not; spin up a server and verify it yourself.</p><p>You will find that in Chrome, http://localhost:8080 and <a href="http://127.0.0.1:8080" target="_blank">http://127.0.0.1:8080</a> can obtain the geolocation normally, but accessing the page via an IP address or a domain name, such as <a href="http://172.21.3.82:8080" target="_blank">http://172.21.3.82:8080</a> or <a href="http://b.cunzhang.com" target="_blank">http://b.cunzhang.com</a>, cannot.</p><p>Why? Open the console and you will see the error:<br />Only secure origins are allowed (see: <a href="https://goo.gl/Y0ZkNV" target="_blank">https://goo.gl/Y0ZkNV</a>).</p><p>&quot;Only secure origins are allowed.&quot; The message includes a link (<a href="https://goo.gl/Y0ZkNV" target="_blank">https://goo.gl/Y0ZkNV</a>): to protect users, Chrome enables the geolocation service only for secure origins. So what counts as a secure origin? The linked page says:</p><p>“Secure origins” are origins that match at least one of the following (scheme, host, port) patterns:</p><ul><li><p>(https, *, *)</p></li><li><p>(wss, *, *)</p></li><li><p>(*, localhost, *)</p></li><li><p>(*, 127/8, *)</p></li><li><p>(*, ::1/128, *)</p></li><li><p>(file, *, —)</p></li><li><p>(chrome-extension, *, —)</p></li></ul><p>This list may be incomplete, and may need to be changed. Please discuss!</p><p>In other words, only origins whose scheme, host, or port matches one of the patterns above are treated as secure; the list is not final and may change after discussion.</p><p>This explains why the location is available under http://localhost:8080 and http://127.0.0.1:8080 but not under http://172.21.3.82:8080 or <a href="http://b.cunzhang.com" target="_blank">http://b.cunzhang.com</a>.</p><h3 id="%E6%96%B9%E6%B3%95%E4%B8%80" tabindex="-1">Method 1</h3><p>If geolocation must work when the site is accessed by domain name, the only real fix is to upgrade from HTTP to HTTPS.</p><h3 id="%E6%96%B9%E6%B3%95%E4%BA%8C" tabindex="-1">Method 2</h3><p>Enter <code>chrome://flags/#unsafely-treat-insecure-origin-as-secure</code> in the address bar and press Enter. As shown below, set the option to Enabled, type the addresses you need into the input box (separate multiple addresses with “,”), then click the Relaunch button that pops up at the bottom right. After the browser restarts automatically, you can use the camera, microphone, and geolocation from the added http addresses.</p><p><img src="/upload/2023/11/image.png" alt="image" /></p>]]>
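<![CDATA[<p>The pattern list above can be approximated in a few lines of plain JavaScript. This is only an illustrative sketch of the rule, not Chrome's actual implementation, and the function name <code>isSecureOrigin</code> is made up for this example:</p><pre><code class="language-javascript">// Rough approximation of Chrome&#39;s &quot;secure origin&quot; patterns
function isSecureOrigin(urlString) {
  const url = new URL(urlString);
  // (https, *, *), (wss, *, *), (file, *, —)
  if (url.protocol === &#39;https:&#39; || url.protocol === &#39;wss:&#39; || url.protocol === &#39;file:&#39;) {
    return true;
  }
  // (*, localhost, *) and (*, ::1/128, *)
  if (url.hostname === &#39;localhost&#39; || url.hostname === &#39;[::1]&#39;) {
    return true;
  }
  // (*, 127/8, *)
  return /^127(\.\d{1,3}){3}$/.test(url.hostname);
}

console.log(isSecureOrigin(&#39;http://localhost:8080&#39;));   // true
console.log(isSecureOrigin(&#39;http://127.0.0.1:8080&#39;));   // true
console.log(isSecureOrigin(&#39;http://172.21.3.82:8080&#39;)); // false
</code></pre>]]>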
                    </description>
                    <pubDate>Mon, 27 Nov 2023 16:11:50 EST</pubDate>
                </item>
                <item>
                    <title>
<![CDATA[.NET 7 middleware for UTC time-zone conversion]]>
                    </title>
                    <link>https://wangyou233.wang/archives/183</link>
                    <description>
<![CDATA[<h1 id="%E4%B8%BA%E4%BB%80%E4%B9%88%E8%A6%81%E5%AD%98%E5%82%A8utc%E6%97%B6%E9%97%B4" tabindex="-1">Why store UTC time</h1><p>Storing UTC (Coordinated Universal Time) keeps timestamps consistent and accurate across time zones and conversions. Some reasons:</p><ol><li><p>Time-zone independence: UTC is the global reference time and is unaffected by time zones. Storing UTC avoids conversions between zones and keeps times consistent no matter where the user is.</p></li><li><p>Cross-time-zone applications: for applications that operate across zones, UTC simplifies time arithmetic and comparison. Storing local times instead exposes you to zone conversions and daylight-saving changes that make calculations inaccurate.</p></li><li><p>Data consistency: when several systems or databases share time information, UTC keeps the data consistent. Each system can convert UTC to local time for display as needed without introducing inconsistencies.</p></li><li><p>Logging and auditing: UTC gives logs a uniform timestamp, so entries from different systems and locations can be ordered and compared accurately.</p></li><li><p>Accurate time arithmetic: when computing durations or comparing intervals, UTC avoids the offsets and inconsistencies introduced by daylight-saving time and similar factors.</p></li></ol><p>In short, UTC provides consistency, convenience for cross-zone operation, and data integrity, which is especially valuable for systems spanning multiple time zones. When display requires it, convert UTC to the user's local time.</p><h2 id="%E4%BB%A3%E7%A0%81%E4%B8%AD%E5%AE%9E%E7%8E%B0" tabindex="-1">Implementing it in code</h2><pre><code class="language-csharp">var dateTime = DateTime.UtcNow;
</code></pre><p>Done this way, every request parameter and every response value has to be converted by hand, which is not elegant.</p><h2 id="%E5%A6%82%E4%BD%95%E5%A4%84%E7%90%86" tabindex="-1">How to handle it</h2><p>Install the <code>Newtonsoft.Json</code> package.<br />Add <code>DateTimeFilter.cs</code>:</p><pre><code class="language-csharp">public class DateTimeFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        var parameters = context.ActionArguments;
        foreach (var parameter in parameters)
        {
            if (parameter.Value is DateTime dateTime)
            {
                dateTime = dateTime.ToUniversalTime();
                context.ActionArguments[parameter.Key] = dateTime;
            }
        }
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
    }
}
</code></pre><p>Add <code>DateTimeJsonConverter.cs</code>:</p><pre><code class="language-csharp">public class DateTimeJsonConverter : DateTimeConverterBase
{
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        // the user&#39;s configured time zone; load it from Redis or an in-memory cache
        var setting = &quot;&quot;;
        if (string.IsNullOrEmpty(setting))
        {
            DateTime dateTime = (DateTime)value;
            writer.WriteValue(dateTime.ToLocalTime());
        }
        else
        {
            TimeZoneInfo targetTimeZone = TimeZoneInfo.FindSystemTimeZoneById(setting);
            DateTime targetTime = TimeZoneInfo.ConvertTimeFromUtc((DateTime)value, targetTimeZone);
            writer.WriteValue(targetTime);
        }
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        DateTime dateTime = (DateTime)reader.Value;
        return dateTime.ToUniversalTime();
    }
}
</code></pre><blockquote><p>Register it in Program.cs:</p></blockquote><pre><code class="language-csharp">builder.Services.AddControllers(o =&gt;
{
    o.Filters.Add(typeof(DateTimeFilter));
}).AddNewtonsoftJson(options =&gt;
{
    options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
    options.SerializerSettings.ContractResolver = new DefaultContractResolver();
    options.SerializerSettings.DateFormatString = &quot;yyyy-MM-dd HH:mm:ss&quot;;
    options.SerializerSettings.Converters.Add(new StringEnumConverter());
    options.SerializerSettings.Converters.Add(new DateTimeJsonConverter());
});
</code></pre><p><code>DateTimeFilter</code> targets actions that take a bare value, such as:</p><pre><code class="language-csharp">[HttpGet(&quot;utc&quot;)]
public IActionResult Utc(DateTime utc)
{
    return Ok(utc);
}
</code></pre><p><code>DateTimeJsonConverter</code> cannot see bare action parameters; it only handles request body objects and response objects:</p><pre><code class="language-csharp">public class MyClass
{
    public DateTime Utc { get; set; }

    public MyClass1 MyClassx { get; set; }

    public class MyClass1
    {
        public DateTime Utc1 { get; set; }
    }
}

[HttpGet(&quot;utc&quot;)]
public IActionResult Utc(MyClass utc)
{
    return Ok(utc);
}

// list all time zones
[HttpGet(&quot;TimeZone&quot;)]
public IActionResult List()
{
    var list = TimeZoneInfo.GetSystemTimeZones();
    return Ok(list.Select(x =&gt;
    {
        return new
        {
            Id = x.Id,
            DisplayName = x.DisplayName
        };
    }).ToList());
}
</code></pre><p>Together these convert incoming and outgoing <code>DateTime</code> values to UTC and back to the current time zone automatically,<br />so the <code>Controller</code>, <code>Services</code>, and <code>Repository</code> layers no longer need any time-zone conversion code.</p>]]>
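<![CDATA[<p>The round trip the filter and converter perform can be sketched in a few lines of plain C#. This is an illustrative example only: <code>&quot;China Standard Time&quot;</code> is a Windows time-zone id used here as a stand-in (on Linux the IANA id <code>&quot;Asia/Shanghai&quot;</code> would typically be used), so substitute whatever id your users configure.</p><pre><code class="language-csharp">var utcNow = DateTime.UtcNow;

// what WriteJson does: UTC to the user&#39;s configured zone
var tz = TimeZoneInfo.FindSystemTimeZoneById(&quot;China Standard Time&quot;);
var localTime = TimeZoneInfo.ConvertTimeFromUtc(utcNow, tz);

// what ReadJson effectively does: zone time back to UTC
var backToUtc = TimeZoneInfo.ConvertTimeToUtc(localTime, tz);

Console.WriteLine(utcNow == backToUtc); // True (DateTime equality compares ticks)
</code></pre>]]>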
                    </description>
                    <pubDate>Tue, 31 Oct 2023 18:57:39 EDT</pubDate>
                </item>
                <item>
                    <title>
<![CDATA[Abp.Zero framework upgrade]]>
                    </title>
                    <link>https://wangyou233.wang/archives/153</link>
                    <description>
<![CDATA[<h1 id="abp.zero%E6%A1%86%E6%9E%B6%E5%8D%87%E7%BA%A7" tabindex="-1">Abp.Zero framework upgrade</h1><p>This upgrade was done by migrating the project; watch out for the points below.</p><h2 id="apppermissions-%E6%96%87%E4%BB%B6%E5%86%85%E5%AE%B9" tabindex="-1">AppPermissions file contents</h2><h2 id="appsettingprovider-%E6%96%87%E4%BB%B6%E5%86%85%E5%AE%B9" tabindex="-1">AppSettingProvider file contents</h2><h2 id="uicustomizationsettingsappservice-%E6%96%87%E4%BB%B6%E5%86%85%E5%AE%B9" tabindex="-1">UiCustomizationSettingsAppService file contents</h2><h2 id="startup.cs%E6%96%87%E4%BB%B6" tabindex="-1">Startup.cs file</h2><p><strong>Mind both the contents and the order</strong></p><pre><code class="language-csharp">public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews(options =&gt;
    {
        options.Filters.Add(new AbpAutoValidateAntiforgeryTokenAttribute());
    })
#if DEBUG
        .AddRazorRuntimeCompilation()
#endif
        .AddNewtonsoftJson();

    // Configure the cookie policy, otherwise some browsers cannot sign in without SSL
    services.Configure&lt;CookiePolicyOptions&gt;(options =&gt;
    {
        options.MinimumSameSitePolicy = SameSiteMode.Lax;
    });
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseCookiePolicy();
    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =&gt;
    {
        endpoints.MapHub&lt;AbpCommonHub&gt;(&quot;/signalr&quot;);
        endpoints.MapControllerRoute(&quot;defaultWithArea&quot;, &quot;{area}/{controller=Home}/{action=Index}/{id?}&quot;);
        endpoints.MapControllerRoute(&quot;default&quot;, &quot;{controller=Home}/{action=Index}/{id?}&quot;);
    });
}
</code></pre><h2 id="%E8%BF%81%E7%A7%BB%E8%84%9A%E6%9C%AC%E5%91%BD%E4%BB%A4" tabindex="-1">Migration script commands</h2><h3 
id="%E8%BF%81%E7%A7%BB%E8%84%9A%E6%9C%AC%E9%9C%80%E6%B3%A8%E6%84%8F%E6%96%87%E4%BB%B6%E5%A4%84%E7%90%86" tabindex="-1">迁移脚本需注意文件处理</h3><blockquote><p>WebContentDirectoryFinder.cs</p></blockquote><h3 id="ef-core%E8%BF%81%E7%A7%BB%E5%91%BD%E4%BB%A4" tabindex="-1">EF Core迁移命令</h3><pre><code class="language-cmd">dotnet ef database update --startup-project=./x.Web.Mvc --project=./x.EntityFrameworkCore --context=xDbContext</code></pre><h3 id="ef-coresql" tabindex="-1">EF CoreSQL</h3><pre><code class="language-cmd">dotnet ef migrations script Upgrated_To_ABP_4_8_0  Upgraded_To_Abp_6_4_0  --startup-project=./x.Web.Mvc --project=./x.EntityFrameworkCore --context=xDbContext</code></pre><pre><code class="language-sql">BEGIN TRANSACTION;GOALTER TABLE [AbpEditions] ADD [DailyPrice] decimal(18,2) NULL;GOALTER TABLE [AbpEditions] ADD [WeeklyPrice] decimal(18,2) NULL;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20190801133107_Updated_SubscribableEdition&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOCREATE TABLE [AppSubscriptionPaymentsExtensionData] (    [Id] bigint NOT NULL IDENTITY,    [SubscriptionPaymentId] bigint NOT NULL,    [Key] nvarchar(450) NULL,    [Value] nvarchar(max) NULL,    [IsDeleted] bit NOT NULL,    CONSTRAINT [PK_AppSubscriptionPaymentsExtensionData] PRIMARY KEY ([Id]));GOCREATE UNIQUE INDEX [IX_AppSubscriptionPaymentsExtensionData_SubscriptionPaymentId_Key_IsDeleted] ON [AppSubscriptionPaymentsExtensionData] ([SubscriptionPaymentId], [Key], [IsDeleted]) WHERE [Key] IS NOT NULL;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20191015062846_Add_Subscription_Payment_Extension_Data&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOALTER TABLE [AppSubscriptionPayments] ADD [EditionPaymentType] int NOT NULL DEFAULT 0;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20191120123128_Add-EditionPaymentType-To-SubscriptionPayment&#39;, 
N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GODROP INDEX [IX_AbpUserLoginAttempts_TenancyName_UserNameOrEmailAddress_Result] ON [AbpUserLoginAttempts];DECLARE @var0 sysname;SELECT @var0 = [d].[name]FROM [sys].[default_constraints] [d]INNER JOIN [sys].[columns] [c] ON [d].[parent_column_id] = [c].[column_id] AND [d].[parent_object_id] = [c].[object_id]WHERE ([d].[parent_object_id] = OBJECT_ID(N&#39;[AbpUserLoginAttempts]&#39;) AND [c].[name] = N&#39;UserNameOrEmailAddress&#39;);IF @var0 IS NOT NULL EXEC(N&#39;ALTER TABLE [AbpUserLoginAttempts] DROP CONSTRAINT [&#39; + @var0 + &#39;];&#39;);ALTER TABLE [AbpUserLoginAttempts] ALTER COLUMN [UserNameOrEmailAddress] nvarchar(256) NULL;CREATE INDEX [IX_AbpUserLoginAttempts_TenancyName_UserNameOrEmailAddress_Result] ON [AbpUserLoginAttempts] ([TenancyName], [UserNameOrEmailAddress], [Result]);GODECLARE @var1 sysname;SELECT @var1 = [d].[name]FROM [sys].[default_constraints] [d]INNER JOIN [sys].[columns] [c] ON [d].[parent_column_id] = [c].[column_id] AND [d].[parent_object_id] = [c].[object_id]WHERE ([d].[parent_object_id] = OBJECT_ID(N&#39;[AbpSettings]&#39;) AND [c].[name] = N&#39;Value&#39;);IF @var1 IS NOT NULL EXEC(N&#39;ALTER TABLE [AbpSettings] DROP CONSTRAINT [&#39; + @var1 + &#39;];&#39;);ALTER TABLE [AbpSettings] ALTER COLUMN [Value] nvarchar(max) NULL;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20191213093244_Upgraded_To_ABP_5_1&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOCREATE TABLE [AbpWebhookEvents] (    [Id] uniqueidentifier NOT NULL,    [WebhookName] nvarchar(max) NOT NULL,    [Data] nvarchar(max) NULL,    [CreationTime] datetime2 NOT NULL,    [TenantId] int NULL,    [IsDeleted] bit NOT NULL,    [DeletionTime] datetime2 NULL,    CONSTRAINT [PK_AbpWebhookEvents] PRIMARY KEY ([Id]));GOCREATE TABLE [AbpWebhookSubscriptions] (    [Id] uniqueidentifier NOT NULL,    [CreationTime] datetime2 NOT NULL,    [CreatorUserId] bigint NULL,    [TenantId] int NULL,   
 [WebhookUri] nvarchar(max) NOT NULL,    [Secret] nvarchar(max) NOT NULL,    [IsActive] bit NOT NULL,    [Webhooks] nvarchar(max) NULL,    [Headers] nvarchar(max) NULL,    CONSTRAINT [PK_AbpWebhookSubscriptions] PRIMARY KEY ([Id]));GOCREATE TABLE [AbpWebhookSendAttempts] (    [Id] uniqueidentifier NOT NULL,    [WebhookEventId] uniqueidentifier NOT NULL,    [WebhookSubscriptionId] uniqueidentifier NOT NULL,    [Response] nvarchar(max) NULL,    [ResponseStatusCode] int NULL,    [CreationTime] datetime2 NOT NULL,    [LastModificationTime] datetime2 NULL,    [TenantId] int NULL,    CONSTRAINT [PK_AbpWebhookSendAttempts] PRIMARY KEY ([Id]),    CONSTRAINT [FK_AbpWebhookSendAttempts_AbpWebhookEvents_WebhookEventId] FOREIGN KEY ([WebhookEventId]) REFERENCES [AbpWebhookEvents] ([Id]) ON DELETE CASCADE);GOCREATE INDEX [IX_AbpWebhookSendAttempts_WebhookEventId] ON [AbpWebhookSendAttempts] ([WebhookEventId]);GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20200117141413_Upgraded_To_ABP_5_2_0&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20200305082815_Upgraded_To_Abp_5_3&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOCREATE TABLE [AppUserDelegations] (    [Id] bigint NOT NULL IDENTITY,    [CreationTime] datetime2 NOT NULL,    [CreatorUserId] bigint NULL,    [LastModificationTime] datetime2 NULL,    [LastModifierUserId] bigint NULL,    [IsDeleted] bit NOT NULL,    [DeleterUserId] bigint NULL,    [DeletionTime] datetime2 NULL,    [SourceUserId] bigint NOT NULL,    [TargetUserId] bigint NOT NULL,    [TenantId] int NULL,    [StartTime] datetime2 NOT NULL,    [EndTime] datetime2 NOT NULL,    CONSTRAINT [PK_AppUserDelegations] PRIMARY KEY ([Id]));GOCREATE INDEX [IX_AppUserDelegations_TenantId_SourceUserId] ON [AppUserDelegations] ([TenantId], [SourceUserId]);GOCREATE INDEX [IX_AppUserDelegations_TenantId_TargetUserId] ON [AppUserDelegations] 
([TenantId], [TargetUserId]);GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20200315101156_Added_UserDelegations_Entity&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOCREATE TABLE [AbpDynamicParameters] (    [Id] int NOT NULL IDENTITY,    [ParameterName] nvarchar(450) NULL,    [InputType] nvarchar(max) NULL,    [Permission] nvarchar(max) NULL,    [TenantId] int NULL,    CONSTRAINT [PK_AbpDynamicParameters] PRIMARY KEY ([Id]));GOCREATE TABLE [AbpDynamicParameterValues] (    [Id] int NOT NULL IDENTITY,    [Value] nvarchar(max) NOT NULL,    [TenantId] int NULL,    [DynamicParameterId] int NOT NULL,    CONSTRAINT [PK_AbpDynamicParameterValues] PRIMARY KEY ([Id]),    CONSTRAINT [FK_AbpDynamicParameterValues_AbpDynamicParameters_DynamicParameterId] FOREIGN KEY ([DynamicParameterId]) REFERENCES [AbpDynamicParameters] ([Id]) ON DELETE CASCADE);GOCREATE TABLE [AbpEntityDynamicParameters] (    [Id] int NOT NULL IDENTITY,    [EntityFullName] nvarchar(450) NULL,    [DynamicParameterId] int NOT NULL,    [TenantId] int NULL,    CONSTRAINT [PK_AbpEntityDynamicParameters] PRIMARY KEY ([Id]),    CONSTRAINT [FK_AbpEntityDynamicParameters_AbpDynamicParameters_DynamicParameterId] FOREIGN KEY ([DynamicParameterId]) REFERENCES [AbpDynamicParameters] ([Id]) ON DELETE CASCADE);GOCREATE TABLE [AbpEntityDynamicParameterValues] (    [Id] int NOT NULL IDENTITY,    [Value] nvarchar(max) NOT NULL,    [EntityId] nvarchar(max) NULL,    [EntityDynamicParameterId] int NOT NULL,    [TenantId] int NULL,    CONSTRAINT [PK_AbpEntityDynamicParameterValues] PRIMARY KEY ([Id]),    CONSTRAINT [FK_AbpEntityDynamicParameterValues_AbpEntityDynamicParameters_EntityDynamicParameterId] FOREIGN KEY ([EntityDynamicParameterId]) REFERENCES [AbpEntityDynamicParameters] ([Id]) ON DELETE CASCADE);GOCREATE UNIQUE INDEX [IX_AbpDynamicParameters_ParameterName_TenantId] ON [AbpDynamicParameters] ([ParameterName], [TenantId]) WHERE [ParameterName] IS NOT NULL AND [TenantId] IS 
NOT NULL;GOCREATE INDEX [IX_AbpDynamicParameterValues_DynamicParameterId] ON [AbpDynamicParameterValues] ([DynamicParameterId]);GOCREATE INDEX [IX_AbpEntityDynamicParameters_DynamicParameterId] ON [AbpEntityDynamicParameters] ([DynamicParameterId]);GOCREATE UNIQUE INDEX [IX_AbpEntityDynamicParameters_EntityFullName_DynamicParameterId_TenantId] ON [AbpEntityDynamicParameters] ([EntityFullName], [DynamicParameterId], [TenantId]) WHERE [EntityFullName] IS NOT NULL AND [TenantId] IS NOT NULL;GOCREATE INDEX [IX_AbpEntityDynamicParameterValues_EntityDynamicParameterId] ON [AbpEntityDynamicParameterValues] ([EntityDynamicParameterId]);GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20200317114116_Add_Dynamic_Entity_Parameters&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20200406060103_Remove_OrganizationUnit_Unique_Index&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GODROP TABLE [AbpDynamicParameterValues];GODROP TABLE [AbpEntityDynamicParameterValues];GODROP TABLE [AbpEntityDynamicParameters];GODROP TABLE [AbpDynamicParameters];GOCREATE TABLE [AbpDynamicProperties] (    [Id] int NOT NULL IDENTITY,    [PropertyName] nvarchar(450) NULL,    [InputType] nvarchar(max) NULL,    [Permission] nvarchar(max) NULL,    [TenantId] int NULL,    CONSTRAINT [PK_AbpDynamicProperties] PRIMARY KEY ([Id]));GOCREATE TABLE [AbpDynamicEntityProperties] (    [Id] int NOT NULL IDENTITY,    [EntityFullName] nvarchar(450) NULL,    [DynamicPropertyId] int NOT NULL,    [TenantId] int NULL,    CONSTRAINT [PK_AbpDynamicEntityProperties] PRIMARY KEY ([Id]),    CONSTRAINT [FK_AbpDynamicEntityProperties_AbpDynamicProperties_DynamicPropertyId] FOREIGN KEY ([DynamicPropertyId]) REFERENCES [AbpDynamicProperties] ([Id]) ON DELETE CASCADE);GOCREATE TABLE [AbpDynamicPropertyValues] (    [Id] int NOT NULL IDENTITY,    [Value] nvarchar(max) NOT NULL,    [TenantId] int NULL,   
 [DynamicPropertyId] int NOT NULL,    CONSTRAINT [PK_AbpDynamicPropertyValues] PRIMARY KEY ([Id]),    CONSTRAINT [FK_AbpDynamicPropertyValues_AbpDynamicProperties_DynamicPropertyId] FOREIGN KEY ([DynamicPropertyId]) REFERENCES [AbpDynamicProperties] ([Id]) ON DELETE CASCADE);GOCREATE TABLE [AbpDynamicEntityPropertyValues] (    [Id] int NOT NULL IDENTITY,    [Value] nvarchar(max) NOT NULL,    [EntityId] nvarchar(max) NULL,    [DynamicEntityPropertyId] int NOT NULL,    [TenantId] int NULL,    CONSTRAINT [PK_AbpDynamicEntityPropertyValues] PRIMARY KEY ([Id]),    CONSTRAINT [FK_AbpDynamicEntityPropertyValues_AbpDynamicEntityProperties_DynamicEntityPropertyId] FOREIGN KEY ([DynamicEntityPropertyId]) REFERENCES [AbpDynamicEntityProperties] ([Id]) ON DELETE CASCADE);GOCREATE INDEX [IX_AbpDynamicEntityProperties_DynamicPropertyId] ON [AbpDynamicEntityProperties] ([DynamicPropertyId]);GOCREATE UNIQUE INDEX [IX_AbpDynamicEntityProperties_EntityFullName_DynamicPropertyId_TenantId] ON [AbpDynamicEntityProperties] ([EntityFullName], [DynamicPropertyId], [TenantId]) WHERE [EntityFullName] IS NOT NULL AND [TenantId] IS NOT NULL;GOCREATE INDEX [IX_AbpDynamicEntityPropertyValues_DynamicEntityPropertyId] ON [AbpDynamicEntityPropertyValues] ([DynamicEntityPropertyId]);GOCREATE UNIQUE INDEX [IX_AbpDynamicProperties_PropertyName_TenantId] ON [AbpDynamicProperties] ([PropertyName], [TenantId]) WHERE [PropertyName] IS NOT NULL AND [TenantId] IS NOT NULL;GOCREATE INDEX [IX_AbpDynamicPropertyValues_DynamicPropertyId] ON [AbpDynamicPropertyValues] ([DynamicPropertyId]);GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20200805083139_Upgraded_To_Abp_5_11&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOALTER TABLE [AppBinaryObjects] ADD [Description] nvarchar(max) NULL;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20200928121432_Add_Description_To_Binary_Object&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN 
TRANSACTION;GOALTER TABLE [AbpPersistedGrants] ADD [ConsumedTime] datetime2 NULL;GOALTER TABLE [AbpPersistedGrants] ADD [Description] nvarchar(200) NULL;GOALTER TABLE [AbpPersistedGrants] ADD [SessionId] nvarchar(100) NULL;GOCREATE INDEX [IX_AbpPersistedGrants_Expiration] ON [AbpPersistedGrants] ([Expiration]);GOCREATE INDEX [IX_AbpPersistedGrants_SubjectId_SessionId_Type] ON [AbpPersistedGrants] ([SubjectId], [SessionId], [Type]);GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20201020131501_Upgraded_To_IdentityServer_v4&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOALTER TABLE [AbpEntityPropertyChanges] ADD [NewValueHash] nvarchar(max) NULL;GOALTER TABLE [AbpEntityPropertyChanges] ADD [OriginalValueHash] nvarchar(max) NULL;GOALTER TABLE [AbpDynamicProperties] ADD [DisplayName] nvarchar(max) NULL;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20201111120911_Upgraded_To_Abp_6_0&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOALTER TABLE [AbpDynamicPropertyValues] DROP CONSTRAINT [PK_AbpDynamicPropertyValues];GODECLARE @var2 sysname;SELECT @var2 = [d].[name]FROM [sys].[default_constraints] [d]INNER JOIN [sys].[columns] [c] ON [d].[parent_column_id] = [c].[column_id] AND [d].[parent_object_id] = [c].[object_id]WHERE ([d].[parent_object_id] = OBJECT_ID(N&#39;[AbpDynamicPropertyValues]&#39;) AND [c].[name] = N&#39;Id&#39;);IF @var2 IS NOT NULL EXEC(N&#39;ALTER TABLE [AbpDynamicPropertyValues] DROP CONSTRAINT [&#39; + @var2 + &#39;];&#39;);ALTER TABLE [AbpDynamicPropertyValues] DROP COLUMN [Id];GOALTER TABLE [AbpDynamicPropertyValues] ADD [Id] bigint NOT NULL IDENTITY;GOALTER TABLE [AbpDynamicPropertyValues] ADD CONSTRAINT [PK_AbpDynamicPropertyValues] PRIMARY KEY ([Id]);GOALTER TABLE [AbpDynamicEntityPropertyValues] DROP CONSTRAINT [PK_AbpDynamicEntityPropertyValues];GODECLARE @var3 sysname;SELECT @var3 = [d].[name]FROM [sys].[default_constraints] [d]INNER JOIN [sys].[columns] 
[c] ON [d].[parent_column_id] = [c].[column_id] AND [d].[parent_object_id] = [c].[object_id]WHERE ([d].[parent_object_id] = OBJECT_ID(N&#39;[AbpDynamicEntityPropertyValues]&#39;) AND [c].[name] = N&#39;Id&#39;);IF @var3 IS NOT NULL EXEC(N&#39;ALTER TABLE [AbpDynamicEntityPropertyValues] DROP CONSTRAINT [&#39; + @var3 + &#39;];&#39;);ALTER TABLE [AbpDynamicEntityPropertyValues] DROP COLUMN [Id];GOALTER TABLE [AbpDynamicEntityPropertyValues] ADD [Id] bigint NOT NULL IDENTITY;GOALTER TABLE [AbpDynamicEntityPropertyValues] ADD CONSTRAINT [PK_AbpDynamicEntityPropertyValues] PRIMARY KEY ([Id]);GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20201217075257_Upgrade_To_ABP_6_1&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GODROP INDEX [IX_AbpDynamicProperties_PropertyName_TenantId] ON [AbpDynamicProperties];DECLARE @var4 sysname;SELECT @var4 = [d].[name]FROM [sys].[default_constraints] [d]INNER JOIN [sys].[columns] [c] ON [d].[parent_column_id] = [c].[column_id] AND [d].[parent_object_id] = [c].[object_id]WHERE ([d].[parent_object_id] = OBJECT_ID(N&#39;[AbpDynamicProperties]&#39;) AND [c].[name] = N&#39;PropertyName&#39;);IF @var4 IS NOT NULL EXEC(N&#39;ALTER TABLE [AbpDynamicProperties] DROP CONSTRAINT [&#39; + @var4 + &#39;];&#39;);ALTER TABLE [AbpDynamicProperties] ALTER COLUMN [PropertyName] nvarchar(256) NULL;CREATE UNIQUE INDEX [IX_AbpDynamicProperties_PropertyName_TenantId] ON [AbpDynamicProperties] ([PropertyName], [TenantId]) WHERE [PropertyName] IS NOT NULL AND [TenantId] IS NOT NULL;GODROP INDEX [IX_AbpDynamicEntityProperties_EntityFullName_DynamicPropertyId_TenantId] ON [AbpDynamicEntityProperties];DECLARE @var5 sysname;SELECT @var5 = [d].[name]FROM [sys].[default_constraints] [d]INNER JOIN [sys].[columns] [c] ON [d].[parent_column_id] = [c].[column_id] AND [d].[parent_object_id] = [c].[object_id]WHERE ([d].[parent_object_id] = OBJECT_ID(N&#39;[AbpDynamicEntityProperties]&#39;) AND [c].[name] = 
N&#39;EntityFullName&#39;);IF @var5 IS NOT NULL EXEC(N&#39;ALTER TABLE [AbpDynamicEntityProperties] DROP CONSTRAINT [&#39; + @var5 + &#39;];&#39;);ALTER TABLE [AbpDynamicEntityProperties] ALTER COLUMN [EntityFullName] nvarchar(256) NULL;CREATE UNIQUE INDEX [IX_AbpDynamicEntityProperties_EntityFullName_DynamicPropertyId_TenantId] ON [AbpDynamicEntityProperties] ([EntityFullName], [DynamicPropertyId], [TenantId]) WHERE [EntityFullName] IS NOT NULL AND [TenantId] IS NOT NULL;GOALTER TABLE [AbpAuditLogs] ADD [ExceptionMessage] nvarchar(1024) NULL;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20210224123746_Upgraded_To_Abp_6_3&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GOBEGIN TRANSACTION;GOINSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])VALUES (N&#39;20210622135427_Upgraded_To_Abp_6_4_0&#39;, N&#39;5.0.10&#39;);GOCOMMIT;GO</code></pre><p>其中x为当前项目名</p><pre><code class="language-csharp"> var coreAssemblyDirectoryPath = Path.GetDirectoryName(typeof(SauryCoreModule).GetAssembly().Location);            if (coreAssemblyDirectoryPath == null)            {                throw new Exception(&quot;Could not find location of Saury.Core assembly!&quot;);            }            var directoryInfo = new DirectoryInfo(coreAssemblyDirectoryPath);            while (!DirectoryContains(directoryInfo.FullName, &quot;x.sln&quot;))            {                if (directoryInfo.Parent == null)                {                    throw new Exception(&quot;Could not find content root folder!&quot;);                }                directoryInfo = directoryInfo.Parent;            }            var webMvcFolder = Path.Combine(directoryInfo.FullName, $&quot;x.Web.Mvc&quot;);            if (Directory.Exists(webMvcFolder))            {                return webMvcFolder;            }            throw new Exception(&quot;Could not find root folder of the web project!&quot;);</code></pre><h2 
id="core.localization-%E4%B8%8B%E7%9A%84%E5%A4%9A%E8%AF%AD%E8%A8%80%E5%91%BD%E4%BB%A4" tabindex="-1">Core.Localization 下的多语言命令</h2><h2 id="workflow-%E5%8D%87%E7%BA%A7%E6%B3%A8%E6%84%8F" tabindex="-1">WorkFlow 升级注意</h2><p>workflow 需升级</p><pre><code class="language-csproj">    &lt;PackageReference Include=&quot;WorkflowCore&quot; Version=&quot;3.6.0&quot; /&gt;    &lt;PackageReference Include=&quot;WorkflowCore.DSL&quot; Version=&quot;3.6.0&quot; /&gt;    &lt;PackageReference Include=&quot;WorkflowCore.Persistence.SqlServer&quot; Version=&quot;3.6.0&quot; /&gt;</code></pre><p>WorkflowCore.DSL 包安装</p><h3 id="%E5%A6%82%E6%9E%9C%E9%87%8D%E5%86%99%E4%BA%86iexecutionresultprocessor" tabindex="-1">如果重写了IExecutionResultProcessor</h3><p>请注意 根据官方文件补齐内容<br /><a href="https://github.dev/danielgerlag/workflow-core" target="_blank">https://github.dev/danielgerlag/workflow-core</a></p><pre><code class="language-csharp">using System;using System.Collections.Generic;using System.Linq;using Microsoft.Extensions.Logging;using WorkflowCore.Interface;using WorkflowCore.Models;using WorkflowCore.Models.LifeCycleEvents;namespace WorkflowCore.Services{    public class ExecutionResultProcessor : IExecutionResultProcessor    {        private readonly IExecutionPointerFactory _pointerFactory;        private readonly IDateTimeProvider _datetimeProvider;        private readonly ILogger _logger;        private readonly ILifeCycleEventPublisher _eventPublisher;        private readonly IEnumerable&lt;IWorkflowErrorHandler&gt; _errorHandlers;        private readonly WorkflowOptions _options;        public ExecutionResultProcessor(IExecutionPointerFactory pointerFactory, IDateTimeProvider datetimeProvider, ILifeCycleEventPublisher eventPublisher, IEnumerable&lt;IWorkflowErrorHandler&gt; errorHandlers, WorkflowOptions options, ILoggerFactory loggerFactory)        {            _pointerFactory = pointerFactory;            _datetimeProvider = datetimeProvider;            _eventPublisher = 
eventPublisher;
            _errorHandlers = errorHandlers;
            _options = options;
            _logger = loggerFactory.CreateLogger&lt;ExecutionResultProcessor&gt;();
        }

        public void ProcessExecutionResult(WorkflowInstance workflow, WorkflowDefinition def, ExecutionPointer pointer, WorkflowStep step, ExecutionResult result, WorkflowExecutorResult workflowResult)
        {
            pointer.PersistenceData = result.PersistenceData;
            pointer.Outcome = result.OutcomeValue;

            if (result.SleepFor.HasValue)
            {
                pointer.SleepUntil = _datetimeProvider.UtcNow.Add(result.SleepFor.Value);
                pointer.Status = PointerStatus.Sleeping;
            }

            if (!string.IsNullOrEmpty(result.EventName))
            {
                pointer.EventName = result.EventName;
                pointer.EventKey = result.EventKey;
                pointer.Active = false;
                pointer.Status = PointerStatus.WaitingForEvent;

                workflowResult.Subscriptions.Add(new EventSubscription
                {
                    WorkflowId = workflow.Id,
                    StepId = pointer.StepId,
                    ExecutionPointerId = pointer.Id,
                    EventName = pointer.EventName,
                    EventKey = pointer.EventKey,
                    SubscribeAsOf = result.EventAsOf,
                    SubscriptionData = result.SubscriptionData
                });
            }

            if (result.Proceed)
            {
                pointer.Active = false;
                pointer.EndTime = _datetimeProvider.UtcNow;
                pointer.Status = PointerStatus.Complete;

                foreach (var outcomeTarget in step.Outcomes.Where(x =&gt; x.Matches(result, workflow.Data)))
                {
                    workflow.ExecutionPointers.Add(_pointerFactory.BuildNextPointer(def, pointer, outcomeTarget));
                }

                var 
pendingSubsequents = workflow.ExecutionPointers
                    .FindByStatus(PointerStatus.PendingPredecessor)
                    .Where(x =&gt; x.PredecessorId == pointer.Id);

                foreach (var subsequent in pendingSubsequents)
                {
                    subsequent.Status = PointerStatus.Pending;
                    subsequent.Active = true;
                }

                _eventPublisher.PublishNotification(new StepCompleted
                {
                    EventTimeUtc = _datetimeProvider.UtcNow,
                    Reference = workflow.Reference,
                    ExecutionPointerId = pointer.Id,
                    StepId = step.Id,
                    WorkflowInstanceId = workflow.Id,
                    WorkflowDefinitionId = workflow.WorkflowDefinitionId,
                    Version = workflow.Version
                });
            }
            else
            {
                foreach (var branch in result.BranchValues)
                {
                    foreach (var childDefId in step.Children)
                    {
                        workflow.ExecutionPointers.Add(_pointerFactory.BuildChildPointer(def, pointer, childDefId, branch));
                    }
                }
            }
        }

        public void HandleStepException(WorkflowInstance workflow, WorkflowDefinition def, ExecutionPointer pointer, WorkflowStep step, Exception exception)
        {
            _eventPublisher.PublishNotification(new WorkflowError
            {
                EventTimeUtc = _datetimeProvider.UtcNow,
                Reference = workflow.Reference,
                WorkflowInstanceId = workflow.Id,
                WorkflowDefinitionId = workflow.WorkflowDefinitionId,
                Version = workflow.Version,
                ExecutionPointerId = pointer.Id,
                StepId = step.Id,
                Message = exception.Message
            });
            pointer.Status = PointerStatus.Failed;

            var 
queue = new Queue&lt;ExecutionPointer&gt;();
            queue.Enqueue(pointer);
            while (queue.Count &gt; 0)
            {
                var exceptionPointer = queue.Dequeue();
                var exceptionStep = def.Steps.FindById(exceptionPointer.StepId);
                var shouldCompensate = ShouldCompensate(workflow, def, exceptionPointer);
                var errorOption = (exceptionStep.ErrorBehavior ?? (shouldCompensate ? WorkflowErrorHandling.Compensate : def.DefaultErrorBehavior));
                foreach (var handler in _errorHandlers.Where(x =&gt; x.Type == errorOption))
                {
                    handler.Handle(workflow, def, exceptionPointer, exceptionStep, exception, queue);
                }
            }
        }

        private bool ShouldCompensate(WorkflowInstance workflow, WorkflowDefinition def, ExecutionPointer currentPointer)
        {
            var scope = new Stack&lt;string&gt;(currentPointer.Scope);
            scope.Push(currentPointer.Id);

            while (scope.Count &gt; 0)
            {
                var pointerId = scope.Pop();
                var pointer = workflow.ExecutionPointers.FindById(pointerId);
                var step = def.Steps.FindById(pointer.StepId);
                if ((step.CompensationStepId.HasValue) || (step.RevertChildrenAfterCompensation))
                    return true;
            }

            return false;
        }
    }
}</code></pre><h2 id="appfeatureprovider-%E6%A3%80%E6%9F%A5" tabindex="-1">Check AppFeatureProvider</h2><h2 id="appconsts%E6%A3%80%E6%9F%A5" tabindex="-1">Check AppConsts</h2><h2 id="%E5%AF%B9%E6%AF%94web.core-%E4%B8%8Bcontrollers%E6%96%87%E4%BB%B6%E5%A4%B9%E4%B8%8B%E5%86%85%E5%AE%B9" tabindex="-1">Compare the contents of the Controllers folder under Web.Core</h2><p>Especially <code>FileController</code>.</p><h2 id="%E6%97%B6%E5%8C%BA%E8%AE%BE%E7%BD%AE" tabindex="-1">Time zone setting</h2><blockquote><p>CoreModule.cs</p></blockquote><pre><code class="language-csharp">public class CoreModule : AbpModule
{
    public override void 
PreInitialize()
    {
        Clock.Provider = ClockProviders.Utc;
    }
}</code></pre><h2 id="%E6%B3%A8%E6%84%8F2.2%E5%8D%87%E7%BA%A7%E5%AF%BC%E8%87%B4%E7%9A%84linq%E8%AE%A1%E7%AE%97%E6%96%B9%E5%BC%8F%E5%87%BA%E9%94%99" tabindex="-1">Beware of LINQ evaluation changes caused by upgrading from EF Core 2.2</h2><p>Since EF Core 3.x, LINQ queries are no longer evaluated on the client:<br /><a href="https://learn.microsoft.com/zh-cn/ef/core/what-is-new/ef-core-3.x/breaking-changes#linq-queries-are-no-longer-evaluated-on-the-client" target="_blank">https://learn.microsoft.com/zh-cn/ef/core/what-is-new/ef-core-3.x/breaking-changes#linq-queries-are-no-longer-evaluated-on-the-client</a><br /><img src="/upload/2023/05/image.png" alt="image" /></p><ol><li>Compare the files under the Application.Authorization folder</li><li>Compare the files under the Application.Organizations folder</li></ol><h2 id="%E6%A3%80%E6%9F%A5json%E8%BD%AC%E6%8D%A2%E5%BA%93" tabindex="-1">Check the JSON conversion library</h2><p>When moving to <code>System.Text.Json</code>, note that <code>JsonElement.GetString()</code> takes no arguments; a property must be read with <code>GetProperty</code> first:</p><pre><code class="language-csharp">using var jsonDoc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
var result = jsonDoc.RootElement;
var errorCode = result.GetProperty(&quot;errcode&quot;).GetString();

// The same applies to OAuthTokenResponse, whose payload is a JsonDocument:
OAuthTokenResponse tokens;
var code = tokens.Response.RootElement.GetProperty(&quot;code&quot;).GetString();</code></pre><h2 id="swagger%E9%94%99%E8%AF%AF%E5%A4%84%E7%90%86" tabindex="-1">Swagger error handling</h2><pre><code class="language-csharp">services.AddSwaggerGen(options =&gt;
{
    options.SwaggerDoc(&quot;v1&quot;, new Microsoft.OpenApi.Models.OpenApiInfo() { });
    options.AddSecurityDefinition(&quot;Bearer&quot;, new OpenApiSecurityScheme
    {
        Name = &quot;Authorization&quot;,
        Type = SecuritySchemeType.ApiKey,
        Scheme = &quot;Bearer&quot;,
        BearerFormat = &quot;JWT&quot;,
        In = ParameterLocation.Header,
        Description = &quot;JWT token Bearer&quot;
    });

    //Resolve conflicting schemaIds - yue.fei 20190723.
    //options.CustomSchemaIds(x =&gt; x.FullName);
    options.ResolveConflictingActions(apiDescriptions =&gt; apiDescriptions.First());
    IncludeXmlComments(options);
});

private void IncludeXmlComments(SwaggerGenOptions options)
{
    var xmlFiles = System.IO.Directory.GetFiles(AppContext.BaseDirectory, &quot;*.xml&quot;);
    foreach (var file in xmlFiles)
    {
        options.IncludeXmlComments(file);
    }
}</code></pre>]]>
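<![CDATA[<p>The EF Core 3.x LINQ change above can be illustrated with a minimal sketch, following the pattern in the linked breaking-changes page. The <code>db.Blogs</code> DbSet and the <code>StandardizeUrl</code> helper are hypothetical names for illustration, not code from this project:</p>

```csharp
// StandardizeUrl is a local C# method, so it cannot be translated to SQL.
// EF Core 2.2 quietly evaluated it on the client (with a warning);
// EF Core 3.x throws InvalidOperationException when the query executes.
var blogs = db.Blogs
    .Where(b => StandardizeUrl(b.Url).Contains("dotnet"))
    .ToList();

// Fix: run the translatable part on the server, then switch to
// client-side evaluation explicitly before the client-only part.
var blogs2 = db.Blogs
    .AsEnumerable()
    .Where(b => StandardizeUrl(b.Url).Contains("dotnet"))
    .ToList();
```

<p>Because the failure only surfaces at query execution time, every query mixing SQL-translatable and client-only expressions needs to be exercised after the upgrade.</p>]]>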
                    </description>
                    <pubDate>Sun, 30 Apr 2023 14:54:41 EDT</pubDate>
                </item>
                <item>
                    <title>
                        <![CDATA[Automatic daily MySQL database backups]]>
                    </title>
                    <link>https://wangyou233.wang/archives/151</link>
                    <description>
                            <![CDATA[<blockquote><p>backup.sh</p></blockquote><pre><code class="language-bash">#!/bin/sh
# Back up every database name passed on the command line
if [ $# -lt 1 ]; then
  echo &quot;please provide database name&quot;
  exit 1
fi

# Backup directory and MySQL credentials
backupdir=~/works/db
username=root
pwd=root
timestamp=&#96;date +%Y%m%d_%H%M%S_%Z&#96;
today=&#96;date +%Y%m%d&#96;

# Create today&#39;s folder if it does not exist yet
if [ ! -d $backupdir/daily/$today ]; then
  mkdir -p $backupdir/daily/$today
fi

# Remove compressed backups older than 7 days
find $backupdir/daily/ -name &quot;*.gz&quot; -mtime +7 -exec rm {} \;

# Dump and compress each database
for dbname in &quot;$@&quot; ; do
  mysqldump -u $username -p$pwd $dbname | gzip &gt; $backupdir/daily/$today/$dbname-$timestamp.sql.gz
done</code></pre><blockquote><p>Grant execute permission</p></blockquote><pre><code class="language-bash">sudo chmod +x backup.sh</code></pre><blockquote><p>Add a cron job</p></blockquote><pre><code class="language-bash"># Run the database backup at 02:00 every day
0 2 * * * ~/works/backupdb.sh databaseName</code></pre>]]>
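<![CDATA[<p>The 7-day retention rule in the script can be exercised in isolation. A minimal sketch using a throw-away scratch directory and GNU <code>touch -d</code> to fake file ages (the directory and file names here are assumptions for the demo, not the script's real paths):</p>

```shell
#!/bin/sh
# Exercise the cleanup rule from backup.sh on a scratch directory.
demo=$(mktemp -d)
mkdir -p "$demo/daily"

# Fake one stale backup (10 days old) and one fresh backup.
touch -d "10 days ago" "$demo/daily/old-backup.sql.gz"
touch "$demo/daily/new-backup.sql.gz"

# Same pattern as the script: delete *.gz files older than 7 days.
find "$demo/daily/" -name "*.gz" -mtime +7 -exec rm {} \;

ls "$demo/daily"    # only new-backup.sql.gz remains
rm -rf "$demo"
```

<p>Note that <code>-mtime +7</code> counts whole 24-hour periods, so a file becomes eligible for deletion only once it is more than 7 full days old.</p>]]>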
                    </description>
                    <pubDate>Wed, 01 Mar 2023 23:12:48 EST</pubDate>
                </item>
    </channel>
</rss>