mysqldbimport, as its name suggests, imports data. It is the counterpart of mysqldbexport: it loads data exported by mysqldbexport into another database server.
If an object already exists on the destination server, it is dropped first and then re-imported (see the --drop-first option).
To skip certain object types, use the --skip option with a list of objects. This is useful when you only want to restore a specific kind of object. Similarly, specify --skip-blobs to skip importing BLOB data.
The input format is specified to match the format that mysqldbexport produced:
- sql (default): SQL statements; object definitions appear as the appropriate CREATE and GRANT statements, and data as INSERT statements (or as bulk inserts when --bulk-insert is specified).
- grid: grid layout, similar to the output of the mysql command-line client
- csv: comma-separated values
- raw_csv: a plain CSV file containing comma-separated row values; the file may include a header row. The --table option is required with this format.
- tab: tab-separated values
- vertical: similar to the output of \G in the mysql command-line client
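As a concrete illustration of the raw_csv format, the sketch below builds a small input file. The file, database and table names are made up for illustration, and the import command itself is shown commented out because it needs a running MySQL server with the mysql-utilities package installed:

```shell
# A raw_csv input is just comma-separated row values, optionally with a
# header row as the first line.
cat > employees.csv <<'EOF'
id,name,dept
1,Alice,Engineering
2,Bob,Sales
EOF

# raw_csv maps onto exactly one table, so --table=<db>.<table> is required.
# (Requires a running server; shown commented for illustration.)
# mysqldbimport --server=root@localhost --format=raw_csv \
#     --table=hr.employees employees.csv
```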
Specify --no-headers when csv or tab files do not contain a column header row.
Specify --quiet to suppress all feedback messages.
By default, tables are created with the same storage engine as the original tables. Use --new-storage-engine to specify a different engine; if the destination server supports that engine, all tables will be created with it.
If the destination server does not support the storage engine used by the original tables, use --default-storage-engine to specify the engine to fall back to.
--new-storage-engine takes precedence over --default-storage-engine.
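For example, to convert all imported tables to InnoDB where possible while falling back to MyISAM when an original engine is missing, the two flags can be combined as below. This is a sketch: the file name and contents are made up, and the import command is commented out because it needs a running server:

```shell
# A placeholder export file whose table uses an engine the destination
# may or may not support.
printf 'CREATE TABLE t1 (id INT) ENGINE=MyISAM;\n' > data.sql

# --new-storage-engine wins when the destination supports that engine;
# --default-storage-engine is used only when the original engine is absent.
# (Requires a running server; shown commented for illustration.)
# mysqldbimport --server=root@localhost --import=both \
#     --new-storage-engine=InnoDB --default-storage-engine=MyISAM data.sql
```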
If GTIDs are enabled on the destination server (GTID_MODE = ON) and the data exported by mysqldbexport does not contain GTID statements, a warning is generated.
Options
Usage: mysqldbimport --server=user:pass@host:port:socket db1.csv db2.sql db3.grid

mysqldbimport - import metadata and data from files

Options:
  --version             show program's version number and exit
  --help                display a help message and exit
  --license             display program's license and exit
  --server=SERVER       connection information for the server in the form:
                        <user>[:<password>]@<host>[:<port>][:<socket>] or
                        <login-path>[:<port>][:<socket>] or
                        <config-path>[<[group]>].
  --ssl-ca=SSL_CA       The path to a file that contains a list of trusted
                        SSL CAs.
  --ssl-cert=SSL_CERT   The name of the SSL certificate file to use for
                        establishing a secure connection.
  --ssl-key=SSL_KEY     The name of the SSL key file to use for establishing
                        a secure connection.
  --character-set=CHARSET
                        sets the client character set. The default is
                        retrieved from the server variable
                        'character_set_client'.
  -f FORMAT, --format=FORMAT
                        the input file format in either sql (default), grid,
                        tab, csv, raw_csv or vertical format
  -i IMPORT_TYPE, --import=IMPORT_TYPE
                        control the import of either 'data' = only the table
                        data for the tables in the database list,
                        'definitions' = import only the definitions for the
                        objects in the database list, or 'both' = import the
                        metadata followed by the data (default: import
                        definitions)
  -d, --drop-first      drop database before importing.
  -b, --bulk-insert     use bulk insert statements for data (default: False)
  -h, --no-headers      files do not contain column headers (only applies to
                        formats: tab, csv).
  --dryrun              parse the files but do not execute the statements;
                        useful for testing the validity of the input files.
  --table=TABLE         destination table in the form: <db>.<table>.
  --skip-blobs          do not import blob data.
  --skip-rpl            do not execute replication commands.
  --skip-gtid           do not execute the GTID_PURGED statements.
  --skip=SKIP_OBJECTS   specify objects to skip in the operation in the form
                        of a comma-separated list (no spaces). Valid values =
                        tables, views, triggers, procedures, functions,
                        events, grants, data, create_db
  -v, --verbose         control how much information is displayed. e.g.,
                        -v = verbose, -vv = more verbose, -vvv = debug
  -q, --quiet           turn off all messages for quiet execution.
  --new-storage-engine=NEW_ENGINE
                        change all tables to use this storage engine if the
                        storage engine exists on the destination.
  --default-storage-engine=DEF_ENGINE
                        change all tables to use this storage engine if the
                        original storage engine does not exist on the
                        destination.
  --multiprocess=MULTIPROCESS
                        use multiprocessing, number of processes to use for
                        concurrent execution. Special values: 0 (number of
                        processes equal to the CPUs detected) and 1 (default
                        - no concurrency).
  --autocommit          use autocommit, by default autocommit is off and
                        transactions are only committed once at the end of
                        each imported file.
  --max-bulk-insert=MAX_BULK_INSERT
                        maximum bulk insert size, by default 30000.
If your system has multiple processors, imports can run concurrently. Concurrency is applied at the file level, which means only different files can be imported in parallel.
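Because concurrency works at the file level, --multiprocess only pays off when several input files are passed in one invocation. A sketch follows: the file names are made up, and the import command is commented out because it needs a running server:

```shell
# One export file per database is the typical layout, so three files can
# be handled by up to three worker processes.
for db in db1 db2 db3; do
    printf -- '-- placeholder export for %s\n' "$db" > "$db.sql"
done

# --multiprocess=0 uses as many processes as CPUs are detected;
# --multiprocess=1 (the default) disables concurrency.
# (Requires a running server; shown commented for illustration.)
# mysqldbimport --server=root@localhost --import=both \
#     --multiprocess=0 db1.sql db2.sql db3.sql
```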
Examples
Import metadata only:
shell> mysqldbimport --server=root@localhost --import=definitions \
          --format=csv data.csv
# Source on localhost: ... connected.
# Importing definitions from data.csv.
#...done.
Import data using bulk insert statements:
shell> mysqldbimport --server=root@localhost --import=data \
          --bulk-insert --format=csv data.csv
# Source on localhost: ... connected.
# Importing data from data.csv.
#...done.
Import both definitions and data, using bulk insert statements:
shell> mysqldbimport --server=root@localhost --import=both \
          --bulk-insert --format=sql data.sql
# Source on localhost: ... connected.
# Importing definitions and data from data.sql.
#...done.
Next, let's look at how to use mysqldbexport and mysqldbimport together to create a new slave.
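As a preview of that workflow, one common pattern is to pipe the export stream straight into an import on the new slave. The sketch below only writes the pipeline to a helper script; the host names, credentials and database name are made up, and it assumes mysqldbexport's --rpl=master option, which includes the replication (CHANGE MASTER) statements in the export stream:

```shell
# Write the provisioning pipeline to a helper script; review and adjust
# the connection strings before running it against real servers.
cat > make_slave.sh <<'EOF'
#!/bin/sh
# Export definitions and data plus the replication commands (--rpl=master),
# then feed the SQL stream to mysqldbimport on the new slave.
mysqldbexport --server=root@master --export=both --rpl=master mydb |
    mysqldbimport --server=root@newslave --import=both --format=sql
EOF
chmod +x make_slave.sh
```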
