## Preface

A collection of setup guides for commonly used services, targeting Ubuntu 16.04.
## System Initialization

A newly purchased ECS instance needs initial system setup:
```shell
$ sudo apt update
$ sudo apt dist-upgrade
$ sudo apt autoremove
$ sudo apt clean

$ cat /etc/hosts
172.16.0.192 kftest-config01
$ cat /etc/hostname
pg_1
$ reboot

$ sudo fdisk -l
Disk /dev/vdb: 1000 GiB, 1073741824000 bytes, 2097152000 sectors
$ sudo fdisk -u /dev/vdb
Command (m for help): n
...   # accept the defaults by pressing Enter
Command (m for help): w
$ sudo fdisk -lu /dev/vdb
Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1        2048 2097151999 2097149952 1000G 83 Linux
$ sudo mkfs.ext4 /dev/vdb1
$ sudo cp /etc/fstab /etc/fstab.bak
$ echo '/dev/vdb1 /data ext4 defaults 0 0' | sudo tee -a /etc/fstab
$ sudo mkdir /data
$ sudo mount /dev/vdb1 /data
$ df -h
...
/dev/vdb1       985G   72M  935G   1% /data
```
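A quick sanity check on the numbers above: fdisk reports sizes in 512-byte sectors, and because the partition starts at sector 2048 it is 1 MiB smaller than the full 1000 GiB disk, so integer arithmetic lands on 999 GiB:

```shell
# fdisk reported the new partition as 2097149952 sectors of 512 bytes each.
# The partition starts at sector 2048, so it is 1 MiB (2048 * 512 bytes)
# short of the 1000 GiB disk; integer division therefore yields 999.
SECTORS=2097149952
BYTES=$((SECTORS * 512))
GIB=$((BYTES / 1024 / 1024 / 1024))
echo "$GIB"   # 999
```

If the computed size is wildly off from what `fdisk -l` shows, the partition table was probably written incorrectly.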
## PostgreSQL

### Install PostgreSQL

```shell
$ sudo apt install ca-certificates
$ RELEASE=$(lsb_release -cs)
$ echo "deb https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/apt/ ${RELEASE}-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
$ wget --quiet -O - https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/apt/ACCC4CF8.asc | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install postgresql-9.6
```
### Edit the configuration files

```shell
$ sudo vim /etc/postgresql/9.6/main/postgresql.conf
listen_addresses = '*'
max_connections = 1000
logging_collector = on
data_directory = '/data/postgresql'

$ sudo vim /etc/postgresql/9.6/main/pg_hba.conf
host    all    all    0.0.0.0/0    md5

$ sudo service postgresql restart
```
### Move the data directory

See: https://www.cnblogs.com/easonjim/p/9052836.html
### Change the password of the default postgres user

```shell
$ sudo -u postgres psql
postgres=# \password postgres
postgres=# \q
$ exit
```
### Set up a cluster (optional)

| Host | IP |
| --- | --- |
| Master node | 10.10.10.10 |
| Slave node | 10.10.10.9 |

After installing PostgreSQL on both the master and slave nodes as described above, set up the cluster.
**Master node:**

Edit the configuration:

```shell
$ sudo vi /etc/postgresql/9.6/main/postgresql.conf
listen_addresses = '*'
wal_level = hot_standby
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql/9.6/archive/%f && cp %p /var/lib/postgresql/9.6/archive/%f'
max_wal_senders = 16
wal_keep_segments = 100
hot_standby = on
logging_collector = on
## More options: https://www.postgresql.org/docs/current/static/runtime-config.html

$ sudo vi /etc/postgresql/9.6/main/pg_hba.conf
host    all            all        10.0.0.0/8    md5
host    replication    repuser    10.0.0.0/8    md5
## More options: https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html

$ sudo -u postgres mkdir /var/lib/postgresql/9.6/archive
$ sudo chmod 0700 /var/lib/postgresql/9.6/archive
$ sudo service postgresql restart
```
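The `10.0.0.0/8` entries in pg_hba.conf admit any client whose address falls inside that network, which covers both nodes here. For octet-aligned prefixes like /8, /16, or /24, a plain string-prefix test is enough to check an address by hand — a small sketch (the helper name is made up for illustration):

```shell
# Check whether an address falls inside 10.0.0.0/8.
# A /8 boundary is octet-aligned, so a literal "10." prefix test suffices;
# non-aligned masks would need real bit arithmetic.
in_10_8() {
  case "$1" in
    10.*) echo yes ;;
    *)    echo no ;;
  esac
}

in_10_8 10.10.10.9     # the slave node -> yes
in_10_8 192.168.0.1    # outside the range -> no
```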
Create the replication account `repuser`:

```shell
$ sudo -u postgres createuser --replication repuser
$ sudo -u postgres psql
postgres=# \password repuser
Enter new password: <password>
postgres=# \q
```
**Slave node:**

Stop the service first:

```shell
$ sudo service postgresql stop
```
Import the data from the master node (passwordless login from the postgres account to the `repuser` role):

```shell
$ sudo -u postgres vi /var/lib/postgresql/.pgpass
10.10.10.10:5432:*:repuser:<password>
127.0.0.1:5432:*:repuser:<password>

$ sudo chmod 0600 /var/lib/postgresql/.pgpass
$ sudo mv /var/lib/postgresql/9.6/main /var/lib/postgresql/9.6/main.bak
$ sudo -u postgres pg_basebackup -D /var/lib/postgresql/9.6/main -F p -X stream -v -R -h 10.10.10.10 -p 5432 -U repuser
```
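Each `.pgpass` line carries five colon-separated fields — `hostname:port:database:username:password` — where a `*` wildcard is allowed in any of the first four; libpq ignores the whole file unless its mode is 0600, which is why the `chmod` above matters. A small sketch pulling a line apart with awk (the password here is a dummy):

```shell
# .pgpass format: hostname:port:database:username:password
LINE='10.10.10.10:5432:*:repuser:secret'   # dummy password for illustration

# Extract the host (1st) and username (4th) fields
HOST_FIELD=$(echo "$LINE" | awk -F: '{print $1}')
USER_FIELD=$(echo "$LINE" | awk -F: '{print $4}')
echo "$HOST_FIELD"   # 10.10.10.10
echo "$USER_FIELD"   # repuser
```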
Edit the configuration:

```shell
$ sudo vi /var/lib/postgresql/9.6/main/recovery.conf
standby_mode = 'on'
primary_conninfo = 'user=repuser host=10.10.10.10 port=5432'
trigger_file = 'failover.now'

$ sudo vi /etc/postgresql/9.6/main/postgresql.conf
hot_standby = on
```
Start and check the service:

```shell
$ sudo service postgresql start
$ sudo service postgresql status
...
Active: active (exited)
$ sudo -u postgres psql
psql (9.6.12)
...
```
### Test the cluster

Run inserts, updates, and deletes on the master node, then verify that the slave node replicates each operation from the master.
### Common commands

```shell
$ sudo service postgresql start
$ sudo service postgresql status
$ sudo service postgresql restart
```
👉 Common PostgreSQL database commands
## Redis

### Install Redis (standalone)

```shell
$ sudo apt-get install redis-server
$ sudo vim /etc/redis/redis.conf
protected-mode no

$ sudo systemctl restart redis-server
```

Note: only disable `protected-mode` when Redis is not reachable from the public internet, or set a password with `requirepass` (see the FAQ below).
### Install Redis (cluster)

| Host | IP | redis-server | sentinel |
| --- | --- | --- | --- |
| node01 | 10.10.10.5 | master | √ |
| node02 | 10.10.10.4 | slave | √ |
| node03 | 10.10.10.6 | slave | √ |
### Install redis-server

```shell
node01:
$ sudo apt-get install redis-server
$ sudo vi /etc/redis/redis.conf
bind 10.10.10.5
$ sudo service redis-server restart

node02:
$ sudo apt-get install redis-server
$ sudo vi /etc/redis/redis.conf
bind 10.10.10.4
slaveof 10.10.10.5 6379
$ sudo service redis-server restart

node03: same as node02 (with its own bind address)
```
### Test master-slave replication

```shell
node01:
$ redis-cli -h 10.10.10.5 -p 6379
10.10.10.5:6379> info
...
role:master
connected_slaves:2
slave0:ip=10.10.10.4,port=6379,state=online,offset=99,lag=0
slave1:ip=10.10.10.6,port=6379,state=online,offset=99,lag=1
master_repl_offset:99
...
10.10.10.5:6379> set testkey testvalue
OK
10.10.10.5:6379> get testkey
"testvalue"

node02:
$ redis-cli -h 10.10.10.4 -p 6379
10.10.10.4:6379> info
...
role:slave
master_host:10.10.10.5
master_port:6379
master_link_status:up
...
10.10.10.4:6379> get testkey
"testvalue"
```
### Configure Sentinel (optional)

A robust Redis Sentinel deployment should run at least three Sentinel instances, placed on different machines, ideally even in different physical regions.
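The reason at least three instances are needed: Sentinel only authorizes a failover when a majority of the running Sentinels agree. With three Sentinels, one can fail and a majority of two still exists; with only two, losing either one blocks failover entirely. The majority is n/2 + 1 in integer arithmetic:

```shell
# Majority of n Sentinel instances required to authorize a failover
majority() {
  echo $(( $1 / 2 + 1 ))
}

majority 3   # 2 -> tolerates one Sentinel failure
majority 2   # 2 -> no tolerance: losing either instance blocks failover
majority 5   # 3 -> tolerates two failures
```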
```shell
$ sudo wget http://download.redis.io/redis-stable/sentinel.conf -O /etc/redis/sentinel.conf
$ sudo chown redis:redis /etc/redis/sentinel.conf
$ sudo vi /etc/redis/sentinel.conf
sentinel monitor mymaster 10.10.10.5 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000

$ sudo vi /etc/redis/sentinel.service
[Unit]
Documentation=http://redis.io/topics/sentinel

[Service]
ExecStart=/usr/bin/redis-server /etc/redis/sentinel.conf --sentinel
User=redis
Group=redis

[Install]
WantedBy=multi-user.target

$ sudo ln -s /etc/redis/sentinel.service /lib/systemd/system/sentinel.service
$ sudo systemctl enable sentinel.service
$ sudo service sentinel start
```

Configure Sentinel on node02 and node03 the same way as node01; finish all nodes before moving on to the next step.
Once Sentinel is set up, both `redis.conf` and `sentinel.conf` are taken over by Sentinel: whenever Sentinel observes a change of master, it rewrites `sentinel.conf` and `redis.conf` accordingly.
### Test Sentinel monitoring, notification, and automatic failover

```shell
node01, node02, node03:
$ redis-cli -h 10.10.10.5 -p 26379
10.10.10.5:26379> info
redis_version:3.0.6
...
config_file:/etc/redis/sentinel.conf
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
master0:name=mymaster,status=ok,address=10.10.10.5:6379,slaves=2,sentinels=1

$ redis-cli -h 10.10.10.4 -p 26379
10.10.10.4:26379> sentinel master mymaster
 1) "name"
 2) "mymaster"
 3) "ip"
 4) "10.10.10.5"
 5) "port"
 6) "6379"
...

node01:
$ systemctl stop redis-server.service

node02:
$ redis-cli -h 10.10.10.4 -p 26379
10.10.10.4:26379> info
...
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
master0:name=mymaster,status=ok,address=10.10.10.6:6379,slaves=2,sentinels=3

$ redis-cli -h 10.10.10.6 -p 6379
10.10.10.6:6379> info
role:master
connected_slaves:1
slave0:ip=10.10.10.4,port=6379,state=online,offset=19874,lag=0
master_repl_offset:19874
...

node01:
$ systemctl start redis-server
$ redis-cli -h 10.10.10.5 -p 6379
10.10.10.5:6379> info
...
role:slave
master_host:10.10.10.6
master_port:6379
master_link_status:up
...
$ redis-cli -h 10.10.10.5 -p 26379
10.10.10.5:26379> info
...
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
master0:name=mymaster,status=ok,address=10.10.10.6:6379,slaves=2,sentinels=3
```
### Connect clients through Sentinel

With Sentinel configured, clients connect differently. Taking Redisson as an example: add the following properties, remove the standalone-mode `spring.redis.host` setting, and switch the port to the Sentinel port:

```
spring.redis.sentinel.master=mymaster
spring.redis.sentinel.nodes=10.10.10.4:26379,10.10.10.5:26379,10.10.10.6:26379
```
The dependency to include:

```
compile "org.redisson:redisson-spring-boot-starter:3.9.1"
```
The corresponding configuration class:

```
org.springframework.boot.autoconfigure.data.redis.RedisProperties.Sentinel
```
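`spring.redis.sentinel.nodes` is a comma-separated list of `host:port` pairs pointing at the Sentinel port (26379), not the Redis data port. A small sketch that splits the list, e.g. as the basis for a health-check script (the variable names are illustrative):

```shell
NODES='10.10.10.4:26379,10.10.10.5:26379,10.10.10.6:26379'

# Split on commas: one host:port pair per line
echo "$NODES" | tr ',' '\n'

# Count the entries; should match the three Sentinels in the table above
COUNT=$(echo "$NODES" | tr ',' '\n' | grep -c .)
echo "$COUNT"   # 3
```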
### Common commands

```shell
$ sudo systemctl start redis-server
$ sudo systemctl enable redis-server
$ sudo systemctl restart redis-server
$ sudo systemctl stop redis-server
```
### FAQ

Sometimes Redis cannot be shut down or restarted through the service manager. In that case, force a shutdown from the redis-cli command line:

```shell
$ redis-cli -h 10.10.10.5 -p 6379
10.10.10.5:6379> shutdown nosave
```
**Redis is configured to save RDB snapshots, but is currently not able to persist on disk.**

```shell
$ sudo vim /etc/sysctl.conf
vm.overcommit_memory=1

$ sudo sysctl -p /etc/sysctl.conf
```
If the problem persists after this change, check whether Redis's dump file configuration has been altered:

```shell
$ redis-cli -h 10.10.10.5
10.10.10.5:6379> CONFIG GET dbfilename
1) "dbfilename"
2) ".rdb"
10.10.10.5:6379> CONFIG GET dir
1) "dir"
2) "/var/spool/cron"
```
If you did not change these settings yourself, suspect that they were tampered with by an attacker:

- Check whether the Redis port is open to the public internet; if so, close it immediately
- Set a Redis access password
- Restore the default Redis configuration

```shell
$ sudo vim /etc/redis/redis.conf
dbfilename "dump.rdb"
dir "/var/lib/redis"

$ sudo service redis-server restart
```

Apply this change and restart on node01, node02, and node03.
## Consul

### Install Consul (standalone)

```shell
$ sudo mkdir -p /data/consul/{current/{bin,etc},data}
$ sudo wget https://releases.hashicorp.com/consul/1.5.3/consul_1.5.3_linux_amd64.zip -O /data/consul/consul_1.5.3_linux_amd64.zip
$ sudo apt-get install unzip
$ sudo unzip /data/consul/consul_1.5.3_linux_amd64.zip -d /data/consul/current/bin
$ sudo vi /data/consul/current/etc/consul.json
{
  "bootstrap": true,
  "datacenter": "test-datacenter",
  "data_dir": "/data/consul/data",
  "log_level": "INFO",
  "server": true,
  "client_addr": "0.0.0.0",
  "ui": true,
  "start_join": ["ip:8301"],
  "enable_syslog": true
}
$ sudo ln -s /data/consul/current/etc /data/consul/etc
$ sudo vi /etc/systemd/system/consul.service
[Unit]
Description=consul service

[Service]
ExecStart=/data/consul/current/bin/consul agent -bind={ip} -config-file /data/consul/etc/consul.json
User=root

[Install]
WantedBy=multi-user.target

$ sudo systemctl enable consul.service
$ sudo systemctl start consul.service
```

(`-config-file` is used here because the path points at a single JSON file; `-config-dir` expects a directory.)
### Install Consul (cluster)

| Host | IP |
| --- | --- |
| node01 | 10.10.10.5 |
| node02 | 10.10.10.4 |
| node03 | 10.10.10.6 |
On node01, node02, and node03 (set `client_addr` to each node's own IP):

```shell
$ sudo mkdir -p /data/consul/{current/{bin,etc},data}
$ sudo wget https://releases.hashicorp.com/consul/1.5.3/consul_1.5.3_linux_amd64.zip -O /data/consul/consul_1.5.3_linux_amd64.zip
$ sudo apt-get install unzip
$ sudo unzip /data/consul/consul_1.5.3_linux_amd64.zip -d /data/consul/current/bin
$ sudo vi /data/consul/current/etc/consul.json
{
  "datacenter": "roc-datacenter",
  "data_dir": "/data/consul/data",
  "log_level": "INFO",
  "server": true,
  "bootstrap_expect": 3,
  "client_addr": "10.10.10.4",
  "ui": true,
  "start_join": ["10.10.10.4:8301", "10.10.10.5:8301", "10.10.10.6:8301"],
  "enable_syslog": true
}
$ sudo ln -s /data/consul/current/etc /data/consul/etc
$ sudo vi /etc/systemd/system/consul.service
[Unit]
Description=consul service

[Service]
ExecStart=/data/consul/current/bin/consul agent -config-file /data/consul/etc/consul.json
User=root

[Install]
WantedBy=multi-user.target

$ sudo systemctl enable consul.service
$ sudo systemctl start consul.service
```
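The agent refuses to start if consul.json is not valid JSON — and JSON (unlike HCL) allows neither comments nor trailing commas. A quick way to validate the file before starting the agent, sketched here with Python's stdlib `json.tool` (present on stock Ubuntu 16.04); Consul itself also ships a `consul validate` subcommand:

```shell
# Write a consul.json fragment to a scratch path and validate it
cat > /tmp/consul-check.json <<'EOF'
{
  "datacenter": "roc-datacenter",
  "data_dir": "/data/consul/data",
  "server": true,
  "bootstrap_expect": 3,
  "start_join": ["10.10.10.4:8301", "10.10.10.5:8301", "10.10.10.6:8301"]
}
EOF

# json.tool exits non-zero on a parse error
python3 -m json.tool /tmp/consul-check.json > /dev/null && echo "valid JSON"
```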
Ports that need to be open: 8300, 8301, 8500. If the network is blocked, child nodes cannot join the cluster and you may see:

```
failed to sync remote state: No cluster leader
```

No leader can be elected because the nodes cannot communicate with each other; when connectivity is fine, the nodes automatically elect a leader at startup.
### Common commands

```shell
$ sudo systemctl start consul.service
$ sudo systemctl stop consul.service
$ sudo systemctl restart consul.service
```
## Nginx

### Install Nginx

```shell
$ echo -e "deb http://nginx.org/packages/ubuntu/ $(lsb_release -cs) nginx\ndeb-src http://nginx.org/packages/ubuntu/ $(lsb_release -cs) nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
$ wget -O- http://nginx.org/keys/nginx_signing.key | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install nginx
```
### Common commands

```shell
$ sudo service nginx start
$ sudo service nginx stop
$ sudo service nginx restart
$ sudo service nginx reload
```
## Cassandra Cluster

| Host | IP |
| --- | --- |
| cassandra-1 | 192.168.0.1 |
| cassandra-2 | 192.168.0.2 |
### Install Cassandra

```shell
$ echo "deb http://www.apache.org/dist/cassandra/debian 39x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
$ curl https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
$ sudo apt update
$ sudo apt -y install cassandra
$ sudo apt install openjdk-8-jdk-headless
```
### Edit the configuration files

```shell
$ sudo vi /etc/cassandra/cassandra.yaml
seed_provider:
    - seeds: "192.168.0.1,192.168.0.2"
concurrent_writes: 64
concurrent_counter_writes: 64
concurrent_materialized_view_writes: 64
compaction_throughput_mb_per_sec: 128
file_cache_size_in_mb: 1024
buffer_pool_use_heap_if_exhausted: true
disk_optimization_strategy: spinning
listen_interface: eth0
rpc_interface: eth0
enable_user_defined_functions: true
auto_bootstrap: false

$ sudo vi /etc/cassandra/jvm.options
-XX:+UseG1GC
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMillis=500
-XX:InitiatingHeapOccupancyPercent=70
-XX:ParallelGCThreads=16
-XX:ConcGCThreads=16

$ sudo vi /etc/cassandra/cassandra-env.sh
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.0.1"
if [ "x$LOCAL_JMX" = "x" ]; then
    LOCAL_JMX=no
fi
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"

$ sudo systemctl stop cassandra
```
### Migrate the data directory to the data disk (optional)

```shell
$ sudo mv /var/lib/cassandra /data/cassandra
$ sudo ln -s /data/cassandra /var/lib/cassandra
$ sudo systemctl start cassandra
```
Repeat the steps above on the other machines in the cluster, adjusting the IPs accordingly.
## Zookeeper Cluster

| Host | IP |
| --- | --- |
| zk-01 | 192.168.0.1 |
| zk-02 | 192.168.0.2 |
| zk-03 | 192.168.0.3 |
### Install Zookeeper

```shell
$ sudo apt install zookeeperd
```
### Edit the configuration files

```shell
$ sudo vim /etc/zookeeper/conf/zoo.cfg
server.1=192.168.0.1:2888:3888
server.2=192.168.0.2:2888:3888
server.3=192.168.0.3:2888:3888

$ sudo vim /etc/zookeeper/conf/myid
1

$ sudo systemctl restart zookeeper
```
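Each server's `/etc/zookeeper/conf/myid` must contain exactly the number N from that server's own `server.N` line in zoo.cfg — a mismatch is a common cause of quorum failures. A sketch that derives the id from zoo.cfg for a given IP (paths and variable names are illustrative):

```shell
# Reproduce the zoo.cfg server list from the configuration above
cat > /tmp/zoo-check.cfg <<'EOF'
server.1=192.168.0.1:2888:3888
server.2=192.168.0.2:2888:3888
server.3=192.168.0.3:2888:3888
EOF

# Derive this machine's myid from its own IP (here: zk-02)
MYIP=192.168.0.2
MYID=$(sed -n "s/^server\.\([0-9]*\)=$MYIP:.*/\1/p" /tmp/zoo-check.cfg)
echo "$MYID"   # 2
```

So zk-01 writes 1 into myid, zk-02 writes 2, and zk-03 writes 3.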
### Install zkui (optional web UI)

```shell
$ cd /data && sudo wget https://github.com/zifangsky/zkui/releases/download/v2.0/zkui-2.0.zip
$ sudo unzip zkui-2.0.zip
# if the archive extracts to a versioned directory, rename it to /data/zkui
$ sudo vi /data/zkui/config.cfg
zkServer=192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181
userSet = {"users": [{"username":"<username>","password":"<password>","role":"ADMIN"},{"username":"appconfig","password":"appconfig","role":"USER"}]}

$ cd /data/zkui && sudo bash start.sh
```
Repeat the steps above on the other machines in the cluster.
## Kafka Cluster

| Host | IP |
| --- | --- |
| zk-01 | 192.168.0.1 |
| zk-02 | 192.168.0.2 |
| zk-03 | 192.168.0.3 |
### Install Kafka

```shell
$ sudo mkdir /data/kafka && cd ~
$ wget "http://www-eu.apache.org/dist/kafka/1.0.1/kafka_2.12-1.0.1.tgz"
$ curl http://kafka.apache.org/KEYS | gpg --import
$ wget https://dist.apache.org/repos/dist/release/kafka/1.0.1/kafka_2.12-1.0.1.tgz.asc
$ gpg --verify kafka_2.12-1.0.1.tgz.asc kafka_2.12-1.0.1.tgz
$ sudo tar -xvzf kafka_2.12-1.0.1.tgz --directory /data/kafka --strip-components 1
$ sudo rm -rf kafka_2.12-1.0.1.tgz kafka_2.12-1.0.1.tgz.asc
```
### Edit the configuration files

```shell
$ sudo mkdir /data/kafka-logs
$ sudo cp /data/kafka/config/server.properties{,.bak}
$ sudo vim /data/kafka/config/server.properties
broker.id=0
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<ip>:9092
delete.topic.enable=true
leader.imbalance.check.interval.seconds=5
leader.imbalance.per.broker.percentage=1
log.dirs=/data/kafka-logs
offsets.topic.replication.factor=3
log.retention.hours=72
log.segment.bytes=1073741824
zookeeper.connect=192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181

$ sudo vim /data/kafka/bin/kafka-server-start.sh
export JMX_PORT=12345
```
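`broker.id` must be unique per broker and `advertised.listeners` must carry each broker's own reachable address; copying one server.properties to all three machines unchanged is a classic mistake. A small sketch that generates the per-broker lines for the hosts in the table above:

```shell
# Generate the per-broker settings for the three brokers
ID=0
for IP in 192.168.0.1 192.168.0.2 192.168.0.3; do
  echo "# --- broker on $IP ---"
  echo "broker.id=$ID"
  echo "advertised.listeners=PLAINTEXT://$IP:9092"
  ID=$((ID + 1))
done
```

The output can be pasted into each machine's server.properties in place of the `broker.id` and `advertised.listeners` lines shown above.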
### Register as a systemd service

```shell
$ sudo adduser --system --no-create-home --disabled-password --disabled-login kafka
$ sudo chown -R kafka:nogroup /data/kafka
$ sudo chown -R kafka:nogroup /data/kafka-logs
$ sudo vim /etc/systemd/system/kafka.service
[Unit]
Description=High-available, distributed message broker
After=network.target

[Service]
User=kafka
ExecStart=/data/kafka/bin/kafka-server-start.sh /data/kafka/config/server.properties

[Install]
WantedBy=multi-user.target

$ sudo systemctl enable kafka.service
$ sudo systemctl start kafka.service
```
### Test Kafka (optional)

```shell
$ /data/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
$ /data/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181
$ /data/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> Hello World
$ /data/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
Hello World
```
### Install kafka-manager (optional)

```shell
$ cd /data && sudo wget https://github.com/yahoo/kafka-manager/archive/1.3.3.17.zip -O kafka-manager-1.3.3.17.zip
$ sudo unzip kafka-manager-1.3.3.17.zip
$ sudo mv kafka-manager-1.3.3.17 kafka-manager
$ sudo chown -R kafka:nogroup /data/kafka-manager
$ sudo vim /data/kafka-manager/conf/application.conf
kafka-manager.zkhosts="192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181"
basicAuthentication.enabled=true
basicAuthentication.username="<username>"
basicAuthentication.password="<password>"

$ sudo vim /etc/systemd/system/kafka-manager.service
[Unit]
Description=High-available, distributed message broker manager
After=network.target

[Service]
User=kafka
ExecStart=/data/kafka-manager/bin/kafka-manager

[Install]
WantedBy=multi-user.target

$ sudo systemctl enable kafka-manager.service
$ sudo systemctl start kafka-manager.service
```
## MySQL

### Install MySQL

```shell
$ sudo apt-get update
$ sudo apt-get install mysql-server
```
During installation you will be prompted to create a root password. Be sure to remember it.
### Configure MySQL

Run the security script:

```shell
$ mysql_secure_installation
```
One prompt worth noting is `Disallow root login remotely?`: if you need to connect as root from a remote host, answer No.
### Verify

Next, check that the installation succeeded.

Service status:
```shell
$ systemctl status mysql.service
● mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-07-18 23:38:43 PDT; 11min ago
 Main PID: 2948 (mysqld)
    Tasks: 28
   Memory: 142.6M
      CPU: 545ms
   CGroup: /system.slice/mysql.service
           └─2948 /usr/sbin/mysqld
```
Log in and check the version:
```shell
$ mysqladmin -p -u root version
mysqladmin  Ver 8.42 Distrib 5.7.26, for Linux on x86_64
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Server version          5.7.26-0ubuntu0.16.04.1
Protocol version        10
Connection              Localhost via UNIX socket
UNIX socket             /var/run/mysqld/mysqld.sock
Uptime:                 12 min 18 sec

Threads: 1  Questions: 36  Slow queries: 0  Opens: 121  Flush tables: 1  Open tables: 40  Queries per second avg: 0.048
```
At this point, the MySQL installation is complete!
## References