Acha:累计撰写 65 篇文章,累计收到 1 条评论
搜索到 65 篇与 Acha 的结果
2022-11-21
Jenkins 迁移
Jenkins 迁移

[root@jenkins ~]# rpm -ql jenkins
/etc/init.d/jenkins
/etc/logrotate.d/jenkins
/etc/sysconfig/jenkins
/usr/bin/jenkins
/usr/lib/systemd/system/jenkins.service
/usr/sbin/rcjenkins
/usr/share/java/jenkins.war
/usr/share/jenkins
/usr/share/jenkins/migrate
/var/cache/jenkins
/var/lib/jenkins
/var/log/jenkins

备份

1、停止 jenkins
[root@jenkins ~]# systemctl stop jenkins
[root@jenkins ~]# ps -ef | grep jenkins
root 1818 1679 0 10:40 pts/0 00:00:00 grep --color=auto jenkins

2、备份
[root@jenkins ~]# grep ^JENKINS_HOME /etc/sysconfig/jenkins
JENKINS_HOME="/var/lib/jenkins"
[root@jenkins ~]# cd /var/lib
[root@jenkins lib]# tar cfz jenkins_home.tar.gz jenkins
[root@jenkins lib]# du -sh jenkins_home.tar.gz
182M jenkins_home.tar.gz
[root@jenkins lib]# mv jenkins_home.tar.gz /root/
[root@jenkins ~]# cp -a /usr/lib/systemd/system/jenkins.service .
[root@jenkins ~]# cp -a /etc/sysconfig/jenkins .
[root@jenkins ~]# tar cfz jenkins_backup.tar.gz jenkins*
[root@jenkins ~]# ll
total 553540
-rw-------. 1 root root 1456 Nov 8 09:06 anaconda-ks.cfg
-rwxr-xr-x. 1 root root 1012 Nov 8 09:14 init.sh
-rw-------. 1 root root 4034 Nov 8 09:19 jenkins
-rw-r--r--. 1 root root 93270402 Nov 2 07:55 jenkins-2.376-1.1.noarch.rpm
-rw-r--r-- 1 root root 283331868 Nov 21 10:49 jenkins_backup.tar.gz
-rw-r--r-- 1 root root 190196115 Nov 21 10:41 jenkins_home.tar.gz
-rw-r--r--. 1 root root 5627 Nov 8 13:39 jenkins.service
[root@jenkins ~]# du -sh jenkins_backup.tar.gz
271M jenkins_backup.tar.gz
[root@jenkins ~]# sz jenkins_backup.tar.gz
rz
zmodem 传输,按 Ctrl+C 取消
正在传输 jenkins_backup.tar.gz... 100% 276691 KB 17293 KB/s 00:00:16 0

还原

1、java 环境
[root@thinkpad-e490 ~]# yum install -y java-11-openjdk java-11-openjdk-devel
[root@thinkpad-e490 ~]# java --version
openjdk 11.0.17 2022-10-18 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.17.0.8-2.el7_9) (build 11.0.17+8-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.17.0.8-2.el7_9) (build 11.0.17+8-LTS, mixed mode, sharing)

2、还原
[root@thinkpad-e490 ~]# rz
rz waiting to receive.
zmodem 传输,按 Ctrl+C 取消
正在传输 jenkins_backup.tar.gz... 100% 276691 KB 39527 KB/s 00:00:07 0
[root@thinkpad-e490 ~]# tar xfz jenkins_backup.tar.gz
[root@thinkpad-e490 ~]# ll
total 553536
-rw-------. 1 root root 1542 Nov 16 17:47 anaconda-ks.cfg
-rw------- 1 root root 4034 Nov 8 09:19 jenkins
-rw-r--r-- 1 root root 93270402 Nov 2 07:55 jenkins-2.376-1.1.noarch.rpm
-rw-r--r-- 1 root root 283331868 Nov 21 10:49 jenkins_backup.tar.gz
-rw-r--r-- 1 root root 190196115 Nov 21 10:41 jenkins_home.tar.gz
-rw-r--r-- 1 root root 5627 Nov 8 13:39 jenkins.service
[root@thinkpad-e490 ~]# rpm -ivh jenkins-2.376-1.1.noarch.rpm
warning: jenkins-2.376-1.1.noarch.rpm: Header V4 RSA/SHA512 Signature, key ID 45f2c3d5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:jenkins-2.376-1.1                ################################# [100%]
[root@thinkpad-e490 ~]# mv jenkins /etc/sysconfig/jenkins
mv: overwrite ‘/etc/sysconfig/jenkins’? y
[root@thinkpad-e490 ~]# mv jenkins.service /usr/lib/systemd/system/jenkins.service
mv: overwrite ‘/usr/lib/systemd/system/jenkins.service’? y
[root@thinkpad-e490 ~]# tar xfz jenkins_home.tar.gz -C /var/lib/

3、启动
[root@thinkpad-e490 ~]# systemctl start jenkins
[root@thinkpad-e490 ~]# systemctl status jenkins
● jenkins.service - Jenkins Continuous Integration Server
   Loaded: loaded (/usr/lib/systemd/system/jenkins.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-11-21 10:57:43 CST; 1s ago
 Main PID: 1861 (java)
   CGroup: /system.slice/jenkins.service
           └─1861 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war ...
Nov 21 10:57:42 thinkpad-e490 jenkins[1861]: 2022-11-21 02:57:42.968+0000 [id=34] INFO hudson.sl...t sdk
Nov 21 10:57:42 thinkpad-e490 jenkins[1861]: 2022-11-21 02:57:42.980+0000 [id=32] INFO jenkins.I...oaded
Nov 21 10:57:42 thinkpad-e490 jenkins[1861]: 2022-11-21 02:57:42.982+0000 [id=32] INFO jenkins.I...apted
Nov 21 10:57:43 thinkpad-e490 jenkins[1861]: 2022-11-21 02:57:43.019+0000 [id=33] INFO jenkins.I... jobs
Nov 21 10:57:43 thinkpad-e490 jenkins[1861]: 2022-11-21 02:57:43.027+0000 [id=46] INFO jenkins.I...dated
Nov 21 10:57:43 thinkpad-e490 jenkins[1861]: 2022-11-21 02:57:43.059+0000 [id=64] INFO hudson.ut...erver
Nov 21 10:57:43 thinkpad-e490 jenkins[1861]: 2022-11-21 02:57:43.227+0000 [id=33] INFO o.j.main....46087
Nov 21 10:57:43 thinkpad-e490 jenkins[1861]: 2022-11-21 02:57:43.227+0000 [id=43] INFO jenkins.I...ation
Nov 21 10:57:43 thinkpad-e490 jenkins[1861]: 2022-11-21 02:57:43.283+0000 [id=24] INFO hudson.li...nning
Nov 21 10:57:43 thinkpad-e490 systemd[1]: Started Jenkins Continuous Integration Server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@thinkpad-e490 ~]# ps -ef | grep jenkins
root 1861 1 99 10:57 ? 00:00:44 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=8080
root 2025 1539 0 10:57 pts/0 00:00:00 grep --color=auto jenkins
[root@thinkpad-e490 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.5G        896M        5.2G        9.4M        1.4G        6.4G
Swap:          7.8G          0B        7.8G

4、访问
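补充一个把上面备份步骤串起来的小脚本,便于迁移前一次性打包(仅为示意,脚本名 jenkins_backup.sh 为假设,路径与原文一致:JENKINS_HOME 为 /var/lib/jenkins,备份文件放在 /root 下):

#!/usr/bin/env bash
# jenkins_backup.sh —— 备份示意脚本(假设路径与原文一致)
set -euo pipefail
systemctl stop jenkins                                    # 先停服务,保证数据一致
tar cfz /root/jenkins_home.tar.gz -C /var/lib jenkins     # 打包 JENKINS_HOME
cp -a /usr/lib/systemd/system/jenkins.service /root/      # 备份 systemd unit 文件
cp -a /etc/sysconfig/jenkins /root/                       # 备份启动参数
tar cfz /root/jenkins_backup.tar.gz -C /root \
    jenkins_home.tar.gz jenkins jenkins.service           # 汇总为一个备份包
echo "备份完成:$(du -sh /root/jenkins_backup.tar.gz)"

在新机器上按原文顺序安装 rpm、还原这三个文件并解包 jenkins_home.tar.gz 到 /var/lib/ 即可。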
2022年11月21日 · 30 阅读 · 0 评论 · 0 点赞
2022-09-08
Typecho 1.2.0 部署
Typecho 1.2.0 部署 {bilibili bvid="BV15e411u7gH" page=""/} 环境 CentOS7.9 Nginx/1.20.1 PHP 7.2.34 mysql-5.7.37 初始化系统 curl -O file.youto.club/Script/init.sh sh init.sh blog [root@blog nginx]# cat /script/init.sh #!/usr/bin/bash echo "=-=-=- Action -=-=-=" hostnamectl set-hostname $1 echo "# ==> Hostname: " `hostname` systemctl stop firewalld && systemctl disable firewalld >/dev/null 2>&1 setenforce 0 >/dev/null 2>&1 sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config echo "# ==> Stop Firewalld && Selinux" mkdir /etc/yum.repos.d/bak -p mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo >/dev/null 2>&1 curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo >/dev/null 2>&1 yum repolist | grep repolist echo "# ==>repo" yum install -y lrzsz tree wget vim net-tools unzip bash-completion git >/dev/null 2>&1 sed -i "s/\#UseDNS no/UseDNS yes/g" /etc/ssh/sshd_config echo "# ==> SSH " echo "=-=-=- Done -=-=-=" 准备软件包 mysql-5.7.37-1.el7.x86_64.rpm.tar nginx-1.20.1.tar.gz typecho.zip mkdir /{repo,software,script} # MySQl wget http://file.youto.club/Repo/mysql-5.7.37-1.el7.x86_64.rpm.tar tar xf mysql-5.7.37-1.el7.x86_64.rpm.tar -c /repo ls /repo/mysql57_2009_repo/ vim /etc/yum.repos.d/mysql_local.repo yum repolist # PHP72 yum install -y http://rpms.remirepo.net/enterprise/remi-release-7.rpm yum install -y yum-utils yum-config-manager --enable remi-php72 # typecho wget https://github.com/typecho/typecho/releases/latest/download/typecho.zip mv typecho.zip /software/ [root@blog nginx]# cat /etc/yum.repos.d/mysql_local.repo [mysql57] name=mysql57 baseurl=file:///repo/mysql57_2009_repo/ gpgcheck=0 enabled=1 MySQL yum install -y mysql-server systemctl enable mysqld systemctl start mysqld grep pass /var/log/mysqld.log mysql -uroot -p [root@blog nginx]# mysql -uroot -p mysql> alter user user() identified by 'Admin@123'; Query OK, 0 rows affected (0.00 sec) mysql> create database typecho; Query OK, 1 row affected (0.00 sec) mysql> GRANT ALL ON typecho.* TO 'typecho'@'localhost' IDENTIFIED BY 'Abc@1234'; Query OK, 0 rows affected, 1 warning (0.00 sec) PHP yum install -y php72 php72 -v yum install -y php72-php-fpm php72-php-mbstring php72-php-mysqlnd systemctl enable php72-php-fpm.service systemctl start php72-php-fpm.service netstat -lntp rpm -ql php72-php-fpm vim /etc/opt/remi/php72/php-fpm.d/www.conf [root@blog nginx]# grep nginx /etc/opt/remi/php72/php-fpm.d/www.conf user = nginx group = nginx Nginx yum install -y nginx wget https://nginx.org/download/nginx-1.20.1.tar.gz mkdir /html unzip /software/typecho.zip cd /etc/nginx/ grep -Ev "#|^$" nginx.conf.default > nginx.conf vim nginx.conf mkdir -p /var/log/blog chown -R nginx.nginx /var/log/blog nginx -t chown -R nginx.nginx /html/ systemctl start nginx # cat /etc/nginx/nginx.conf worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; root /html; index index.html index.htm index.php; if (!-e $request_filename) { rewrite ^(.*)$ /index.php$1 last; } location ~ .*\.php(\/.*)*$ { include fastcgi.conf; fastcgi_pass 127.0.0.1:9000; } access_log /html/access.log combined; } } 访问
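部署完成后,可以先放一个临时的 phpinfo 页面,确认 Nginx 已把 .php 请求正确转发给 PHP-FPM(info.php 这个文件名只是示例,验证后应删除):

echo '<?php phpinfo();' > /html/info.php
chown nginx.nginx /html/info.php
curl -I http://127.0.0.1/info.php   # 返回 HTTP/1.1 200 OK 且 Content-Type: text/html 说明 FastCGI 转发正常
rm -f /html/info.php                # 验证后删除,避免泄露环境信息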
2022年09月08日 · 105 阅读 · 0 评论 · 0 点赞
2022-07-19
OpenStack-Pike 搭建之Cinder(七)
概述 OpenStack块存储服务(cinder)为虚拟机添加持久的存储,块存储提供一个基础设施为了管理卷,以及和OpenStack计算服务交互,为实例提供卷。此服务也会激活管理卷的快照和卷类型的功能。 块存储服务通常包含下列组件: cinder-api 接受API请求,并将其路由到cinder-volume执行。 cinder-volume 与块存储服务和例如cinder-scheduler的进程进行直接交互。它也可以与这些进程通过一个消息队列进行交互。cinder-volume服务响应送到块存储服务的读写请求来维持状态。它也可以和多种存储提供者在驱动架构下进行交互。 cinder-scheduler守护进程 选择最优存储提供节点来创建卷。其与nova-scheduler组件类似。 cinder-backup daemon cinder-backup服务提供任何种类备份卷到一个备份存储提供者。就像cinder-volume服务,它与多种存储提供者在驱动架构下进行交互。 消息队列 在块存储的进程之间路由信息。 安装并配置控制节点 先决条件 1、创建数据库 root 用户连接到数据库服务器 mysql -u root -p000000 创建 cinder 数据库 CREATE DATABASE cinder; 允许 cinder 数据库的cinder用户访问权限: GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000'; GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000'; 2、获取 admin 凭证 . admin-openrc 3、创建服务证书 创建一个 cinder 用户 openstack user create --domain default --password 000000 cinder 添加 admin 角色到 cinder 用户上 openstack role add --project service --user cinder admin 创建服务实体 openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 创建服务API openstack endpoint create --region RegionOne \ volumev2 public http://controller:8776/v2/%\(project_id\)s openstack endpoint create --region RegionOne \ volumev2 internal http://controller:8776/v2/%\(project_id\)s openstack endpoint create --region RegionOne \ volumev2 admin http://controller:8776/v2/%\(project_id\)s openstack endpoint create --region RegionOne \ volumev3 public http://controller:8776/v3/%\(project_id\)s openstack endpoint create --region RegionOne \ volumev3 internal http://controller:8776/v3/%\(project_id\)s openstack endpoint create --region RegionOne \ volumev3 admin http://controller:8776/v3/%\(project_id\)s {collapse} {collapse-item label="CMD"} [root@openstack ~]# mysql -u root -p000000 Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 379 Server version: 10.1.20-MariaDB MariaDB Server Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> CREATE DATABASE cinder; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> exit Bye [root@openstack ~]# . 
admin-openrc [root@openstack ~]# openstack user create --domain default --password 000000 cinder +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 15c1c62c21f543d984563abe5c063726 | | name | cinder | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ [root@openstack ~]# openstack role add --project service --user cinder admin [root@openstack ~]# openstack service create --name cinderv2 \ > --description "OpenStack Block Storage" volumev2 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Block Storage | | enabled | True | | id | 052ab471e8ed4e5ca687bd73537935b5 | | name | cinderv2 | | type | volumev2 | +-------------+----------------------------------+ [root@openstack ~]# openstack service create --name cinderv3 \ > --description "OpenStack Block Storage" volumev3 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Block Storage | | enabled | True | | id | af52c95327614d1e9fb70286fcb552ea | | name | cinderv3 | | type | volumev3 | +-------------+----------------------------------+ [root@openstack ~]# openstack endpoint create --region RegionOne \ > volumev2 public http://controller:8776/v2/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | 2c3e428aa796442696e7f5919175c1e2 | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 052ab471e8ed4e5ca687bd73537935b5 | | service_name | cinderv2 | | service_type | volumev2 | | url | http://controller:8776/v2/%(project_id)s | +--------------+------------------------------------------+ [root@openstack ~]# openstack endpoint create --region RegionOne \ > volumev2 internal http://controller:8776/v2/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | c25695b19d1b4e33a9ea2d4c023b8732 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 052ab471e8ed4e5ca687bd73537935b5 | | service_name | cinderv2 | | service_type | volumev2 | | url | http://controller:8776/v2/%(project_id)s | +--------------+------------------------------------------+ [root@openstack ~]# openstack endpoint create --region RegionOne \ > volumev2 admin http://controller:8776/v2/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | 2928643fb2ac4bd99aa8cc795b55f7e1 | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 052ab471e8ed4e5ca687bd73537935b5 | | service_name | cinderv2 | | service_type | volumev2 | | url | http://controller:8776/v2/%(project_id)s | +--------------+------------------------------------------+ [root@openstack ~]# openstack endpoint create --region RegionOne \ > volumev3 public http://controller:8776/v3/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | a63b291e2ed4450e91ee574e4f9b4a7a | | 
interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | af52c95327614d1e9fb70286fcb552ea | | service_name | cinderv3 | | service_type | volumev3 | | url | http://controller:8776/v3/%(project_id)s | +--------------+------------------------------------------+ [root@openstack ~]# openstack endpoint create --region RegionOne \ > volumev3 internal http://controller:8776/v3/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | f47bd12c58af41298cdc7c5fe76cb8d4 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | af52c95327614d1e9fb70286fcb552ea | | service_name | cinderv3 | | service_type | volumev3 | | url | http://controller:8776/v3/%(project_id)s | +--------------+------------------------------------------+ [root@openstack ~]# openstack endpoint create --region RegionOne \ > volumev3 admin http://controller:8776/v3/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | 6dd9f8c300754f32abecf2b77e241d5d | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | af52c95327614d1e9fb70286fcb552ea | | service_name | cinderv3 | | service_type | volumev3 | | url | http://controller:8776/v3/%(project_id)s | +--------------+------------------------------------------+ {/collapse-item} {/collapse} 安装并配置组件 1、安装软件包 yum install -y openstack-cinder 2、配置 cinder.conf # sed -i.bak '/^#/d;/^$/d' /etc/cinder/cinder.conf # vim /etc/cinder/cinder.conf [database] # 配置数据库访问 connection = mysql+pymysql://cinder:000000@controller/cinder [DEFAULT] # 配置RabbitMQ 消息队列访问 transport_url = rabbit://openstack:000000@controller # 配置身份服务访问 auth_strategy = keystone # 控制器节点的管理接口 IP 地址 my_ip = 178.120.2.100 [keystone_authtoken] # 配置身份服务访问 auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = 000000 [oslo_concurrency] # 配置锁定路径 lock_path = /var/lib/cinder/tmp 3、同步数据库 # su -s /bin/sh -c "cinder-manage db sync" cinder 配置计算节点使用块设备存储 配置 nova.conf # vim /etc/nova/nova.conf [cinder] os_region_name = RegionOne 完成安装 1、重启计算API服务 systemctl restart openstack-nova-api.service 2、启动设备块存储服务并设置开机自启 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service 安装并配置存储节点 先决条件 1、安装支持工具包 yum install -y lvm2 device-mapper-persistent-data systemctl enable lvm2-lvmetad.service && systemctl start lvm2-lvmetad.service 2、创建LVM物理卷 pvcreate /dev/sdb 3、创建LVM卷组 vgcreate cinder-volumes /dev/sdb 4、添加过滤器 # vim /etc/lvm/lvm.conf filter = [ "a/vdb/", "r/.*/"] {collapse} {collapse-item label="CMD"} [root@storage_node ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sr0 11:0 1 1024M 0 rom vda 252:0 0 30G 0 disk ├─vda1 252:1 0 1G 0 part /boot └─vda2 252:2 0 29G 0 part └─centos-root 253:0 0 29G 0 lvm / vdb 252:16 0 100G 0 disk vdc 252:32 0 100G 0 disk [root@storage_node ~]# pvcreate /dev/vdb Physical volume "/dev/vdb" successfully created. 
[root@storage_node ~]# vgcreate cinder-volumes /dev/vdb Volume group "cinder-volumes" successfully create {/collapse-item} {/collapse} 安装并配置组件 1、安装软件包 yum install -y openstack-cinder targetcli python-keystone 2、配置 cinder.conf # sed -i.bak '/^#/d;/^$/d' /etc/cinder/cinder.conf # vim /etc/cinder/cinder.conf [database] # 配置数据库访问 connection = mysql+pymysql://cinder:000000@controller/cinder [DEFAULT] # RabbitMQ 消息队列访问 transport_url = rabbit://openstack:000000@controller # 配置身份服务访问 auth_strategy = keystone # 存储节点上管理网络接口的 IP 地址 my_ip = 178.120.2.192 # 启用 LVM 后端 enabled_backends = lvm # 配置镜像服务 API 的位置 glance_api_servers = http://controller:9292 [keystone_authtoken] # 配置身份服务访问 auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = 000000 [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes iscsi_protocol = iscsi iscsi_helper = lioadm [oslo_concurrency] # 配置锁定路径 lock_path = /var/lib/cinder/tmp 完成安装 启动存储卷服务,设置开机自启 systemctl enable openstack-cinder-volume.service target.service systemctl start openstack-cinder-volume.service target.service
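控制节点和存储节点都配置完成后,可以在控制节点用 admin 凭证确认各 cinder 服务是否已注册并处于 up 状态(输出因环境而异,以下注释为预期结果的示例):

. admin-openrc
openstack volume service list
# 预期能看到 cinder-scheduler(controller)和 cinder-volume(storage_node@lvm),
# 且 Status 为 enabled、State 为 up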
2022年07月19日 · 63 阅读 · 0 评论 · 0 点赞
2022-07-14
OpenStack-Pike 搭建之Dashboard(六)
Dashboard 安装和配置组件 1、安装软件包 yum install -y openstack-dashboard 2、配置 local_settings cp -a /etc/openstack-dashboard/local_settings /root/local_settings vim /etc/openstack-dashboard/local_settings # 仪表板在控制节点 OPENSTACK_HOST = "controller" # 允许主机访问仪表板 ALLOWED_HOSTS = ['*'] # 配置会话存储服务 memcached SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } # 启用身份 API 版本 3 OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST # 启用对域的支持 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True # 配置 API 版本 OPENSTACK_API_VERSIONS = { "identity": 3, "image": 2, "volume": 2, } # 配置为通过仪表板创建的用户的默认域:Default OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" # 配置为通过仪表板创建的用户的默认角色:user OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" # 如果选择网络选项 1,请禁用对第 3 层网络服务的支持 OPENSTACK_NEUTRON_NETWORK = { ... 'enable_router': False, 'enable_quotas': False, 'enable_distributed_router': False, 'enable_ha_router': False, 'enable_lb': False, 'enable_firewall': False, 'enable_vpn': False, 'enable_fip_topology_check': False, } {collapse} {collapse-item label="查看执行过程"} 配置 local_settings [root@controller ~]# cat /etc/openstack-dashboard/local_settings import os from django.utils.translation import ugettext_lazy as _ from openstack_dashboard.settings import HORIZON_CONFIG DEBUG = False WEBROOT = '/dashboard/' ALLOWED_HOSTS = ['*'] LOCAL_PATH = '/tmp' SECRET_KEY='04f3ac91f6f48932c88a' SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' OPENSTACK_HOST = "controller" OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_API_VERSIONS = { "identity": 3, "image": 2, "volume": 2, } OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" OPENSTACK_KEYSTONE_BACKEND = { 'name': 'native', 'can_edit_user': True, 'can_edit_group': True, 'can_edit_project': True, 'can_edit_domain': True, 'can_edit_role': True, } OPENSTACK_HYPERVISOR_FEATURES = { 'can_set_mount_point': False, 'can_set_password': False, 'requires_keypair': False, 'enable_quotas': True } OPENSTACK_CINDER_FEATURES = { 'enable_backup': False, } OPENSTACK_NEUTRON_NETWORK = { 'enable_router': False, 'enable_quotas': False, 'enable_distributed_router': False, 'enable_ha_router': False, 'enable_lb': False, 'enable_firewall': False, 'enable_vpn': False, 'enable_fip_topology_check': False, 'supported_vnic_types': ['*'], 'physical_networks': [], } OPENSTACK_HEAT_STACK = { 'enable_user_pass': True, } IMAGE_CUSTOM_PROPERTY_TITLES = { "architecture": _("Architecture"), "kernel_id": _("Kernel ID"), "ramdisk_id": _("Ramdisk ID"), "image_state": _("Euca2ools state"), "project_id": _("Project ID"), "image_type": _("Image Type"), } IMAGE_RESERVED_CUSTOM_PROPERTIES = [] API_RESULT_LIMIT = 1000 API_RESULT_PAGE_SIZE = 20 SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024 INSTANCE_LOG_LENGTH = 35 DROPDOWN_MAX_ITEMS = 30 TIME_ZONE = "UTC" POLICY_FILES_PATH = '/etc/openstack-dashboard' LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'console': { 'format': '%(levelname)s %(name)s %(message)s' }, 'operation': { 'format': '%(message)s' }, }, 'handlers': { 'null': { 'level': 'DEBUG', 'class': 'logging.NullHandler', }, 'console': { 'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 
'console', }, 'operation': { 'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'operation', }, }, 'loggers': { 'django.db.backends': { 'handlers': ['null'], 'propagate': False, }, 'requests': { 'handlers': ['null'], 'propagate': False, }, 'horizon': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'horizon.operation_log': { 'handlers': ['operation'], 'level': 'INFO', 'propagate': False, }, 'openstack_dashboard': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'novaclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'cinderclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'keystoneclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'glanceclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'neutronclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'heatclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'swiftclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'openstack_auth': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'nose.plugins.manager': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'django': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'iso8601': { 'handlers': ['null'], 'propagate': False, }, 'scss': { 'handlers': ['null'], 'propagate': False, }, }, } SECURITY_GROUP_RULES = { 'all_tcp': { 'name': _('All TCP'), 'ip_protocol': 'tcp', 'from_port': '1', 'to_port': '65535', }, 'all_udp': { 'name': _('All UDP'), 'ip_protocol': 'udp', 'from_port': '1', 'to_port': '65535', }, 'all_icmp': { 'name': _('All ICMP'), 'ip_protocol': 'icmp', 'from_port': '-1', 'to_port': '-1', }, 'ssh': { 'name': 'SSH', 'ip_protocol': 'tcp', 'from_port': '22', 'to_port': '22', }, 'smtp': { 'name': 'SMTP', 'ip_protocol': 'tcp', 'from_port': '25', 'to_port': '25', }, 'dns': { 'name': 'DNS', 'ip_protocol': 'tcp', 'from_port': '53', 'to_port': '53', }, 'http': { 'name': 'HTTP', 'ip_protocol': 'tcp', 'from_port': '80', 'to_port': '80', }, 'pop3': { 'name': 'POP3', 'ip_protocol': 'tcp', 'from_port': '110', 'to_port': '110', }, 'imap': { 'name': 'IMAP', 'ip_protocol': 'tcp', 'from_port': '143', 'to_port': '143', }, 'ldap': { 'name': 'LDAP', 'ip_protocol': 'tcp', 'from_port': '389', 'to_port': '389', }, 'https': { 'name': 'HTTPS', 'ip_protocol': 'tcp', 'from_port': '443', 'to_port': '443', }, 'smtps': { 'name': 'SMTPS', 'ip_protocol': 'tcp', 'from_port': '465', 'to_port': '465', }, 'imaps': { 'name': 'IMAPS', 'ip_protocol': 'tcp', 'from_port': '993', 'to_port': '993', }, 'pop3s': { 'name': 'POP3S', 'ip_protocol': 'tcp', 'from_port': '995', 'to_port': '995', }, 'ms_sql': { 'name': 'MS SQL', 'ip_protocol': 'tcp', 'from_port': '1433', 'to_port': '1433', }, 'mysql': { 'name': 'MYSQL', 'ip_protocol': 'tcp', 'from_port': '3306', 'to_port': '3306', }, 'rdp': { 'name': 'RDP', 'ip_protocol': 'tcp', 'from_port': '3389', 'to_port': '3389', }, } REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES', 'LAUNCH_INSTANCE_DEFAULTS', 'OPENSTACK_IMAGE_FORMATS', 'OPENSTACK_KEYSTONE_DEFAULT_DOMAIN', 'CREATE_IMAGE_DEFAULTS'] ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []} {/collapse-item} {/collapse} 完成安装 重启 Web服务器 和 会话存储服务 systemctl restart httpd.service memcached.service
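重启服务后,可以先在控制节点本机做一个简单的连通性检查(/dashboard/ 路径与本文 WEBROOT 配置一致),再用浏览器登录:

curl -sI http://controller/dashboard/ | head -n 1
# 正常应返回 HTTP/1.1 200 OK(或跳转到登录页的 301/302)
# 浏览器访问 http://controller/dashboard/,使用 default 域的用户登录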
2022年07月14日 · 65 阅读 · 0 评论 · 0 点赞
2022-07-14
OpenStack-Pike 搭建之Neutron(五)
Neutron 安装和配置 控制节点 前置条件 1、创建数据库并授权 使用 root 用户登录数据库 mysql -u root -p000000 创建 neutron 数据库 CREATE DATABASE neutron; neutron 用户对 neutron数据库有所有权限 GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ IDENTIFIED BY '000000'; GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ IDENTIFIED BY '000000'; 2、获取 admin 凭证 . admin-openrc 3、创建服务凭证 创建 neutron 用户 openstack user create --domain default --password 000000 neutron 将 service项目 中的 neutron用户 设置为 admin角色 openstack role add --project service --user neutron admin 创建 neutron 服务实体 openstack service create --name neutron --description "OpenStack Networking" network 4、创建 网络服务 API端点 openstack endpoint create --region RegionOne network public http://controller:9696 openstack endpoint create --region RegionOne network internal http://controller:9696 openstack endpoint create --region RegionOne network admin http://controller:9696 {collapse} {collapse-item label="查看执行过程"} 前置条件 [root@controller ~]# mysql -u root -p000000 Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 68 Server version: 10.1.20-MariaDB MariaDB Server Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> CREATE DATABASE neutron; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> exit Bye [root@controller ~]# . admin-openrc [root@controller ~]# openstack user create --domain default --password-prompt neutron User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | bd11a70055634b8996bdd7096ea91a60 | | name | neutron | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ [root@controller ~]# openstack role add --project service --user neutron admin [root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Networking | | enabled | True | | id | 3f33133eae714fa492723f3617e8705f | | name | neutron | | type | network | +-------------+----------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 4df23df9efe547ea88b5ec0e01201c4a | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 3f33133eae714fa492723f3617e8705f | | service_name | neutron | | service_type | network | | url | http://controller:9696 | +--------------+----------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 0412574e5e3f4b3ca5e7e18f753d7e80 | | interface | internal | | region | RegionOne | | region_id | RegionOne 
| | service_id | 3f33133eae714fa492723f3617e8705f | | service_name | neutron | | service_type | network | | url | http://controller:9696 | +--------------+----------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 2251b821a7484bb0a5eb65697af351f6 | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 3f33133eae714fa492723f3617e8705f | | service_name | neutron | | service_type | network | | url | http://controller:9696 | +--------------+----------------------------------+ {/collapse-item} {/collapse} 配置网络选项(Falt 网络) [ 配置参考] :https://docs.openstack.org/neutron/latest/configuration/config.html 安装组件 yum install -y openstack-neutron openstack-neutron-ml2 \ openstack-neutron-linuxbridge ebtables 配置服务组件 配置 neutron.conf # sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf # vim /etc/neutron/neutron.conf [database] # 配置数据库访问 connection = mysql+pymysql://neutron:000000@controller/neutron [DEFAULT] # 启用 ML2插件并禁用其他插件 core_plugin = ml2 service_plugins = # 配置RabbitMQ 消息队列访问 transport_url = rabbit://openstack:000000@controller # 配置身份服务访问 auth_strategy = keystone # 配置 Networking 以通知 Compute 网络拓扑更改 notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true [keystone_authtoken] # 配置身份服务访问 auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = neutron password = 000000 [nova] # 配置 Networking 以通知 Compute 网络拓扑更改 auth_url = http://controller:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = nova password = 000000 [oslo_concurrency] # 配置锁定路径 lock_path = /var/lib/neutron/tmp 配置 ML2插件 配置 ml2_conf.ini # sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini # vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] # 启用平面和 VLAN 网络 type_drivers = flat,vlan # 禁用自助服务网络 tenant_network_types = # 启用 Linux 桥接机制 mechanism_drivers = linuxbridge # 启用端口安全扩展驱动程序 extension_drivers = port_security [securitygroup] # 启用 ipset 以提高安全组规则的效率 enable_ipset = true 配置 Linux网桥代理 配置 linuxbridge_agent.ini # sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini # vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] # 将Flat网络映射到物理网络接口 physical_interface_mappings = provider:eth0 [vxlan] # 禁用 VXLAN 覆盖网络 enable_vxlan = false [securitygroup] # 启用安全组并配置 Linux 网桥 iptables 防火墙驱动程序 enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver 配置 DHCP代理 配置 dhcp_agent.ini # sed -i.bak '/^#/d;/^$/d' /etc/neutron/dhcp_agent.ini # vim /etc/neutron/dhcp_agent.ini [DEFAULT] # 配置 Linux 网桥接口驱动程序、Dnsmasq DHCP 驱动程序,并启用隔离元数据 interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true {collapse} {collapse-item label="查看执行过程"} 配置服务组件 [root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 \ > openstack-neutron-linuxbridge ebtables [root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf [root@controller ~]# vim /etc/neutron/neutron.conf [DEFAULT] # 启用 ML2插件并禁用其他插件 core_plugin = ml2 service_plugins = # 配置RabbitMQ 消息队列访问 transport_url = rabbit://openstack:000000@controller # 
配置身份服务访问 auth_strategy = keystone # 配置 Networking 以通知 Compute 网络拓扑更改 notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true [agent] [cors] [database] # 配置数据库访问 connection = mysql+pymysql://neutron:000000@controller/neutron [keystone_authtoken] # 配置身份服务访问 auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = neutron password = 000000 [matchmaker_redis] [nova] # 配置 Networking 以通知 Compute 网络拓扑更改 auth_url = http://controller:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = nova password = 000000 [oslo_concurrency] # 配置锁定路径 lock_path = /var/lib/neutron/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [quotas] [ssl] [root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini [root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini [root@controller ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini [DEFAULT] [l2pop] [ml2] # 启用平面和 VLAN 网络 type_drivers = flat,vlan # 禁用自助服务网络 tenant_network_types = # 启用 Linux 桥接机制 mechanism_drivers = linuxbridge # 启用端口安全扩展驱动程序 extension_drivers = port_security [ml2_type_flat] [ml2_type_geneve] [ml2_type_gre] [ml2_type_vlan] [ml2_type_vxlan] [securitygroup] # 启用 ipset 以提高安全组规则的效率 enable_ipset = true [root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini [root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini [root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini [DEFAULT] [agent] [linux_bridge] # 将Flat网络映射到物理网络接口 physical_interface_mappings = provider:eth0 [securitygroup] # 启用安全组并配置 Linux 网桥 iptables 防火墙驱动程序 enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver [vxlan] # 禁用 VXLAN 覆盖网络 enable_vxlan = false [root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/dhcp_agent.ini [root@controller ~]# vim /etc/neutron/dhcp_agent.ini [root@controller ~]# cat /etc/neutron/dhcp_agent.ini [DEFAULT] # 配置 Linux 网桥接口驱动程序、Dnsmasq DHCP 驱动程序,并启用隔离元数据 interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true [agent] [ovs] {/collapse-item} {/collapse} 配置元数据代理 配置 metadata_agent.ini sed -i.bak '/^#/d;/^$/d' /etc/neutron/metadata_agent.ini vim /etc/neutron/metadata_agent.ini [DEFAULT] # 配置元数据主机和共享密钥 nova_metadata_host = controller metadata_proxy_shared_secret = 000000 {collapse} {collapse-item label="查看执行过程"} 配置元数据代理 [root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/metadata_agent.ini [root@controller ~]# vim /etc/neutron/metadata_agent.ini [root@controller ~]# cat /etc/neutron/metadata_agent.ini [DEFAULT] # 配置元数据主机和共享密钥 nova_metadata_host = controller metadata_proxy_shared_secret = 000000 [agent] [cache] {/collapse-item} {/collapse} 配置计算服务使用网络服务 配置 nova.conf vim /etc/nova/nova.conf [neutron] # 配置访问参数、启用元数据代理和配置密钥 url = http://controller:9696 auth_url = http://controller:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = 000000 service_metadata_proxy = true metadata_proxy_shared_secret = 000000 {collapse} {collapse-item label="查看执行过程"} 配置计算服务使用网络服务 [root@controller ~]# vim 
/etc/nova/nova.conf {/collapse-item} {/collapse} 完成安装 1、创建 plugin.ini 链接 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini 2、同步 neutron 数据库 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron 3、重启 nova-api 服务 systemctl restart openstack-nova-api.service 4、启动网络服务设置开机自启 systemctl enable neutron-server.service \ neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service systemctl start neutron-server.service \ neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service 5、开启路由转发 [root@controller ~]# vim /etc/sysctl.conf net.bridge.bridge-nf-call-iptables = 1 net.ipv6.conf.all.disable_ipv6 = 1 [root@controller ~]# sysctl -p net.bridge.bridge-nf-call-iptables = 1 net.ipv6.conf.all.disable_ipv6 = 1 {collapse} {collapse-item label="查看执行过程"} 完成安装 [root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini [root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ > --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. Running upgrade for neutron ... INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. INFO [alembic.runtime.migration] Running upgrade -> kilo, kilo_initial INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, Add availability zone INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, add is_default to subnetpool INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, Add standard attribute table INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee, Add network availability zone INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9, Add router availability zone INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4, Add ip_version to AddressScope INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664, Add tables and attributes to support external DNS integration INFO 
[alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5, add_unique_ha_router_agent_port_bindings INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f, Auto Allocated Topology - aka Get-Me-A-Network INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821, add dynamic routing model data INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4, add_bgp_dragent_model_data INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81, rbac_qos_policy INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6, Add resource_versions row to agent table INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532, tag support INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f, add_timestamp_to_base_resources INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a, Add desc to standard attr table INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b, qos dscp db addition INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73, Add support for VLAN trunking INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule. INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada, network_rbac INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, Drop embrane plugin table INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39, standardattributes migration INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b, DVR sheduling refactoring INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050, Drop NEC plugin tables INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9, rbac_qos_policy INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada, network_rbac_external INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc, standard_desc INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53, device_owner_ha_replicate_int INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70, Rename ml2_network_segments table INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502, Add device_id index to Port INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee, provisioning_blocks.py INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048, add revisions table INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4, add dns name to portdnses INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90, Add segment_id to subnet INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4, Add segment_host_mapping table. 
INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426, Rename ml2_dvr_port_bindings INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524, Remove mtu column from networks. INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37, Add flavor_id to Router INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa, uniq_routerports0port_id INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf, Add support for Subnet Service Types INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4, add_qos_minimum_bandwidth_rules INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e, add standardattr to qos policies INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc, uniq_floatingips0floating_network_id0fixed_port_id0fixed_ip_addr INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d, Add ip_allocation to port INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70, add_pk_version_table INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c, extend_pk_with_host_and_add_status_to_ml2_port_binding INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c, Add data_plane_status to Port INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da, qos add direction to bw_limit_rule table INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192, add is default to qos policies INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9, logging api INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6, Add dns_domain to portdnses INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f, add mtu for networks INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a, migrate dns name from port INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad, rename tenant to project INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab, Add routerport bindings for L3 HA INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0, migrate to pluggable ipam INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62, add standardattr to qos policies INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353, Add Name and Description to the networksegments table INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586, Add binding index to RouterL3AgentBinding INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d, Remove availability ranges. OK [root@controller ~]# systemctl restart openstack-nova-api.service [root@controller ~]# systemctl enable neutron-server.service \ > neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ > neutron-metadata-agent.service Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service. Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service. Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service. 
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service. [root@controller ~]# systemctl start neutron-server.service \ > neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ > neutron-metadata-agent.service {/collapse-item} {/collapse} 安装和配置 计算节点 安装组件 yum install -y openstack-neutron-linuxbridge ebtables ipset 配置通用组件 配置 neutron.conf # sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf # vim /etc/neutron/neutron.conf [DEFAULT] # 配置RabbitMQ 消息队列访问 transport_url = rabbit://openstack:000000@controller # 配置身份服务访问 auth_strategy = keystone [keystone_authtoken] # 配置身份服务访问 auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = neutron password = 000000 [oslo_concurrency] # 配置锁定路径 lock_path = /var/lib/neutron/tmp {collapse} {collapse-item label="查看执行过程"} 配置通用组件 [root@compute ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf [root@compute ~]# vim /etc/neutron/neutron.conf [root@compute ~]# cat /etc/neutron/neutron.conf [DEFAULT] # 配置RabbitMQ 消息队列访问 transport_url = rabbit://openstack:000000@controller # 配置身份服务访问 auth_strategy = keystone [agent] [cors] [database] [keystone_authtoken] # 配置身份服务访问 auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = neutron password = 000000 [matchmaker_redis] [nova] [oslo_concurrency] # 配置锁定路径 lock_path = /var/lib/neutron/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [quotas] [ssl] {/collapse-item} {/collapse} 配置网络选项(Flat网络) 配置 linuxbridge_agent.ini # sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini # vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] # 将Flat网络映射到物理网络接口 physical_interface_mappings = provider:eth0 [vxlan] # 禁用 VXLAN 覆盖网络 enable_vxlan = false [securitygroup] # 启用安全组并配置 Linux 网桥 iptables 防火墙驱动程序 enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver {collapse} {collapse-item label="查看执行过程"} 配置网络选项(Flat网络) [root@compute ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini [root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini [root@compute ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini [DEFAULT] [agent] [linux_bridge] # 将Flat网络映射到物理网络接口 physical_interface_mappings = provider:eth0 [securitygroup] # 启用安全组并配置 Linux 网桥 iptables 防火墙驱动程序 enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver [vxlan] # 禁用 VXLAN 覆盖网络 enable_vxlan = false {/collapse-item} {/collapse} 配置计算服务使用网络服务 配置 nova.conf # vim /etc/nova/nova.conf [neutron] # 配置访问参数 url = http://controller:9696 auth_url = http://controller:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = 000000 {collapse} {collapse-item label="查看执行过程"} 配置计算服务使用网络服务 [root@compute ~]# vim /etc/nova/nova.conf [neutron] # 配置访问参数 url = http://controller:9696 auth_url = http://controller:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne 
project_name = service username = neutron password = 000000 {/collapse-item} {/collapse} 完成安装 1、重启计算服务 systemctl restart openstack-nova-compute.service 2、启动 网桥服务并设置开机自启 systemctl enable neutron-linuxbridge-agent.service systemctl start neutron-linuxbridge-agent.service 3、开启路由转发 [root@compute ~]# vim /etc/sysctl.conf net.bridge.bridge-nf-call-iptables = 1 net.ipv6.conf.all.disable_ipv6 = 1 [root@compute ~]# sysctl -p net.bridge.bridge-nf-call-iptables = 1 net.ipv6.conf.all.disable_ipv6 = 1 {collapse} {collapse-item label="查看执行过程"} 完成安装 root@compute ~]# systemctl restart openstack-nova-compute.service [root@compute ~]# systemctl enable neutron-linuxbridge-agent.service systemctl start neutron-linuxbridge-agent.serviceCreated symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service. [root@compute ~]# systemctl start neutron-linuxbridge-agent.service [root@compute ~]# sysctl -p net.bridge.bridge-nf-call-iptables = 1 net.ipv6.conf.all.disable_ipv6 = 1 {/collapse-item} {/collapse}
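控制节点与计算节点都配置完成后,可以在控制节点验证各 neutron 代理是否正常注册(输出因环境而异,以下注释为预期结果的示例):

. admin-openrc
openstack network agent list
# 预期:控制节点上有 Metadata agent、DHCP agent、Linux bridge agent,
# 计算节点上有 Linux bridge agent,且 Alive 一列均为 :-)(或 True)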
2022年07月14日 · 51 阅读 · 0 评论 · 0 点赞