Acha · 65 posts · 1 comment
Found 16 posts tagged "openstack".
## 2022-07-13 · OpenStack Pike Deployment: Keystone (Part 2)
### Keystone Overview

The OpenStack Identity service provides a single point of integration for managing authentication, authorization, and a catalog of services. The Identity service is typically the first service a user interacts with. Once authenticated, an end user can use their identity to access other OpenStack services. Likewise, other OpenStack services leverage the Identity service to ensure users are who they say they are and discover where other services are within the deployment. The Identity service can also integrate with some external user management systems (such as LDAP).

Users and services can locate other services by using the service catalog, which is managed by the Identity service. As the name implies, a service catalog is a collection of available services in an OpenStack deployment. Each service can have one or many endpoints and each endpoint can be one of three types: admin, internal, or public. In a production environment, different endpoint types might reside on separate networks exposed to different types of users for security reasons. For instance, the public API network might be visible from the Internet so customers can manage their clouds. The admin API network might be restricted to operators within the organization that manages cloud infrastructure. The internal API network might be restricted to the hosts that contain OpenStack services. Also, OpenStack supports multiple regions for scalability. For simplicity, this guide uses the management network for all endpoint types and the default RegionOne region.

Together, regions, services, and endpoints created within the Identity service comprise the service catalog for a deployment. Each OpenStack service in your deployment needs a service entry with corresponding endpoints stored in the Identity service. This can all be done after the Identity service has been installed and configured.
The Identity service contains these components:

- **Server** — A centralized server provides authentication and authorization services using a RESTful interface.
- **Drivers** — Drivers or a service back end are integrated to the centralized server. They are used for accessing identity information in repositories external to OpenStack, and may already exist in the infrastructure where OpenStack is deployed (for example, SQL databases or LDAP servers).
- **Modules** — Middleware modules run in the address space of the OpenStack component that is using the Identity service. These modules intercept service requests, extract user credentials, and send them to the centralized server for authorization. The integration between the middleware modules and OpenStack components uses the Python Web Server Gateway Interface.

### Installation and Configuration

#### Create the database

1. Connect to the database as root:

```shell
[root@controller ~]# mysql -u root -p000000
```

2. Create the `keystone` database:

```sql
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)
```

3. Grant the `keystone` user full privileges on the `keystone` database:

```sql
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)
```

> Tip: to drop a database user again:
>
> ```sql
> MariaDB [(none)]> drop user keystone@'%';
> Query OK, 0 rows affected (0.00 sec)
> MariaDB [(none)]> drop user keystone@'localhost';
> Query OK, 0 rows affected (0.00 sec)
> ```

#### Install and configure the components

1. Install the packages:

```shell
[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi
```

2. Configure `keystone.conf`:

> Tip: strip blank lines and comments, keeping a backup of the original file:
>
> ```shell
> [root@localhost ~]# sed -i.bak '/^$/d;/^#/d' xxx.conf
> ```

```shell
[root@controller ~]# sed -i.bak '/^$/d;/^#/d' /etc/keystone/keystone.conf
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
# Configure database access
connection = mysql+pymysql://keystone:000000@controller/keystone
[token]
# Configure the Fernet token provider
provider = fernet
```

3. Populate the keystone database:

```shell
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
```
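The `sed -i.bak '/^$/d;/^#/d'` one-liner from the tip deletes blank lines and comment lines in place while keeping a `.bak` copy of the original. A minimal demonstration on a scratch file (the file name and contents here are made up for illustration):

```shell
# Build a scratch config file containing a comment, a blank line, and two real lines
f=$(mktemp)
printf '# a comment\n\n[database]\nconnection = sqlite:///demo.db\n' > "$f"

# Strip blank lines and comments in place; the original is saved as "$f.bak"
sed -i.bak '/^$/d;/^#/d' "$f"

cat "$f"          # only the [database] section and the connection line remain
wc -l < "$f.bak"  # the backup still holds all four original lines
```

The same pattern is what trims `/etc/keystone/keystone.conf` down to just the active settings.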
4. Initialize the Fernet key repositories:

```shell
[root@controller ~]# keystone-manage fernet_setup \
    --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup \
    --keystone-user keystone --keystone-group keystone
```

5. Bootstrap the Identity service:

```shell
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 \
    --bootstrap-admin-url http://controller:35357/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

#### Configure the Apache HTTP server

1. Edit `httpd.conf` and set the `ServerName` option:

```shell
[root@controller ~]# vim /etc/httpd/conf/httpd.conf
ServerName controller
```

2. Link `wsgi-keystone.conf` into the Apache config directory:

```shell
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

#### Finalize the installation

1. Start the httpd service and enable it at boot:

```shell
[root@controller ~]# systemctl enable httpd.service
[root@controller ~]# systemctl start httpd.service
```

2. Configure the administrative account:

```shell
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
```

### Create a Domain, Projects, Users, and Roles

1. Create the `service` project:

```shell
[root@controller ~]# openstack project create --domain default \
    --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 4dd5063ebfc344a0a12734082438fbe0 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
+-------------+----------------------------------+
```

2. Create a regular project and user (for example, `demo`).

Create the `demo` project:

```shell
[root@controller ~]# openstack project create --domain default \
    --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 4f74b708452249e583684682e8254872 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | default                          |
+-------------+----------------------------------+
```

Create the `demo` user:

```shell
[root@controller ~]# openstack user create --domain default --password 000000 demo
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 9da7d8fa3eaf414bad4e2bcbabb60494 |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```

Create the `user` role:

```shell
[root@controller ~]# openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 1d30cc80e2eb450193340e8fff44b094 |
| name      | user                             |
+-----------+----------------------------------+
```

Assign the `user` role to the `demo` user in the `demo` project:

```shell
[root@controller ~]# openstack role add --project demo --user demo user
```

### Verify

1. Unset the temporary `OS_AUTH_URL` and `OS_PASSWORD` variables:

```shell
unset OS_AUTH_URL OS_PASSWORD
```

2. Request a token as the `admin` user:

```shell
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
Password:
+------------+---------------------------------------------------------------+
| Field      | Value                                                         |
+------------+---------------------------------------------------------------+
| expires    | 2022-07-13T04:15:29+0000                                      |
| id         | gAAAAABizjjRJyEcHq4dPJMFZMjTCOaVFwOX4sumi1ZsKVgvWfxIPtyaRenrX |
|            | LPKPW1L4nLjeAff1kN2Oa9eTgleTj8TOeoSln9hUUZByEqSlNJFaZC_DUgT5gW|
|            | AjlXHbUH_6r9IiGRcJBGJTAEU5sEmrW2M_sBAnIXQZ5Tn2n_MY_KWO68lpi8  |
| project_id | cecafb35ed3649819247ea27a77871aa                              |
| user_id    | 6ba1420ce7764421afa3da461b2f47a1                              |
+------------+---------------------------------------------------------------+
```

3. Request a token as the `demo` user:

```shell
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name demo --os-username demo token issue
Password:
+------------+---------------------------------------------------------------+
| Field      | Value                                                         |
+------------+---------------------------------------------------------------+
| expires    | 2022-07-13T04:21:47+0000                                      |
| id         | gAAAAABizjpLC8BxxoXOGYJah3VU8xMD8aYQm86RjhAOyKI3zAzuc6wMR3v8Hj|
|            | Atk1n3RIGuNmFWEZSPKQH-zYGAS4ZEzQzvUTAQxeNm4eOG3bLF2iqXc7F1cYIL|
|            | q6gVKkfGK2avDi7APIoxCFy2F_XtxNlqWMNrxfOyGXpB81rKDBjpEsLMMI    |
| project_id | 4f74b708452249e583684682e8254872                              |
| user_id    | 9da7d8fa3eaf414bad4e2bcbabb60494                              |
+------------+---------------------------------------------------------------+
```

### OpenStack Client Environment Scripts

#### Create the scripts

1. Create `admin-openrc`:

```shell
[root@controller ~]# cat admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```

2. Create `demo-openrc`:

```shell
[root@controller ~]# cat demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```

#### Use the scripts

1. Load the variables:

```shell
[root@controller ~]# . admin-openrc
```

or

```shell
[root@controller ~]# source admin-openrc
```

2. Request a token:

```shell
[root@controller ~]# openstack token issue
+------------+---------------------------------------------------------------+
| Field      | Value                                                         |
+------------+---------------------------------------------------------------+
| expires    | 2022-07-13T04:28:33+0000                                      |
| id         | gAAAAABizjjRJyEcHq4dPJMFZMjTCOaVFwOX4sumi1ZsKVgvWfxIPtyaRenrX |
|            | LPKPW1L4nLjeAff1kN2Oa9eTgleTj8TOeoSln9hUUZByEqSlNJFaZC_DUgT5gW|
|            | AjlXHbUH_6r9IiGRcJBGJTAEU5sEmrW2M_sBAnIXQZ5Tn2n_MY_KWO68lpi8  |
| project_id | cecafb35ed3649819247ea27a77871aa                              |
| user_id    | 6ba1420ce7764421afa3da461b2f47a1                              |
+------------+---------------------------------------------------------------+
```

3. Unset the variables:

```shell
# Exit the shell
[root@controller ~]# exit
```
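Loading an openrc file with `.` (or `source`) simply executes its `export` lines in the current shell, which is why a subsequent `openstack token issue` needs no `--os-*` options. A minimal sketch with a scratch openrc-style file (illustrative values only, not real credentials):

```shell
# Write a scratch openrc-style script
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_IDENTITY_API_VERSION=3
EOF

# Source it: the variables are now part of this shell's environment
. "$rc"
echo "project=$OS_PROJECT_NAME user=$OS_USERNAME api=$OS_IDENTITY_API_VERSION"
```

Because the variables only live in the shell that sourced the file, exiting that shell (as in step 3 above) is enough to discard them.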
## 2022-07-12 · OpenStack Pike Deployment: Base Environment (Part 1)
### Passwords

All passwords in this guide are set to `000000`.

| Password name | Description | Password |
|---|---|---|
| Database password (no variable used) | Root password for the database | 000000 |
| ADMIN_PASS | Password of user admin | 000000 |
| CINDER_DBPASS | Database password for the Block Storage service | 000000 |
| CINDER_PASS | Password of the Block Storage service user | 000000 |
| DASH_DBPASS | Database password for the Dashboard | 000000 |
| DEMO_PASS | Password of user demo | 000000 |
| GLANCE_DBPASS | Database password for the Image service | 000000 |
| GLANCE_PASS | Password of the Image service user | 000000 |
| KEYSTONE_DBPASS | Database password for the Identity service | 000000 |
| METADATA_SECRET | Secret for the metadata proxy | 000000 |
| NEUTRON_DBPASS | Database password for the Networking service | 000000 |
| NEUTRON_PASS | Password of the Networking service user | 000000 |
| NOVA_DBPASS | Database password for the Compute service | 000000 |
| NOVA_PASS | Password of the Compute service user | 000000 |
| PLACEMENT_PASS | Password of the Placement service user | 000000 |
| RABBIT_PASS | Password of the RabbitMQ user | 000000 |

Reference: https://docs.openstack.org/install-guide/environment-security.html

### Network Configuration

#### Controller node

Network interface:

```shell
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="none"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="178.120.2.10"
PREFIX="24"
GATEWAY="178.120.2.1"
DNS1="8.8.8.8"

## provider interface:
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
```

Name resolution:

```shell
[root@controller ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
178.120.2.10 controller
178.120.2.20 compute
```

Passwordless SSH login:

```shell
[root@controller ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:cgfolkfd6Oum3nYdVI+HO8llHk1hKR0YaOITw2gTQAE root@controller
[root@controller ~]# ssh-copy-id compute
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_dsa.pub"
The authenticity of host 'compute (178.120.2.20)' can't be established.
ECDSA key fingerprint is SHA256:R/Thnqei+6YxNhVzNn26mnzVaBME9Pq1takAI7dH/Sg.
ECDSA key fingerprint is MD5:c3:f7:bb:e1:07:f9:83:d5:2e:d2:ae:c6:da:a3:2e:f7.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@compute's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'compute'"
and check to make sure that only the key(s) you wanted were added.
```

#### Compute node

Network interface:

```shell
[root@compute ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="none"
NAME="eth0"
UUID="e7df2db2-cdb1-47e0-9d3b-05b50fe87c19"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="178.120.2.20"
PREFIX="24"
GATEWAY="178.120.2.1"
DNS1="8.8.8.8"

## provider interface:
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
```

Name resolution:

```shell
[root@compute ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
178.120.2.10 controller
178.120.2.20 compute
```

Passwordless SSH login:

```shell
[root@compute ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:juNTp0HHsnRvNPt+xflNUQ9HqJc0CZMcTI49yexZWX4 root@compute
[root@compute ~]# ssh-copy-id controller
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_dsa.pub"
The authenticity of host 'controller (178.120.2.10)' can't be established.
ECDSA key fingerprint is SHA256:ZjMIFXctwUyBC2Psc5ZxN4wVTAASjzf8re8aq8v11S4.
ECDSA key fingerprint is MD5:2a:f3:cd:5a:ec:2b:ca:20:99:c7:0b:6d:db:b0:1b:92.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@controller's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'controller'"
and check to make sure that only the key(s) you wanted were added.
```

### Kernel network settings

```shell
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
```

### Yum Repository Configuration (all nodes)

```shell
# sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
    -i.bak \
    /etc/yum.repos.d/CentOS-*.repo
# cat CentOS-OpenStack-Pike.repo
[OpenStack-Pike-tuna]
name=OpenStack-Pike-tuna
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos-vault/7.5.1804/cloud/x86_64/openstack-pike/
gpgcheck=0
enabled=1
# yum makecache
```

The current upstream repositories no longer carry the Pike release, so the cloud repository has to be pointed at the vault by hand.

### Collecting RPM Packages (optional)

```shell
[root@controller ~]# vim /etc/yum.conf
[main]
# Cache directory
cachedir=/data/rpm
# Keep downloaded packages
keepcache=1
```

### Disable the Firewall & SELinux (all nodes)

```shell
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# yum remove -y NetworkManager firewalld
# yum -y install iptables-services
# iptables -F
# iptables -X
# iptables -Z
# iptables-save
```

### Time Synchronization (Chrony)

Controller node:

```shell
[root@controller ~]# yum install -y chrony
[root@controller ~]# timedatectl set-timezone Asia/Shanghai
[root@controller ~]# grep -Ev "#|^$" /etc/chrony.conf
server ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 178.120.2.0/24
logdir /var/log/chrony
[root@controller ~]# systemctl enable chronyd.service && systemctl start chronyd.service
```

Compute node:

```shell
[root@compute ~]# yum install -y chrony
[root@compute ~]# timedatectl set-timezone Asia/Shanghai
[root@compute ~]# grep -Ev "#|^$" /etc/chrony.conf
server controller iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
[root@compute ~]# systemctl enable chronyd.service && systemctl start chronyd.service
```

Verification:

```shell
# Controller node
[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 139.199.215.251               2   6   367    43   +392us[+1161us] +/-   48ms
^? ntp6.flashdance.cx            2   7    40   368  -5153us[-4963us] +/-  178ms
^- time.cloudflare.com           3   6   355    43    +50ms[  +50ms] +/-  176ms
^- stratum2-1.ntp.mow01.ru.>     2   6   367    42    +31ms[  +31ms] +/-   89ms
[root@controller ~]# date
Tue Jul 12 17:26:01 CST 2022

# Other nodes
[root@compute ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller                    0   7     0     -     +0ns[   +0ns] +/-    0ns
[root@compute ~]# date
Tue Jul 12 17:26:56 CST 2022
```

### OpenStack Client

```shell
[root@controller ~]# yum install -y python-openstackclient openstack-selinux
```

### Database (MariaDB)

Install the MySQL database service and the Python MySQL connector:

```shell
[root@controller ~]# yum install -y mariadb mariadb-server python2-PyMySQL
```

Configure MySQL:

```shell
[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 178.120.2.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Start the service:

```shell
[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service
```

Initialize the database:

```shell
[root@controller ~]# mysql_secure_installation
```

### Message Queue (RabbitMQ)

Install the RabbitMQ service:

```shell
[root@controller ~]# yum install -y rabbitmq-server
```

Start the RabbitMQ service:

```shell
[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service
```

Add the openstack user:

```shell
[root@controller ~]# rabbitmqctl add_user openstack 000000
```

Grant the openstack user permissions:

```shell
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

Enable the management plugin (optional):

```shell
[root@controller ~]# rabbitmq-plugins enable rabbitmq_management
[root@controller ~]# rabbitmq-plugins enable rabbitmq_management_agent
```

Then browse to `IP:15672`.

### Cache Service (Memcached)

Install the Memcached service:

```shell
[root@controller ~]# yum install -y memcached python-memcached
```

Edit the Memcached configuration:

```shell
[root@controller ~]# vim /etc/sysconfig/memcached
# Allow other nodes to connect over the management network
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l controller"
```

Start the Memcached service:

```shell
[root@controller ~]# systemctl enable memcached.service
[root@controller ~]# systemctl start memcached.service
```
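A quick way to confirm the address memcached will listen on is to pull it out of the `OPTIONS` line of the sysconfig file. A small sketch against a scratch copy of the file shown above (the temp path is illustrative; on the real node the file is `/etc/sysconfig/memcached`):

```shell
# Scratch copy of the sysconfig file from the step above
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l controller"
EOF

# Extract the argument of "-l" from the OPTIONS line
listen=$(sed -n 's/^OPTIONS="-l \(.*\)"$/\1/p' "$cfg")
echo "memcached will listen on: $listen"
```

If the value is still `127.0.0.1`, the other nodes will not be able to reach the cache over the management network.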
## 2022-03-28 · Getting Started with Hyper-V
A few days ago I ran Docker on Windows. That evening I needed a virtual machine, and it would no longer start. The error below made me guess that Docker's environment was conflicting with VMware. After some searching I disabled Hyper-V and rebooted: VMware Workstation ran again, but then Docker was broken. Toggling Hyper-V by command is bearable; having to reboot every time is not.

> VMware Workstation is not compatible with Device/Credential Guard. VMware Workstation can be run after disabling Device/Credential Guard.

After more searching, the fix turned out to be VMware Workstation 15.5.5 or later: the VMware Workstation 15.5.5 Pro release notes list support for Windows 10 host VBS, which solves the problem of VMware refusing to run while Hyper-V is enabled. VMware still cannot expose hardware virtualization to guests, though; hopefully a later version will support that. For nested virtualization, the workaround is to create the VM with Hyper-V itself.

- VMware Workstation 15.5.5 Pro release notes: What's New
- VMware Workstation 15.5.5 Pro release notes: Known Issues

### Enabling Hyper-V

In Control Panel, under Programs, choose "Turn Windows features on or off" and tick Hyper-V. If you are on the Home edition and there is no Hyper-V option, save the text below as a .bat script and run it:

```bat
pushd "%~dp0"
dir /b %SystemRoot%\servicing\Packages\*Hyper-V*.mum >hyper-v.txt
for /f %%i in ('findstr /i . hyper-v.txt 2^>nul') do dism /online /norestart /add-package:"%SystemRoot%\servicing\Packages\%%i"
del hyper-v.txt
Dism /online /enable-feature /featurename:Microsoft-Hyper-V-All /LimitAccess /ALL
```

### Using Hyper-V Manager

Open Hyper-V Manager from "Windows Tools". Usage is much like VMware Workstation. For the virtual machine generation, pick generation 1; in my tests a CentOS guest would not boot as generation 2.

The virtual switch corresponds to VMware Workstation's Virtual Network Editor: external, internal, and private networks map to bridged, NAT, and host-only respectively. The virtual switch configuration is less automatic than VMware's; the NAT part in particular must be adjusted by hand and defaults to the 192.168.131.0/24 subnet. Changing the subnet means editing the registry, which the script below does quickly; you also need to share a usable physical network connection with that network.

```bat
@echo off
set /p q=Please input ShareIP [192.168.173.1]:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters" -v ScopeAddress -d %q% -f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters" -v ScopeAddressBackup -d %q% -f
timeout /t 10 /nobreak
```

Checkpoints correspond to VMware's snapshots. Hyper-V can also auto-start VMs at boot, a feature VMware Workstation lacks.

References: Enabling Hyper-V · NAT network configuration · Editing the registry
## 2022-03-28 · Docker for Windows
Earlier versions of Docker on Windows were backed by Hyper-V; current versions use WSL 2. I knew of Hyper-V as Microsoft's virtualization technology, but having always used VMware Workstation I had never looked into it on Windows. Running Docker on Windows finally brought me to it, so I am writing this up along the way.

I did not know what WSL 2 was, though I had heard of "Linux for Windows" (and "Android for Windows" appeared when Windows 11 launched). After reading up, all I really learned is that WSL is built on Hyper-V. It works, which is enough for me. I looked for a management UI for it and found none; the distributions are installed from the Microsoft Store, and Docker simply prompts you to install WSL once it is set up.

### Installing Docker

- Documentation: https://docs.docker.com/desktop/windows/install/
- Installer: https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe

Hardware prerequisites:

1. BIOS-level hardware virtualization support must be enabled in the BIOS settings
2. The system must support WSL 2 or Hyper-V
3. 4 GB of system memory

The installer is a default next-next-finish affair; after it completes, reboot and Docker is ready to use.

### Enabling WSL 2 on Windows

1. Enable WSL:

```shell
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
```

2. Enable the Virtual Machine Platform feature:

```shell
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
```

3. Download and run the Linux kernel update package: https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi

4. Set WSL 2 as the default version:

```shell
wsl --set-default-version 2
```

Reference: https://docs.microsoft.com/en-us/windows/wsl/install-manual

### Using Docker

The Docker CLI can be invoked straight from the Windows CMD, or you can work in the Docker Desktop app.

### Windows Terminal

Windows 11 ships with Windows Terminal by default. I find it very pleasant: good fonts and colors, Vim installed, and free editing on the Windows command line, which improves efficiency and feels like working in a properly configured shell. Windows 10 users can download it themselves.

Project: https://github.com/microsoft/terminal/

### The app UI

Menus: Containers, Images, Volumes. If you know the CLI you will mostly keep using the command line, which is more efficient, but the app UI is a good way to watch run state and storage usage.
## 2021-12-04 · 1+X Cloud Computing (Junior Level): Hands-on Tasks
1+X 初级 by 王政(blog http://blog.youto.club) # 说明: 本次视频,只做文档演示,没有详细讲解 # 视频地址:https://www.bilibili.com/video/BV1hL411M72w 1+X 初级 环境准备 网络服务 本地YUM源管理 FTP安装使用 Samba管理 NFS 服务管理 主从数据库管理 部署WordPress应用 拓展 LVM 数据库管理 MariaDB管理 Openstack运维 OpenStack Keystone管理 OpenStack Glance管理 OpenStack Cinder管理 OpenStack Nova管理 OpenStack Swift管理 Docker Docker安装 Docker运维 Docker管理 Dockerfile编写 Dockerfile 编写 部署Swarm集群 环境准备 yum源 selinux & firewalld hostname hosts 网络服务 本地YUM源管理 使用VMWare软件启动提供的xserver1虚拟机(配置虚拟机xserver1的IP为192.168.100.11,主机名为xserver1),在虚拟机的/root目录下,存在一个CentOS-7-x86_64-DVD-1511.iso的镜像文件,使用这个镜像文件配置本地yum源,要求将这个镜像文件挂在/opt/centos目录,请问如何配置自己的local.repo文件,使得可以使用该镜像中的软件包,安装软件。请将local.repo文件的内容以文本形式提交到答题框。 [root@xserver1 ~]# mkdir /opt/centos [root@xserver1 ~]# mount -o loop CentOS-7-x86_64-DVD-1511.iso /opt/centos/ mount: /dev/loop0 is write-protected, mounting read-only [root@xserver1 ~]# rm -f /etc/yum.repos.d/* [root@xserver1 ~]# cat /etc/yum.repos.d/local.repo [centos] name=centos baseurl=file:///opt/centos gpgcheck=0 enabled=1 # 答题框 [root@xserver1 ~]# cat /etc/yum.repos.d/local.repo [centos] name=centos baseurl=file:///opt/centos gpgcheck=0 enabled=1 FTP安装使用 使用xserver1虚拟机,安装ftp服务,并配置ftp的共享目录为/opt。使用VMWare软件继续启动提供的xserver2虚拟机(配置虚拟机xserver2的IP为192.168.100.12,主机名为xserver2),并创建该虚拟机的yum源文件ftp.repo使用xserver1的ftp源(配置文件中的FTP地址使用主机名)。配置完成后,将xserver2节点的ftp.repo文件以文本形式提交到答题框。 yum install -y vsftpd echo "anon_root=/opt/" >> /etc/vsftpd/vsftpd.conf systemctl start vsftpd && systemctl enable vsftpd [root@xserver2 ~]# cat /etc/yum.repos.d/ftp.repo [centos] name=centos baseurl=ftp://xserver1/centos gpgcheck=0 enabled=1 # 答题框 [root@xserver2 ~]# cat /etc/yum.repos.d/ftp.repo [centos] name=centos baseurl=ftp://xserver1/centos gpgcheck=0 enabled=1 Samba管理 使用xserver1虚拟机,安装Samba服务所需要的软件包,将xserver1节点中的/opt/share目录使用Samba服务共享出来(目录不存在请自行创建)。操作完毕后,将xserver1节点Samba配置文件中的[share]段落和执行netstat -ntpl命令的返回结果以文本形式提交到答题框。 yum install -y samba vi /etc/samba/smb.conf # 添加 [share] comment = Share Directories 
path=/opt/share writable = yes systemctl restart smb # 答题框 [share] comment = Share Directories path=/opt/share writable = yes [root@xserver1 ~]# netstat -nltp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 3527/smbd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1429/sshd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1944/master tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 3527/smbd tcp6 0 0 :::139 :::* LISTEN 3527/smbd tcp6 0 0 :::21 :::* LISTEN 3280/vsftpd tcp6 0 0 :::22 :::* LISTEN 1429/sshd tcp6 0 0 ::1:25 :::* LISTEN 1944/master tcp6 0 0 :::445 :::* LISTEN 3527/smbd NFS 服务管理 使用 xserver1、xserver2 虚拟机,安装 NFS 服务所需要的软件包,将 xserver1 节点中的/mnt/share 目录使用 NFS 服务共享出来(目录不存在请自行创建,要求可访问共享目录的网段为 192.168.100.0/24),接着在 xserver2 节点上,将 xserver1中共享的文件挂载到/mnt 目录下。操作完毕后,依次将 xserver1 节点上执行showmount -e ip 命令和 xserver2 节点上执行 df -h 命令的返回结果(简要)写到下方 # xserver1 yum search nfs yum install -y nfs-utils mkdir /mnt/share man exports vi /etc/exports # 添加 /mnt/share 192.168.100.0/24(rw,sync) systemctl start nfs-server rpcbind exportfs -r showmount -e 192.168.100.11 # xserver2 yum install -y nfs-utils mount 192.168.100.11:/mnt/share /mnt/ df -h # 答题框 [root@xserver1 ~]# showmount -e 192.168.100.11 Export list for 192.168.100.11: /mnt/share 192.168.100.0/24 [root@xserver2 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root 36G 951M 35G 3% / devtmpfs 1.9G 0 1.9G 0% /dev tmpfs 1.9G 0 1.9G 0% /dev/shm tmpfs 1.9G 8.7M 1.9G 1% /run tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup /dev/sda1 497M 114M 384M 23% /boot tmpfs 378M 0 378M 0% /run/user/0 192.168.100.11:/mnt/share 36G 7.7G 28G 22% /mnt 主从数据库管理 在xserver1、xserver2上安装mariadb数据库,并配置为主从数据库(xserver1为主节点、xserver2为从节点),实现两个数据库的主从同步。配置完毕后,请在xserver2上的数据库中执行“show slave status \G”命令查询从节点复制状态,将查询到的结果以文本形式提交到答题框。 ## xserver1 yum install -y mariadb-server systemctl start mariadb mysql_secure_installation mysql -uroot -p000000 MariaDB [(none)]> GRANT ALL privileges ON 
*.* TO 'root'@'%' IDENTIFIED BY '000000'; MariaDB [(none)]> GRANT replication slave ON *.* TO 'user'@'xserver2' IDENTIFIED BY '000000'; vi /etc/my.cnf # 在第二行添加 log_bin=mysql-bin server_id=11 systemctl restart mariadb ## xserver2 yum install -y mariadb-server systemctl start mariadb mysql_secure_installation vi /etc/my.cnf # 在第二行添加 log_bin=mysql-bin server_id=12 systemctl restart mariadb mysql -uroot -p000000 MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='xserver1',MASTER_USER='user',MASTER_PASSWORD='000000'; MariaDB [(none)]> start slave; MariaDB [(none)]> show slave status \G # 答题框 MariaDB [(none)]> show slave status \G *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: xserver1 Master_User: user Master_Port: 3306 Connect_Retry: 60 Master_Log_File: mysql-bin.000001 Read_Master_Log_Pos: 245 Relay_Log_File: mariadb-relay-bin.000003 Relay_Log_Pos: 529 Relay_Master_Log_File: mysql-bin.000001 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 245 Relay_Log_Space: 825 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: 0 Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 11 1 row in set (0.00 sec) 部署WordPress应用 使用xserver1节点,基于lnmp环境,部署WordPress应用(WordPress源码包在/root目录下)。应用部署完毕后,设置WordPress的站点标题为自己的姓名(例:名字叫张三,则设置站点标题为张三的BLOG),设置完毕后登录WordPresss首页。最后将命令curl ip(ip为wordpress的首页ip)的返回结果以文本形式提交到答题框。 # xserver1 vi /etc/yum.repos.d/local.repo i # 添加 [lnmp] name=lnmp baseurl=file:///root/lnmp gpgcheck=0 enabled=1 yum repolist yum install -y nginx php php-fpm php-mysql vi 
/etc/php-fpm.d/www.conf # 修改 39 user = nginx 41 group = nginx vi /etc/nginx/conf.d/default.conf # 修改 10 index index.php index.html index.htm; 30 location ~ \.php$ { 31 root /usr/share/nginx/html; 32 fastcgi_pass 127.0.0.1:9000; 33 fastcgi_index index.php; 34 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; 35 include fastcgi_params; 36 } systemctl start nginx php-fpm [root@xserver1 ~]# mysql -uroot -p000000 MariaDB [(none)]> create database wordpress; rm -f /usr/share/nginx/html/* yum install -y unzip unzip wordpress-4.7.3-zh_CN.zip cp -r wordpress/* /usr/share/nginx/html/ ls /usr/share/nginx/html/ chown -R nginx.nginx /usr/share/nginx/html/ cp /usr/share/nginx/html/wp-config-sample.php /usr/share/nginx/html/wp-config.php vi /usr/share/nginx/html/wp-config.php # 修改 23 define('DB_NAME', 'wordpress'); 26 define('DB_USER', 'root'); 29 define('DB_PASSWORD', '000000'); # 使用浏览访问 IP,设置标题 数字表示行号 tip: 在 vi中 :set nu 显示行号 拓展 LVM # 解题步骤: 1. 添加硬盘,并分区 2. 创建 PV 3. 创建 VG 4. 创建 LV 5. 格式化,挂载 使用xserver1虚拟机,使用VMWare软件自行添加一块大小为20G的硬盘,使用fdisk命令对该硬盘进形分区,要求分出三个大小为5G的分区。使用这三个分区,创建名xcloudvg的卷组。然后创建名xcloudlv的逻辑卷,大小为12G,再创建完之后,使用命令将逻辑卷大小缩小2G,然后查看逻辑卷信息。将上述所有操作命令和返回结果以文本形式提交到答题框。 [root@xserver2 ~]# lsblk [root@xserver2 ~]# fdisk /dev/sdb # n 默认回车 大小设置:+5G 重复三次 [root@xserver2 ~]# pvcreate /dev/sdb[1-3] [root@xserver2 ~]# vgcreate xcloudvg /dev/sdb[1-3] [root@xserver2 ~]# lvcreate -L 12G -n xcloudlv xcloudvg [root@xserver2 ~]# lvreduce -L -2G /dev/mapper/xcloudvg-xcloudlv [root@xserver2 ~]# lvdisplay 使用VMware软件和提供的CentOS-7-x86_64-DVD-1511.iso创建虚拟机,自行配置好网络并多添加一块大小为20G的硬盘,使用fdisk命令对该硬盘进形分区,要求分出三个大小为5G的分区。使用这三个分区,创建名xcloudvg的卷组。然后创建名xcloudlv的逻辑卷,大小为12G,最后用xfs文件系统对逻辑卷进行格式化并挂载到/mnt目录下。将上述所有操作命令和返回结果以文本形式提交到答题框。 [root@xserver2 ~]# lsblk [root@xserver2 ~]# fdisk /dev/sdb # n 默认回车 大小设置:+5G 重复三次 [root@xserver2 ~]# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3 [root@xserver2 ~]# vgcreate xcloudvg /dev/sdb[1-3] [root@xserver2 ~]# lvcreate -L 12G -n xcloudlv xcloudvg [root@xserver2 ~]# mkfs.xfs 
/dev/xcloudvg/xcloudlv [root@xserver2 ~]# mkdir /data [root@xserver2 ~]# mount -o loop /dev/xcloudvg/xcloudlv /mnt [root@xserver2 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root 36G 872M 35G 3% / devtmpfs 1.9G 0 1.9G 0% /dev tmpfs 1.9G 0 1.9G 0% /dev/shm tmpfs 1.9G 8.6M 1.9G 1% /run tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup /dev/sda1 497M 114M 384M 23% /boot tmpfs 378M 0 378M 0% /run/user/0 /dev/loop0 12G 33M 12G 1% /mnt 使用xserver1虚拟机,使用VMWare软件自行添加一块大小为20G的硬盘,使用fdisk命令对该硬盘进形分区,要求分出两个大小为5G的分区。使用两个分区,创建名xcloudvg的卷组并指定PE大小为16 MB。将执行vgdisplay命令的返回结果以文本形式提交到答题框。 [root@xserver1 ~]# fdisk /dev/sdb # n 默认回车 大小设置:+5G 重复三次 [root@xserver1 ~]# pvcreate /dev/sdb1 /dev/sdb2 [root@xserver1 ~]# vgcreate xcloudvg -s 16 /dev/sdb1 /dev/sdb2 [root@xserver1 ~]# vgdisplay 数据库管理 使用提供的“all-in-one”虚拟机,进入数据库。 (1)创建本地用户examuser,密码为000000; (2)查询mysql数据库中的user表的host,user,password字段;(3)赋予这个用户对所有数据库拥有“查询”“删除”“更新”“创建”的本地权限。依次将操作命令和返回结果以文本形式提交到答题框。 [root@xserver1 ~]# mysql -uroot -p000000 MariaDB [mysql]> create user examuser@localhost identified by '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> use mysql MariaDB [mysql]> select host,user,password from user; +-----------+----------+-------------------------------------------+ | host | user | password | +-----------+----------+-------------------------------------------+ | localhost | root | *032197AE5731D4664921A6CCAC7CFCE6A0698693 | | xserver1 | root | *032197AE5731D4664921A6CCAC7CFCE6A0698693 | | 127.0.0.1 | root | *032197AE5731D4664921A6CCAC7CFCE6A0698693 | | ::1 | root | *032197AE5731D4664921A6CCAC7CFCE6A0698693 | | % | root | *032197AE5731D4664921A6CCAC7CFCE6A0698693 | | xserver2 | user | *032197AE5731D4664921A6CCAC7CFCE6A0698693 | | localhost | examuser | *032197AE5731D4664921A6CCAC7CFCE6A0698693 | +-----------+----------+-------------------------------------------+ 7 rows in set (0.00 sec) MariaDB [mysql]> grant SELECT,DELETE,UPDATE,CREATE on *.* to examuser@localhost; Query OK, 0 rows affected 
(0.00 sec)

MariaDB管理
使用VMware软件和提供的CentOS-7-x86_64-DVD-1511.iso创建虚拟机,自行配置好网络和YUM源,安装mariadb数据库,安装完毕后登录数据库,查询当前系统的时间和用户。依次将操作命令和返回结果以文本形式提交到答题框。(数据库用户名root,密码000000;关于数据库的命令均使用小写)

[root@xserver1 ~]# mysql -uroot -p000000
MariaDB [(none)]> select user(),now();
+----------------+---------------------+
| user()         | now()               |
+----------------+---------------------+
| root@localhost | 2021-11-17 15:05:08 |
+----------------+---------------------+
1 row in set (0.00 sec)

Openstack运维

OpenStack Keystone管理
使用提供的“all-in-one”虚拟机,创建用户testuser,密码为xiandian,将testuser用户分配给admin项目,赋予用户admin的权限。依次将操作命令和查询结果以文本形式提交到答题框。

[root@controller ~]# openstack user create --password xiandian --project admin --domain xiandian testuser
+--------------------+----------------------------------+
| Field              | Value                            |
+--------------------+----------------------------------+
| default_project_id | f9ff39ba9daa4e5a8fee1fc50e2d2b34 |
| domain_id          | 9321f21a94ef4f85993e92a228892418 |
| enabled            | True                             |
| id                 | 806b2c56147e41699ddd89af955a6f20 |
| name               | testuser                         |
+--------------------+----------------------------------+
# 赋予 admin 权限(命令执行成功时无输出)
[root@controller ~]# openstack role add --project admin --user testuser admin

OpenStack Glance管理
使用VMWare软件启动提供的opensatckallinone镜像,自行检查openstack中各服务的状态,若有问题自行排查。在xserver1节点的/root目录下存在一个cirros-0.3.4-x86_64-disk.img镜像;使用glance命令将镜像上传,并命名为mycirros,最后将glance image-show id命令的返回结果以文本形式提交到答题框。

[root@controller ~]# scp 192.168.100.11:/root/cirros-0.3.4-x86_64-disk.img .
[root@controller ~]# glance image-create --name mycirros --disk-format qcow2 --container-format bare --progress < cirros-0.3.4-x86_64-disk.img
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2021-11-24T15:54:45Z                 |
| disk_format      | qcow2                                |
| id               | 45d98d72-b23f-464d-b66e-13f154eed216 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | mycirros                             |
| owner            | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2021-11-24T15:54:45Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+
[root@controller ~]# glance image-show 45d98d72-b23f-464d-b66e-13f154eed216
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2021-11-24T15:54:45Z                 |
| disk_format      | qcow2                                |
| id               | 45d98d72-b23f-464d-b66e-13f154eed216 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | mycirros                             |
| owner            | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2021-11-24T15:54:45Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+

OpenStack Cinder管理
使用VMWare软件启动提供的opensatckallinone镜像,自行检查openstack中各服务的状态,若有问题自行排查。使用Cinder服务,创建名为“lvm”的卷类型,然后创建一块带“lvm”标识的云硬盘,名称为BlockVloume,大小为2G,查询该云硬盘详细信息。完成后,将cinder show BlockVloume命令的返回结果以文本形式提交到答题框。

[root@controller ~]# cinder type-create lvm
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| caad5c75-59e3-4167-ad15-19e9f27cee6f | lvm  | -           | True      |
+--------------------------------------+------+-------------+-----------+
[root@controller ~]# cinder create --name BlockVloume --volume-type lvm 2
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2021-11-24T15:57:22.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | d013e4b2-b0b8-41b4-95e3-2bd47f3431fa |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | BlockVloume                          |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| replication_status             | disabled                             |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 0befa70f767848e39df8224107b71858     |
| volume_type                    | lvm                                  |
+--------------------------------+--------------------------------------+
[root@controller ~]# cinder show d013e4b2-b0b8-41b4-95e3-2bd47f3431fa
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2021-11-24T15:57:22.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | d013e4b2-b0b8-41b4-95e3-2bd47f3431fa |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | BlockVloume                          |
| os-vol-host-attr:host          | controller@lvm#LVM                   |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| replication_status             | disabled                             |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2021-11-24T15:57:23.000000           |
| user_id                        | 0befa70f767848e39df8224107b71858     |
| volume_type                    | lvm                                  |
+--------------------------------+--------------------------------------+

OpenStack Nova管理
使用提供的“all-in-one”虚拟机,通过nova的相关命令创建名为exam,ID为1234,内存为1024M,硬盘为20G,虚拟内核数量为2的云主机类型,查看exam的详细信息。依次将操作命令及返回结果以文本形式提交到答题框。

[root@controller ~]# nova flavor-create exam 1234 1024 20 2
+------+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID   | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------+------+-----------+------+-----------+------+-------+-------------+-----------+
| 1234 | exam | 1024      | 20   | 0         |      | 2     | 1.0         | True      |
+------+------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller ~]# nova flavor-show exam
+----------------------------+-------+
| Property                   | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
| OS-FLV-EXT-DATA:ephemeral  | 0     |
| disk                       | 20    |
| extra_specs                | {}    |
| id                         | 1234  |
| name                       | exam  |
| os-flavor-access:is_public | True  |
| ram                        | 1024  |
| rxtx_factor                | 1.0   |
| swap                       |       |
| vcpus                      | 2     |
+----------------------------+-------+

OpenStack Nova管理
使用VMWare软件启动提供的opensatckallinone镜像,自行检查openstack中各服务的状态,若有问题自行排查。使用nova相关命令,查询nova所有的监控列表,并查看ID为1的监控主机的详细信息,将操作命令和返回结果以文本形式提交到答题框。

[root@controller ~]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | controller          | up    | enabled |
+----+---------------------+-------+---------+
[root@controller ~]# nova hypervisor-show 1
+---------------------------+------------------------------------------+
| Property                  | Value                                    |
+---------------------------+------------------------------------------+
| cpu_info_arch             | x86_64                                   |
| cpu_info_features         | ["smap", "avx", "clflush", "sep", "syscall", "vme", "invpcid", "tsc", "fsgsbase", "xsave", "pge", "vmx", "erms", "cmov", "smep", "pcid", "pat", "lm", "msr", "adx", "3dnowprefetch", "nx", "fxsr", "sse4.1", "pae", "sse4.2", "pclmuldq", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "pse", "lahf_lm", "abm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "mpx", "invtsc", "pni", "aes", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "fpu", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"] |
| cpu_info_model            | Broadwell-noTSX                          |
| cpu_info_topology_cells   | 1                                        |
| cpu_info_topology_cores   | 2                                        |
| cpu_info_topology_sockets | 1                                        |
| cpu_info_topology_threads | 1                                        |
| cpu_info_vendor           | Intel                                    |
| current_workload          | 0                                        |
| disk_available_least      | 31                                       |
| free_disk_gb              | 34                                       |
| free_ram_mb               | 7296                                     |
| host_ip                   | 192.168.100.10                           |
| hypervisor_hostname       | controller                               |
| hypervisor_type           | QEMU                                     |
| hypervisor_version        | 2003000                                  |
| id                        | 1                                        |
| local_gb                  | 34                                       |
| local_gb_used             | 0                                        |
| memory_mb                 | 7808                                     |
| memory_mb_used            | 512                                      |
| running_vms               | 0                                        |
| service_disabled_reason   | None                                     |
| service_host              | controller                               |
| service_id                | 6                                        |
| state                     | up                                       |
| status                    | enabled                                  |
| vcpus                     | 2                                        |
| vcpus_used                | 0                                        |
+---------------------------+------------------------------------------+

OpenStack Swift管理
使用VMWare软件启动提供的opensatckallinone镜像,自行检查openstack中各服务的状态,若有问题自行排查。使用swift相关命令,创建一个名叫examcontainer的容器,然后往这个容器中上传一个test.txt的文件(文件可以自行创建),上传完毕后,使用命令查看容器,将操作命令和返回结果以文本形式提交到答题框。(40分)

[root@controller ~]# swift post examcontainer
[root@controller ~]# touch test.txt
[root@controller ~]# swift upload examcontainer test.txt
test.txt
[root@controller ~]# swift list examcontainer
[root@controller ~]# swift stat examcontainer

Docker

Docker安装
使用xserver1节点,自行配置YUM源,安装docker服务(需要用到的包为xserver1节点/root目录下的Docker.tar.gz)。安装完服务后,将registry_latest.tar上传到xserver1节点中并配置为私有仓库。要求启动registry容器时,将内部保存文件的目录映射到外部的/opt/registry目录,将内部的5000端口映射到外部5000端口。依次将启动registry容器的命令及返回结果、执行docker info命令的返回结果以文本形式提交到答题框。

[root@xserver1 ~]# tar xfv Docker.tar.gz
[root@xserver1 ~]# vi /etc/yum.repos.d/local.repo
# 添加
[docker]
name=docker
baseurl=file:///root/Docker
gpgcheck=0
enabled=1
[root@xserver1 ~]# yum repolist
[root@xserver1 ~]# yum install -y docker-ce
[root@xserver1 ~]# systemctl daemon-reload
[root@xserver1 ~]# systemctl start docker
[root@xserver1 ~]# ./image.sh
[root@xserver1 ~]# vi /etc/docker/daemon.json
# 添加
{
  "insecure-registries": ["192.168.100.11:5000"]
}
[root@xserver1 ~]# systemctl restart docker
[root@xserver1 ~]# docker run -d --restart=always -v /opt/registry/:/var/lib/registry -p 5000:5000 registry:latest
950319053db0f90eef33b33ce3bc76a93d3b6cf4b17c27b532d23e8aa9ef7bb9
[root@xserver1 ~]# docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS                    NAMES
950319053db0   registry:latest   "/entrypoint.sh /etc…"   19 seconds ago   Up 18 seconds   0.0.0.0:5000->5000/tcp   xenodochial_johnson
[root@xserver1 ~]# docker info
# 答题框
[root@xserver1 ~]# docker info
Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 11
Server Version: 18.09.6
Storage Driver: devicemapper
 Pool Name: docker-253:0-67486317-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop1
 Metadata file: /dev/loop2
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 1.339GB
 Data Space Total: 107.4GB
 Data Space Available: 26.1GB
 Metadata Space Used: 2.077MB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.145GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2015-10-14)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.688GiB
Name: xserver1
ID: VXSJ:7YFV:GWBV:2E3E:LAIM:EQ36:MRCD:Q6BG:IRBS:HJXU:K4ZD:E377
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
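私有仓库启动后,可以用 Registry v2 的 HTTP API 做一次简单验证。下面是一个验证思路的示例(仓库地址以本文环境的 192.168.100.11:5000 为例,实际的 curl 命令以注释形式给出):

```shell
# 私有仓库验证思路示例(假设仓库地址为 192.168.100.11:5000)
registry="192.168.100.11:5000"
# Registry v2 API:/v2/_catalog 列出仓库中已有的镜像
catalog_url="http://${registry}/v2/_catalog"
echo "$catalog_url"
# 实际环境中执行: curl "$catalog_url"
# 仓库正常时会返回形如 {"repositories":["nginx"]} 的 JSON
```

仓库为空时返回 {"repositories":[]};推送镜像后再次查询即可看到新镜像名。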
Docker运维
使用xserver1节点,上传nginx_latest.tar到xserver1节点中,然后将该镜像打标签,上传至私有仓库。使用xserver2节点,自行安装docker服务,配置xserver2节点使用xserver1的私有仓库,配置完毕后,在xserver2节点拉取nginx_latest.tar镜像。最后将在xserver2上执行docker images命令返回的结果以文本形式提交到答题框。

[root@xserver1 ~]# cd images
[root@xserver1 images]# docker load -i nginx_latest.tar
Loaded image: nginx:latest
[root@xserver1 images]# docker tag nginx:latest 192.168.100.11:5000/nginx:latest
[root@xserver1 images]# docker push 192.168.100.11:5000/nginx:latest
The push refers to repository [192.168.100.11:5000/nginx]
a89b8f05da3a: Pushed
6eaad811af02: Pushed
b67d19e65ef6: Pushed
latest: digest: sha256:f56b43e9913cef097f246d65119df4eda1d61670f7f2ab720831a01f66f6ff9c size: 948
[root@xserver1 ~]# cp -r Docker /opt/
[root@xserver2 ~]# vi /etc/yum.repos.d/ftp.repo
# 添加
[docker]
name=docker
baseurl=ftp://xserver1/Docker
gpgcheck=0
enabled=1
[root@xserver2 ~]# yum repolist
[root@xserver2 ~]# yum install -y docker-ce
[root@xserver2 ~]# systemctl daemon-reload
[root@xserver2 ~]# systemctl start docker
[root@xserver2 ~]# vi /etc/docker/daemon.json
# 添加
{
  "insecure-registries": ["192.168.100.11:5000"]
}
[root@xserver2 ~]# systemctl restart docker
[root@xserver2 ~]# docker pull 192.168.100.11:5000/nginx:latest
[root@xserver2 ~]# docker images
REPOSITORY                  TAG      IMAGE ID       CREATED       SIZE
192.168.100.11:5000/nginx   latest   540a289bab6c   2 years ago   126MB

Docker管理
假设当前存在docker镜像tomcat:latest,现在将tomcat镜像导出,导出名称为tomcat_images.tar,放在/media目录下,将以上操作命令填入答题框。

[root@xserver2 ~]# docker tag 192.168.100.11:5000/nginx:latest tomcat:latest
[root@xserver2 ~]# docker save tomcat:latest > /media/tomcat_images.tar

Dockerfile编写
使用xserver1节点,新建httpd目录,然后编写Dockerfile文件,要求如下:
1)使用centos:latest镜像作为基础镜像;
2)作者为xiandian;
3)Dockerfile要求删除镜像的yum源,使用当前系统的local.repo源文件;
4)安装http服务;
5)暴露80端口。
编写完毕后,构建名为httpd:v1.0的镜像。完成后将Dockerfile文件和镜像列表信息以文本形式提交到答题框。
[root@xserver1 ~]# mkdir httpd
[root@xserver1 ~]# cd httpd/
[root@xserver1 httpd]# vi Dockerfile
FROM centos:latest
MAINTAINER xiandian
RUN rm -f /etc/yum.repos.d/*
COPY local.repo /etc/yum.repos.d/
RUN yum install -y httpd
EXPOSE 80
[root@xserver1 httpd]# cat local.repo
[centos]
name=centos
baseurl=ftp://192.168.100.11/centos
gpgcheck=0
enabled=1
[root@xserver1 httpd]# docker build -t httpd:v1.0 .
[root@xserver1 httpd]# docker images
REPOSITORY   TAG    IMAGE ID       CREATED          SIZE
httpd        v1.0   4ff5ef091914   19 seconds ago   220MB

Dockerfile编写
使用xserver1节点,新建目录centos-jdk,将提供的jdk-8u141-linux-x64.tar.gz复制到新建的目录,然后编辑Dockerfile文件,文件要求如下:
1. 使用centos:latest基础镜像;
2. 指定作者为xiandian;
3. 新建文件夹/usr/local/java用于存放jdk文件;
4. 将JDK文件复制到镜像内创建的目录并自动解压;
5. 创建软连接:ln -s /usr/local/java/jdk1.8.0_141 /usr/local/java/jdk;
6. 设置环境变量如下
ENV JAVA_HOME /usr/local/java/jdk
ENV JRE_HOME ${JAVA_HOME}/jre
ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib
ENV PATH ${JAVA_HOME}/bin:$PATH
编写完毕后,构建名为centos-jdk的镜像,构建成功后,查看镜像列表。最后将Dockerfile的内容、构建镜像的操作命令、查看镜像列表的命令和返回结果(简要)写到下方。

[root@xserver1 ~]# mkdir centos-jdk
[root@xserver1 ~]# cd centos-jdk/
[root@xserver1 centos-jdk]# cat Dockerfile
FROM centos:latest
MAINTAINER xiandian
WORKDIR /usr/local/java
ADD jdk-8u141-linux-x64.tar.gz /usr/local/java
RUN ln -s /usr/local/java/jdk1.8.0_141 /usr/local/java/jdk
ENV JAVA_HOME /usr/local/java/jdk
ENV JRE_HOME ${JAVA_HOME}/jre
ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib
ENV PATH ${JAVA_HOME}/bin:$PATH
[root@xserver1 centos-jdk]# cp /root/jdk/jdk-8u141-linux-x64.tar.gz .
[root@xserver1 centos-jdk]# docker build -t centos-jdk:v1 .
Sending build context to Docker daemon  185.5MB
Step 1/9 : FROM centos:latest
 ---> 0f3e07c0138f
Step 2/9 : MAINTAINER xiandian
 ---> Using cache
 ---> 380ab4775829
Step 3/9 : WORKDIR /usr/local/java
 ---> Running in 3d042489bc79
Removing intermediate container 3d042489bc79
 ---> 0ca117e514c2
Step 4/9 : ADD jdk-8u141-linux-x64.tar.gz /usr/local/java
 ---> a46b588b978d
Step 5/9 : RUN ln -s /usr/local/java/jdk1.8.0_141 /usr/local/java/jdk
 ---> Running in 5a6643a9a5cb
Removing intermediate container 5a6643a9a5cb
 ---> 0bcbb2fbe73f
Step 6/9 : ENV JAVA_HOME /usr/local/java/jdk
 ---> Running in 7522f805f7d4
Removing intermediate container 7522f805f7d4
 ---> 3d366a42f998
Step 7/9 : ENV JRE_HOME ${JAVA_HOME}/jre
 ---> Running in b550575dc17e
Removing intermediate container b550575dc17e
 ---> e0df108c1aed
Step 8/9 : ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib
 ---> Running in 1b439c9f4e25
Removing intermediate container 1b439c9f4e25
 ---> bb89ab740e4e
Step 9/9 : ENV PATH ${JAVA_HOME}/bin:$PATH
 ---> Running in d18c59ac844d
Removing intermediate container d18c59ac844d
 ---> 59d4683996a7
Successfully built 59d4683996a7
Successfully tagged centos-jdk:v1
[root@xserver1 centos-jdk]# docker images
REPOSITORY   TAG   IMAGE ID       CREATED         SIZE
centos-jdk   v1    3ca44b341427   7 seconds ago   596MB

部署Swarm集群
使用xserver1、xserver2节点,自行配置好网络,安装好docker-ce。部署Swarm集群,并安装Portainer图形化管理工具,部署完成后,使用浏览器登录ip:9000界面,进入Swarm控制台。将curl swarm ip:9000返回的结果以文本形式提交到答题框。

[root@xserver1 ~]# docker swarm init --advertise-addr 192.168.100.11
Swarm initialized: current node (z9kmpvwdd1mcytddlr7qxfqz3) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-31hn156p86fr6eo720rq9mqafckpw1who0i39dxv9m3tbpjd0l-86zun6fgtb3b56iguf91luoge 192.168.100.11:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@xserver2 ~]# docker swarm join --token SWMTKN-1-31hn156p86fr6eo720rq9mqafckpw1who0i39dxv9m3tbpjd0l-86zun6fgtb3b56iguf91luoge 192.168.100.11:2377
This node joined a swarm as a worker.
[root@xserver1 centos-jdk]# docker node ls
ID                          HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
80cisraj17qq1tuawca29a8b1 * xserver1   Ready    Active         Leader           18.09.6
dsot82834vazufdo3x9y79lgj   xserver2   Ready    Active                          18.09.6
[root@xserver1 ~]# docker tag 4cda95efb0e4 portainer/portainer:latest
[root@xserver1 ~]# docker service create \
  --name portainer \
  --publish 9000:9000 \
  --replicas=1 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  --mount type=volume,src=portainer_data,dst=/data \
  portainer/portainer -H unix:///var/run/docker.sock
# 浏览器访问 IP:9000,设置密码
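题目要求用 curl 访问 swarm_ip:9000 验证部署结果。下面是一个验证思路的示例(假设 Swarm manager 地址为本文环境的 192.168.100.11,实际命令以注释形式给出):

```shell
# Portainer 部署验证思路示例(假设 Swarm manager 为 192.168.100.11)
manager_ip="192.168.100.11"
portainer_url="http://${manager_ip}:9000"
echo "$portainer_url"
# 实际环境中执行:
#   docker service ps portainer    # 副本应处于 Running 状态
#   curl -s "$portainer_url"       # 返回 Portainer 登录页 HTML 即为部署成功
```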
2021年12月04日