How do you implement the coordination of Nova (VMware virtualization) and Neutron in OpenStack?

Our company plans to build a private cloud platform using open-source OpenStack as the management layer. Compute and storage will use a hyper-converged architecture: compute runs on VMware, exposed externally through vCenter (VC) APIs, and storage is distributed storage. The network uses a hardware SDN solution, with the SDN controller integrating with OpenStack's Neutron module. With this solution, how do we ensure that the Neutron module and the Nova module work together?

3 answers

Garyy · System Engineer, insurance industry

OVSvApp Solution for ESX-based deployments

When a cloud operator wants to use OpenStack with vSphere using only open source elements, they can currently do so only by relying on nova-network: there is no viable open source reference implementation for ESX deployments that would let the operator leverage the advanced networking capabilities that Neutron provides.

Here we describe a Neutron-supported solution for vSphere deployments in the form of a service VM, the OVSvApp VM, which steers the ESX tenant VMs' traffic through itself. The value-add of this solution is faster deployment on ESX environments, together with minimal effort required to add new OpenStack features such as DVR, LBaaS and VPNaaS.

To address the above challenge, the OVSvApp solution lets customers host VMs on ESX/ESXi hypervisors, with the flexibility of creating port groups dynamically on a Distributed Virtual Switch or Virtual Standard Switch, and then steers their traffic through the OVSvApp VM, which provides the VLAN and VXLAN underlay for tenant VM communication as well as OpenStack-based Security Group features.

OVSvApp Benefits:

Allows vendors to migrate their existing ESX workloads to a cloud.
Allows vendors to deploy ESX-based clouds with native OpenStack, with little or no learning curve.
Allows vendors to leverage the advanced networking capabilities that Neutron provides.
Removes the reliance on nova-network (which is deprecated).
Does not require special licenses from any vendor to deploy, run and manage.
Aligned with the OpenStack Kilo release.
Available upstream in OpenStack Neutron under project “openstack/networking-vsphere” https://github.com/openstack/networking-vsphere

OVSvApp VM:

The OVSvApp solution comprises a service VM called the OVSvApp VM, hosted on each ESXi hypervisor within a cluster, plus two vSphere Distributed Switches (VDS). The OVSvApp VM runs Ubuntu 12.04 LTS or above as its guest operating system, has Open vSwitch 2.1.0 or above installed, and runs an agent called the OVSvApp agent.

For VLAN provisioning, two VDS per datacenter are required; for VXLAN, two VDS per cluster are required. The 1st VDS (named vSphere Distributed Switch 1) needs no uplinks, implying no external network connectivity, but provides connectivity between the tenant VMs and the OVSvApp VM. Each tenant VM is associated with a port group (VLAN). The tenant VMs' data traffic reaches the OVSvApp VM via their respective port groups and then via another port group called the Trunk Port group, with which the OVSvApp VM is associated; this port group is defined with "VLAN type" set to "VLAN Trunking" and "VLAN trunk range" set to the tenant VLAN ranges, exclusive of the management VLAN explained below.

The 2nd VDS (named vSphere Distributed Switch 2) has one or two uplinks and provides management and data connectivity to the OVSvApp VM. The OVSvApp VM is also associated with two other port groups: the Management Port group (defined with "VLAN type" set to "None", or to "VLAN" with a specific VLAN ID, or to "VLAN Trunking" with the trunk range covering the management VLANs) and the Data Port group (defined with "VLAN type" set to "VLAN Trunking" and the trunk range set to the tenant VLAN ranges, exclusive of the management VLAN). The management VLAN and the data VLANs can share the same uplink or use different uplinks, and those uplink ports can be part of the same VDS or of separate VDSes. A trunking port group of this kind can also be created programmatically, as the sketch below illustrates.
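For reference, here is a minimal pyVmomi sketch of creating a "VLAN Trunking" port group on a VDS. The dvs object, port group name and VLAN range are illustrative assumptions, not values taken from the OVSvApp installer:

    # Minimal sketch: create a "VLAN Trunking" port group on a VDS.
    # 'dvs' is assumed to be a vim.DistributedVirtualSwitch already looked
    # up from the vCenter inventory; names and ranges are examples only.
    from pyVmomi import vim

    def create_trunk_portgroup(dvs, name, vlan_ranges):
        trunk = vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec(
            vlanId=[vim.NumericRange(start=lo, end=hi)
                    for lo, hi in vlan_ranges])
        port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            vlan=trunk)
        spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
            name=name,
            type='earlyBinding',
            numPorts=128,
            defaultPortConfig=port_cfg)
        # Returns a vCenter task; real code should wait for completion.
        return dvs.AddDVPortgroup_Task([spec])

    # e.g. tenant VLANs 1000-1999, excluding the management VLAN:
    # create_trunk_portgroup(dvs, 'Trunk-PG', [(1000, 1999)])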

Nova Compute:

This is the nova-compute service for ESX. Only one instance of this service needs to run for the entire ESX deployment (unlike KVM, where a nova-compute service runs on every KVM host). This single instance can run on the OpenStack controller node itself or on any other service node in your cloud. It includes the OVSvApp Nova VCDriver, a VCDriver customized for the OVSvApp solution; a configuration sketch follows.
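A minimal nova.conf sketch for that single service instance. The driver path and the placeholder values are assumptions; check the networking-vsphere documentation for the exact class name in your release:

    [DEFAULT]
    # Customized VCDriver shipped with networking-vsphere
    # (the exact import path may vary by release).
    compute_driver = networking_vsphere.nova.virt.vmwareapi.ovsvapp_vc_driver.OVSvAppVCDriver

    [vmware]
    host_ip = <vCenter IP>
    host_username = <vCenter user>
    host_password = <vCenter password>
    cluster_name = <ESX cluster managed by this nova-compute>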

Neutron Server:

The Neutron server provides the tenant network and port information to the OVSvApp agent. It loads the thin OVSvApp ML2 mechanism driver; a configuration sketch follows.
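A minimal ML2 configuration sketch for the Neutron server. The "ovsvapp" driver alias and the VLAN range are illustrative assumptions; verify the entry-point name against the networking-vsphere release you deploy:

    [ml2]
    type_drivers = vlan,vxlan
    tenant_network_types = vlan
    mechanism_drivers = ovsvapp

    [ml2_type_vlan]
    # Tenant VLAN range carried on the Trunk/Data port groups.
    network_vlan_ranges = physnet1:1000:1999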

The OVSvApp VM runs the OVSvApp L2 agent, which waits for cluster events such as "VM_CREATE", "VM_DELETE" and "VM_UPDATE" from the vCenter Server and acts accordingly. The agent also communicates with the Neutron server to fetch information such as the port details of each VM and the Security Group rules associated with each port, and uses that information to program the Open vSwitch inside the OVSvApp VM with flows. A rough sketch of the event-watching side follows.
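A simplified sketch of that event loop using pyVmomi's event manager. The host name, credentials and handler are placeholders, and the real agent uses Neutron RPC rather than this naive polling:

    # Watch vCenter for VM lifecycle events (simplified).
    import time
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com', user='admin',
                      pwd='secret', disableSslCertValidation=True)
    filter_spec = vim.event.EventFilterSpec(
        eventTypeId=['VmCreatedEvent', 'VmRemovedEvent',
                     'VmReconfiguredEvent'])
    collector = si.content.eventManager.CreateCollectorForEvents(filter_spec)

    while True:
        for event in collector.ReadNextEvents(maxCount=100):
            # On VM create: fetch port and security-group details from
            # Neutron, then program flows on the local Open vSwitch.
            print(type(event).__name__, event.vm.name if event.vm else '')
        time.sleep(5)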

The Open vSwitch instance comprises three OVS bridges: the Security Bridge (br-sec), the Integration Bridge (br-int), and either the Physical Connectivity Bridge (br-ethx) in the VLAN case or the Tunnel Bridge (br-tun) in the VXLAN case.

The Security Bridge (br-sec) receives tenant VM traffic, and Security Group rules are applied there at the VM port level. It contains Open vSwitch flows derived from the tenant's OpenStack Security Group rules, which either allow or block traffic from the tenant VMs. An Open vSwitch based firewall driver implements the Security Group functionality, analogous to the iptables firewall driver used on KVM compute nodes; a flow sketch follows.
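To make that concrete, here is a hypothetical rendering of a single ingress rule (allow TCP/22 from 10.0.0.0/24 to one VM) as a flow on br-sec. The table number, priority and addresses are illustrative; the actual OVSvApp flow pipeline differs:

    # Hypothetical: render one security-group rule as an OVS flow.
    import subprocess

    def allow_tcp_ingress(bridge, vm_ip, src_cidr, dst_port):
        flow = ('table=0,priority=100,tcp,'
                f'nw_src={src_cidr},nw_dst={vm_ip},tp_dst={dst_port},'
                'actions=normal')
        subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])

    # Security-group rule: allow TCP/22 from 10.0.0.0/24 to VM 10.0.0.5.
    allow_tcp_ingress('br-sec', '10.0.0.5', '10.0.0.0/24', 22)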

The Integration Bridge (br-int) connects the Security Bridge to the Physical Connectivity Bridge or the Tunnel Bridge. The reason for having an Integration Bridge is to reuse the existing OpenStack Open vSwitch L2 agent functionality as much as possible.

Physical Connectivity Bridge (br-ethx) provides connectivity (VLAN provisioning) to the physical network interface cards.

Tunnel Bridge (br-tun) is used for establishing VXLAN tunnels to forward tenant traffic on the network.
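The bridge topology itself can be reproduced with plain ovs-vsctl commands; in the following sketch the patch-port names and the peer VTEP address are made up for illustration:

    # Wire br-int to br-tun and add one VXLAN tunnel port (illustrative).
    import subprocess

    def vsctl(*args):
        subprocess.check_call(['ovs-vsctl'] + list(args))

    for br in ('br-sec', 'br-int', 'br-tun'):
        vsctl('--may-exist', 'add-br', br)

    # Patch ports between br-int and br-tun (br-sec is wired similarly).
    vsctl('--may-exist', 'add-port', 'br-int', 'patch-tun', '--',
          'set', 'interface', 'patch-tun', 'type=patch',
          'options:peer=patch-int')
    vsctl('--may-exist', 'add-port', 'br-tun', 'patch-int', '--',
          'set', 'interface', 'patch-int', 'type=patch',
          'options:peer=patch-tun')

    # VXLAN tunnel port to a peer VTEP (address is a placeholder).
    vsctl('--may-exist', 'add-port', 'br-tun', 'vxlan-1', '--',
          'set', 'interface', 'vxlan-1', 'type=vxlan',
          'options:remote_ip=192.0.2.10', 'options:key=flow')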

Insurance · 2018-09-20
Henry2017 · R&D Engineer, financial industry

By "coordination" you mean having VMware consume the networks that Neutron provides, right?
This is entirely achievable with OpenStack managing the environment: Nova's backend driver uses VMware while networks are still allocated by Neutron, and nothing is affected. As for the SDN, it just needs to be implemented as a plugin driver in Neutron; see the sketch below.
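Concretely, that usually means listing the SDN vendor's ML2 mechanism driver in the Neutron server configuration. The name "vendor_sdn" below is a placeholder for whatever driver your SDN controller ships with:

    [ml2]
    # "vendor_sdn" is a placeholder for the SDN vendor's mechanism driver,
    # listed alongside whatever compute-side driver the deployment uses.
    mechanism_drivers = vendor_sdn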

Financial services · 2018-09-21
大天使之剑 · Pre-sales Technical Support, Caicloud (Hangzhou)

Pay attention to the driver settings in the configuration files of the relevant components,
and test that the behaviour meets expectations.

Internet services · 2018-09-18
