ONAP Casablanca: The Story Behind the Code


Congratulations to the entire ONAP community and extended ecosystem on the availability of ONAP Casablanca, the project’s third release! We have come a long way since ONAP’s first code release, Amsterdam. With a thriving community of more than 492 developers hailing from 31 organizations, ONAP’s growing global membership has come together to deliver an even broader set of features and enhancements that make the platform more suitable than ever for global deployment.

Read on for more details on what’s in the release, as well as a brief Q&A with ONAP’s new Technical Steering Committee (TSC) Chair, Catherine Lefevre.

New Features & Functionality
Casablanca introduces new functionality with two use cases important to the evolution of networking: 5G and CCVPN (Cross Domain and Cross Layer VPN).

  • The 5G blueprint is a multi-release effort, with Casablanca introducing the first set of capabilities around PNF integration, edge automation, real-time analytics, network slicing, data modeling, homing, scaling, and network optimization.
  • CCVPN demonstrates how to provide enterprise services across operators with the use of MEF APIs. CCVPN was first introduced onstage during ONS Europe with the help of China Mobile, Vodafone, and Huawei. (You can read about the ONS demo here)

Casablanca also includes new features, architectural changes, deployability enhancements (see the “7 Dimensions of Deployability”) and bug fixes. Some of my favorite highlights include:

  • The design time environment includes two new dashboards to simplify design activities.
  • The runtime environment includes new lifecycle management functions in both the Service Orchestrator (SO) and its three controllers, expanded hardware platform awareness (HPA) to improve performance, geo-redundancy, support for ETSI NFV-SOL003 for VNFM compatibility, MultiCloud enhancements, and edge cloud onboarding.
  • Additionally, the initial integration with the PNDA project, several new collectors, policy engine updates, and enhancements to the Holmes alarm correlation engine, boost ONAP’s service assurance capabilities.

Community Growth
We’re also very excited to see strong growth in the ONAP community. With an expanded community, we’ve been able to develop more features in less time! For comparison’s sake, the number of contributing CSPs, vendors, and other organizations has increased to 31, up from 24, and Casablanca includes contributions from 490 developers, up from 452 for the Beijing release. Equally important, the community has expanded beyond technical concerns to collaborate with other open source projects such as OPNFV, CNCF, and PNDA, as well as standards communities such as ETSI, MEF, and TM Forum.

We also plan to integrate more closely with OPNFV on the compliance & verification program, which will extend to VNFs following Casablanca. The program will allow VNF vendors to test their products using a standardized test suite and receive a verification badge.

To get more color on how the community came together for this release, I sat down with ONAP TSC Chair, Catherine Lefevre. Here’s what she had to say:

Q&A with Catherine Lefevre, ONAP TSC Chair

Can you share any anecdotes about the release and community coming together? We heard rumblings of a great conversation in the LFN booth at ONS Europe. Please elaborate.
While some of our ONAP members were engaged in talks and demos at ONS Europe, I took the opportunity to organize several meetings. The purpose was to collect feedback from the community to better understand their expectations of the new ONAP TSC. The turnout was larger than anticipated, and the discussion was really energizing, highlighting desired improvements in technical leadership, communication, documentation, consistency across the platform, and much more. This feedback became part of our TSC roadmap. An “ONAP TSC” project was created in JIRA (https://jira.onap.org/projects/TSC/), allowing us to track our TSC action items and drive change and accountability based on the community’s feedback. We wanted to manage the TSC the same way we manage any new ONAP feature.

If you had to summarize the release in 3-5 sentences, how would you describe it?
The goal of Casablanca was to consolidate our projects’ foundation while evolving toward modularity and aligning with industry standards such as MEF 3.0 and ETSI NFV-SOL003.

On the deployment side, we made great progress in streamlining the ONAP installation using Kubernetes, adding a broader range of physical/virtual storage options, enhancing backup/restore, and more.

On the design time side, two additional artifacts were added to continue driving the unified design palette and help product managers, VNF owners, and anyone else interested in building on ONAP: DCAE Design Studio for control loop workflows and Workflow Designer for orchestration workflows.

On the run time side, we increased the service assurance footprint by adding new High Volume VES (VNF Event Streaming) collectors and performed initial integration with the Linux Foundation PNDA project in DCAE (Data Collection, Analytics and Events). PNDA containerization, application onboarding, and deployment are currently targeted for Dublin. We also developed new functionality to support physical network functions (PNFs) and audit capabilities through A&AI (Active & Available Inventory).

On the security side, we integrated many of the ONAP components with the Application Authorization Framework (AAF) and increased CII badging compliance.

Casablanca provides blueprint updates to 5G and CCVPN as well as a sneak peek of compliance and verification updates related to VNF testing. Can you explain how this will benefit the ecosystem? Why are these developments significant?

The ONAP 5G and Cross Domain and Cross Layer VPN (CCVPN) use cases are definitely the cornerstones of the Casablanca release.

The ONAP CCVPN Use case exercises many aspects of ONAP by building a high-bandwidth, flat and super-speed Optical Transport Network between two carriers and across multiple domains.

The ONAP 5G blueprint starts to address two 5G challenges: network optimization and the extension of zero-touch orchestration/automation to radio access networks.

These use cases, 5G and CCVPN, demonstrate the willingness of carriers to work together on common requirements. ONAP is also a fantastic software-defined networking platform for academic research and proofs of concept on complex topics, benefiting from a wealth of technical expertise and highlighting the community’s desire to build a future together.

What makes this release unique, and what is your favorite thing about Casablanca or the way in which the community came together?
The acceleration of the ONAP community’s diversity, highlighted by the rise of VNF and third-party vendors contributing to the ONAP Casablanca release. They also now have a better understanding of the source code and are able to develop on it themselves.

The scope of the Casablanca release was substantially larger than that of the Beijing release, while our testing capacity did not increase proportionally. The last three weeks were challenging, but we had a great Integration team with a “never give up” mindset and a group of Project Technical Leads who acted as a single team. This is really my favorite part: a large team coming together, focused on a shared goal, working collaboratively to achieve the impossible. The power of team spirit!

Now that Casablanca is out the door, what are you anticipating for ONAP’s next release?
We plan to have a minor release in early February 2019 to address some security and code enhancements.

The scope of our first 2019 major release, Dublin, is not yet finalized; however, the ONAP TSC has identified several guiding principles they would like to put in place:

  • Pursue our Continuous integration / Continuous Deployment Journey to ensure development issues are addressed quickly by leveraging more automation
  • Security by Design – re-enforcement of security awareness at each milestone of the release; not only at code freeze.
  • Document as You Code – dedicated focus on improving the documentation all along the release cycle.

Stay tuned for more in the coming weeks…

Join us to help shape the next ONAP release!

Dublin Release Developer Design Forum: The next design forum will be conducted jointly with the OPNFV Gambia Plugfest in Paris, France from January 8-11, 2019. The event, as the name suggests, will focus on Dublin release planning and explore various synergies with OPNFV. Both members and non-members are welcome to attend. More info here: https://events.linuxfoundation.org/events/onap-ddf-opnfv-plugfest/.

vCPE Blueprint in ONAP


This post originally appeared on Aarna Networks. Republished with permission.

This blog explains deployment details (using TOSCA/HEAT templates) of some of the important services in the vCPE blueprint in ONAP. It assumes the reader is familiar with the vCPE use case, for which several blogs and videos are available, including Aarna Networks’ free book, ONAP Demystified, and the ONAP Confluence page.

The following block diagram provides an overview of the end to end service of vCPE, and how various constituent services are linked together.

The vCPE end-to-end use case comprises several services (some of which are optional and can be replaced by equivalent services already existing in a CSP’s environment), each of which contains one or more VNFs and/or VLs:

  1. vCPE General Infra Service

  2. vG MUX Infra Service

  3. vBNG Service

  4. vBRG Emulation

  5. vCPE Customer Service

This blog shows details of some of these services, and their associated model templates.

vCPE General Infra

This service consists of the vDHCP, vAAA, and vDNS VNFs connected by two virtual links (VLs), cpe_signal and cpe_public, both of which are OpenStack Neutron networks. The cpe_public link is also connected to a web server.

Now, let us examine the Infra Service in SDC Catalog for its constituent components and their details.

The composition of this service, showing the virtual links (VLs) and the VFs that make up the VNFs, is as follows:

The CSAR file for this service contains the following details:

The service is modeled (in TOSCA and HEAT templates) as follows:

Notice that the two networks (CPE_PUBLIC and CPE_SIGNAL) are modeled in HEAT, as is the VF module that contains VMs for all the VNFs (vDHCP, vAAA, and vDNS + DHCP). The TOSCA template includes node_templates for all the HEAT templates. The TOSCA model definition file for this service can be found here.
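Since a CSAR is just a zip archive bundling the TOSCA service template with its artifacts, its contents can be inspected with standard tooling. A minimal Python sketch; the member names below are illustrative, built in memory rather than taken from a real SDC export:

```python
import io
import zipfile

# Build a toy CSAR in memory with the typical layout: TOSCA metadata,
# service-template definitions, and the HEAT artifacts referenced by them.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("TOSCA-Metadata/TOSCA.meta", "...")
    z.writestr("Definitions/service-template.yml", "...")
    z.writestr("Artifacts/base_vcpe_infra.yaml", "...")
    z.writestr("Artifacts/base_vcpe_infra.env", "...")

# Listing a CSAR's contents is just listing the zip members.
with zipfile.ZipFile(buf) as z:
    members = z.namelist()
print(members)
```

The same two lines at the end work unchanged on a CSAR exported from SDC: open the file with `zipfile.ZipFile` and call `namelist()`.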

Let us take a closer look at the Environment file (base_vcpe_infra.env) of this service.

parameters:
  cloud_env: "openstack"
  cpe_public_net_cidr: "10.2.0.0/24"
  cpe_public_net_id: "zdfw1cpe01_public"
  cpe_public_subnet_id: "zdfw1cpe01_sub_public"
  cpe_signal_net_cidr: "10.4.0.0/24"
  cpe_signal_net_id: "zdfw1cpe01_private"
  cpe_signal_subnet_id: "zdfw1cpe01_sub_private"
  dcae_collector_ip: "10.0.4.1"
  dcae_collector_port: "8081"
  demo_artifacts_version: "1.2.0"
  install_script_version: "1.2.0-SNAPSHOT"
  key_name: "vaaa_key"
  mr_ip_addr: "10.0.11.1"
  onap_private_net_cidr: "10.0.0.0/16"
  onap_private_net_id: "ext-net"
  onap_private_subnet_id: "ext-net"
  pub_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQXYJYYi3/OUZXUiCYWdtc7K0m5C0dJKVxPG0eI8EWZrEHYdfYe6WoTSDJCww+1qlBSpA5ac/Ba4Wn9vh+lR1vtUKkyIC/nrYb90ReUd385Glkgzrfh5HdR5y5S2cL/Frh86lAn9r6b3iWTJD8wBwXFyoe1S2nMTOIuG4RPNvfmyCTYVh8XTCCE8HPvh3xv2r4egawG1P4Q4UDwk+hDBXThY2KS8M5/8EMyxHV0ImpLbpYCTBA6KYDIRtqmgS6iKyy8v2D1aSY5mc9J0T5t9S2Gv+VZQNWQDDKNFnxqYaAo1uEoq/i1q63XC5AD3ckXb2VT6dp23BQMdDfbHyUWfJN"
  public_net_id: "2da53890-5b54-4d29-81f7-3185110636ed"
  repo_url_artifacts: "https://nexus.onap.org/content/groups/staging"
  repo_url_blob: "https://nexus.onap.org/content/sites/raw"
  vaaa_name_0: "zdcpe1cpe01aaa01"
  vaaa_private_ip_0: "10.4.0.4"
  vaaa_private_ip_1: "10.0.101.2"
  vcpe_flavor_name: "onap.medium"
  vcpe_image_name: "ubuntu-16.04-daily"
  vdhcp_name_0: "zdcpe1cpe01dhcp01"
  vdhcp_private_ip_0: "10.4.0.1"
  vdhcp_private_ip_1: "10.0.101.1"
  vdns_name_0: "zdcpe1cpe01dns01"
  vdns_private_ip_0: "10.2.0.1"
  vdns_private_ip_1: "10.0.101.3"
  vf_module_id: "vCPE_Intrastructure"
  vnf_id: "vCPE_Infrastructure_demo_app"
  vweb_name_0: "zdcpe1cpe01web01"
  vweb_private_ip_0: "10.2.0.10"
  vweb_private_ip_1: "10.0.101.40"

Note the details about the constituent VNFs (vAAA, vDHCP, vDNS, and vWeb server), including their IP addresses and the network addresses of the VLs these VNFs are connected to (cpe_signal and cpe_public). For example, vDHCP and vAAA are connected to the cpe_signal network (10.4.x.x), while vDNS and the vWeb server are connected to the cpe_public network (10.2.x.x). The DCAE collector service is reachable at a 10.0.4.x address.
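The VNF-to-network mapping described above can be derived mechanically from these parameters. A small Python sketch, with the subnet CIDRs and first private IPs copied from the environment file:

```python
import ipaddress

# Subnets of the two virtual links, from cpe_signal_net_cidr and
# cpe_public_net_cidr in base_vcpe_infra.env.
networks = {
    "cpe_signal": ipaddress.ip_network("10.4.0.0/24"),
    "cpe_public": ipaddress.ip_network("10.2.0.0/24"),
}

# First private IP of each VNF, from the *_private_ip_0 parameters.
vnf_ips = {
    "vAAA": "10.4.0.4",
    "vDHCP": "10.4.0.1",
    "vDNS": "10.2.0.1",
    "vWeb": "10.2.0.10",
}

# Map each VNF to the virtual link whose subnet contains its address.
attachment = {
    vnf: next(name for name, net in networks.items()
              if ipaddress.ip_address(ip) in net)
    for vnf, ip in vnf_ips.items()
}
print(attachment)
```

Running this reproduces the wiring stated in the text: vAAA and vDHCP sit on cpe_signal, while vDNS and the vWeb server sit on cpe_public.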

Now, let us look at some of the interesting fields of the HEAT template (base_vcpe_infra.yaml) for this service. It contains details about all the VNFs that are part of the service and how they will be instantiated using HEAT. A complete copy of the HEAT template can be found here.

heat_template_version: 2013-05-23

description: Heat template to deploy vCPE Infrastructure elements (vAAA, vDHCP, vDNS_DHCP, webServer)

##############
#            #
# PARAMETERS #
#            #
##############

parameters:
  vcpe_image_name:
    type: string
    label: Image name or ID
    description: Image to be used for compute instance
    …
  cpe_signal_net_id:
    type: string
    label: vAAA private network name or ID
    description: Private network that connects vAAA with vDNSs
  …
  cpe_public_net_id:
    type: string
    label: vCPE Public network (emulates internet) name or ID
    description: Private network that connects vGW to emulated internet
  …
  vaaa_private_ip_0:
    type: string
    label: vAAA private IP address towards the CPE_SIGNAL private network
    description: Private IP address that is assigned to the vAAA to communicate with the vCPE components
  …
  vdns_private_ip_0:
    type: string
    label: vDNS private IP address towards the CPE_PUBLIC private network
  …
  vdhcp_private_ip_0:
    type: string
    label: vDHCP private IP address towards the CPE_SIGNAL private network
    description: Private IP address that is assigned to the vDHCP to communicate with the vCPE components
  …
  vweb_private_ip_0:
    type: string
    label: vWEB private IP address towards the CPE_PUBLIC private network
    description: Private IP address that is assigned to the vWEB to communicate with the vGWs
  …
  dcae_collector_ip:
    type: string
    label: DCAE collector IP address
    description: IP address of the DCAE collector
  …

#############
#           #
# RESOURCES #
#           #
#############

resources:
  …
  # Virtual AAA server Instantiation
  vaaa_private_0_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: cpe_signal_net_id }
      fixed_ips: [{"subnet": { get_param: cpe_signal_subnet_id }, "ip_address": { get_param: vaaa_private_ip_0 }}]
  …
  vaaa_0:
    type: OS::Nova::Server
    properties:
      …
          template: |
            #!/bin/bash
            # Create configuration files
            mkdir /opt/config
            echo "__dcae_collector_ip__" > /opt/config/dcae_collector_ip.txt
            …
            # Download and run install script
            curl -k __repo_url_blob__/org.onap.demo/vnfs/vcpe/__install_script_version__/v_aaa_install.sh -o /opt/v_aaa_install.sh
            cd /opt
            chmod +x v_aaa_install.sh
            ./v_aaa_install.sh

Note the details about the various VNFs (vAAA, vDHCP, vDNS, and vWebServer) and the VLs that are part of the infrastructure service: the Neutron networks cpe_signal, which connects the vAAA and vDNS VNFs, and cpe_public, which connects the vGW service to the emulated internet. Also note the vAAA instantiation, the DCAE collector IP address, and the installation script (v_aaa_install.sh) in the vAAA VNF. The other VNFs (vDNS, vDHCP, and vWebServer) have been left out, but you can refer to the link above for their details in the HEAT template file.
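The __name__ placeholders in the cloud-init script above are filled by HEAT's str_replace intrinsic at stack-creation time. A simplified Python re-implementation of that substitution, purely for illustration (this is not HEAT itself; the parameter values are taken from the environment file shown earlier):

```python
# Replace each __key__ token in a template string with the matching
# parameter value, in the spirit of HEAT's str_replace.
def render(template: str, params: dict) -> str:
    for key, value in params.items():
        template = template.replace(f"__{key}__", value)
    return template

# The download line from the vAAA cloud-init script, before substitution.
script = (
    "curl -k __repo_url_blob__/org.onap.demo/vnfs/vcpe/"
    "__install_script_version__/v_aaa_install.sh -o /opt/v_aaa_install.sh"
)
rendered = render(script, {
    "repo_url_blob": "https://nexus.onap.org/content/sites/raw",
    "install_script_version": "1.2.0-SNAPSHOT",
})
print(rendered)
```

After substitution, the VM's cloud-init pulls v_aaa_install.sh from the Nexus URL configured in base_vcpe_infra.env.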

In the next blog, we will examine other Services and their details.

In the meantime, check out our latest webinar, “What’s New in ONAP Beijing,” or request ONAP training if you or your team needs to learn more.

ONAP vFW Blueprint Across Two Regions


This post originally appeared on Aarna Networks. Republished with permission.

In the last blog we talked about how to use a public OpenStack cloud such as VEXXHOST as the NFVI/VIM layer for the ONAP vFW blueprint along with a containerized version of ONAP orchestrated by Kubernetes.

As we discussed, in reality, the traffic source and the vFW VNF are unlikely to be in the same cloud. In this blog, we will briefly discuss how the vFW blueprint can span two different VEXXHOST tenants. This is not quite the same as two different cloud regions, but it is a pretty close simulation.

The two VNFs will be placed as follows:

  • vFW_PG (packet generator) on VEXXHOST Tenant1

  • vFW_SINC (compound VNF that consists of the vFW VNF and a sink VNF to receive packets) on VEXXHOST Tenant2

With the ONAP infrastructure already taken care of, here are the steps to connect ONAP to VEXXHOST. Please follow the steps from the “Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP” blog to register both tenants as two regions in ONAP. Next:

  1. Create an account on VEXXHOST with 2 different tenants.

  2. If registering VEXXHOST into A&AI using the ESR UI, keep the password length under 20 characters.

  3. On Tenant1, manually create the OAM and unprotected_private networks, with different subnets than on Tenant2.

  4. On Tenant2, create an OAM network using the VEXXHOST cloud Horizon dashboard.

  5. Add security rules to allow ingress ICMP, SSH, and all the other required ports, along with IPv6, on both tenants.

  6. Edit the base_vfw.env and base_vpkg.env VNF descriptor files to configure them correctly based on the respective Tenants.

  7. Copy the above parameters into a text editor to use for subsequent A&AI registration, SDN-C preload, and APP-C⇔vFW_PG VNF netconf connection.
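Step 3's requirement that Tenant1 and Tenant2 use different subnets is easy to get wrong by hand. A quick Python check; the CIDRs below are illustrative placeholders, not values mandated by the blueprint:

```python
import ipaddress

# Example per-tenant network plans (placeholder CIDRs).
tenant1 = {"oam": "10.10.0.0/24", "unprotected_private": "192.168.10.0/24"}
tenant2 = {"oam": "10.20.0.0/24", "unprotected_private": "192.168.20.0/24"}

def overlaps(a: dict, b: dict) -> list:
    """Return every pair of network names whose CIDRs overlap."""
    clashes = []
    for n1, c1 in a.items():
        for n2, c2 in b.items():
            if ipaddress.ip_network(c1).overlaps(ipaddress.ip_network(c2)):
                clashes.append((n1, n2))
    return clashes

print(overlaps(tenant1, tenant2))  # an empty list means the plans are safe
```

An empty result confirms no subnet on Tenant1 collides with one on Tenant2; any tuple returned names the clashing pair.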

Now follow the steps from the vFW wiki that involve:

  1. SDC designer role: Create vendor license model

  2. SDC designer/tester role: Onboard and test VNFs (i.e., vendor software products, or VSPs)

  3. SDC designer role: Design network service

  4. SDC tester role: Test network service

  5. SDC governor role: Approve network service

  6. SDC ops role: Distribute network service

  7. VID: Instantiate network service

  8. VID: Add VNFs to network service

  9. SDN-C preload: Configure runtime parameters (for us design-time & run-time parameters are the same); preload vFW SINC on Tenant2 and vFW PG on Tenant1

  10. VID: Add VFs to network service — this step orchestrates networks and VNFs onto OpenStack

Upon completion of these steps, you should be able to go to the VEXXHOST Horizon GUI and see the VNFs coming up. Give them ~15 minutes to boot and another ~15 minutes to be fully configured. See the screenshots below:

vFW Network Topology on Tenant2

vFW Network Topology on Tenant1

VNF SINC Stack Orchestrated on OpenStack Tenant2

VNF PG Stack Orchestrated on OpenStack Tenant1

Did you try this out? How did it go? We look forward to your feedback. In the meantime, if you are looking for ONAP training, professional services, or development distros (basically an easy way to try out ONAP in under 1 hour), please contact us.

Useful links: ONAP Wiki, vFWCL Wiki, Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP

Debugging ONAP OOM Failures


Originally published on Aarna Networks, republished with permission.

On May 21, Amar Kapadia and I conducted a webinar on the topic of “Debugging OOM Failures.”

We started off by giving some context. Our objective was to develop a lightweight, repeatable lab environment for ONAP training on Google Cloud. We also plan to offer this image to developers who need a sandbox environment. To accomplish this, we used ONAP Amsterdam along with OPNFV Euphrates. ONAP was installed using OOM, which uses Kubernetes and Helm. All of this software was installed on one VM on Google Cloud.

For most users, issues that pop up once in a while are OK. However, for us, the deployment process needed to be consistent and repeatable. For this reason, we had to debug every intermittent failure and develop a single-click workaround script.

The webinar then covered the 7 issues we faced, how we debugged them, and what the workarounds were. The issues were as follows; other than failure #7, they were all intermittent:

  1. AAI containers failed to transition to Running state

  2. SDC UI is not getting loaded

  3. SDC Service Distribution Error

  4. VID Service Deployment Error

  5. VID ADD VNF Error

  6. SDNC User creation failed

  7. Robot init_robot failed with missing attributes

If you are curious to learn more, check out the slide deck or video links above. Additionally if you have ONAP training, PoC needs, or simply feel like trying out the VM image on GCP, feel free to contact us. We have a whole portfolio of training, services and product offerings.
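Most of the workarounds for the intermittent failures above reduce to waiting until every pod is Running and Ready before moving on. A simplified, self-contained sketch of that decision logic; the pod listing is illustrative, not captured from a real deployment:

```python
# Decide, from `kubectl get pods`-style output, which pods are not yet
# healthy (either not Running or not fully Ready).
SAMPLE = """\
NAME            READY  STATUS             RESTARTS
aai-service-1   1/1    Running            0
aai-resources   0/1    CrashLoopBackOff   4
sdc-fe          2/2    Running            1
"""

def not_ready(kubectl_output: str) -> list:
    bad = []
    for line in kubectl_output.splitlines()[1:]:  # skip the header row
        name, ready, status = line.split()[:3]
        done, total = ready.split("/")
        if status != "Running" or done != total:
            bad.append(name)
    return bad

print(not_ready(SAMPLE))
```

A workaround script would loop on this check (feeding it live `kubectl get pods` output) and restart or wait on the pods it returns.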

Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP (Part 2/2)


Originally published on Aarna Networks, republished with permission.

In the previous installment of this two-part blog series, we looked at why NFV clouds are likely to be highly distributed and why the management and orchestration software stack needs to support these numerous clouds. ONAP is one such network automation software stack. We saw the first three steps of what it takes to register multiple OpenStack cloud regions in ONAP for the vFW use-case (other use cases might need slight tweaking).

Let’s pick up where we left off and look at the remaining steps 4-7:

Step 4: Associate Cloud Region object(s) with a subscriber’s service subscription
With this association, this cloud region will be populated into the dropdown list of available regions for deploying VNF/VF-Modules from VID.

Example script to associate the cloud region “CloudOwner/Region1x” with the subscriber “Demonstration2” that subscribes to the service “vFWCL”:

curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/business/customers/customer/Demonstration2/service-subscriptions/service-subscription/vFWCL/relationship-list/relationship \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: 4e6e55b9-4d84-6429-67c4-8c96144d1c52' \
  -H 'real-time: true' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999' \
  -d '{
    "related-to": "tenant",
    "related-link": "/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/tenants/tenant/<Project ID>",
    "relationship-data": [
        {
            "relationship-key": "cloud-region.cloud-owner",
            "relationship-value": "CloudOwner"
        },
        {
            "relationship-key": "cloud-region.cloud-region-id",
            "relationship-value": "<Cloud Region - should match with physical infra>"
        },
        {
            "relationship-key": "tenant.tenant-id",
            "relationship-value": "<Project ID>"
        }
    ],
    "related-to-property": [
        {
            "property-key": "tenant.tenant-name",
            "property-value": "<OpenStack User Name>"
        }
    ]
}'
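The same association can be built programmatically, which avoids hand-editing the JSON for every region/tenant pair. A Python sketch that only constructs the request body; sending it (e.g. with an HTTP PUT to the A&AI URL shown above) is left to the deployment environment, and the tenant values below are placeholders:

```python
import json

# Build the Step 4 relationship payload for a given owner/region/tenant.
def subscription_relationship(owner, region, tenant_id, tenant_name):
    link = ("/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/"
            f"{owner}/{region}/tenants/tenant/{tenant_id}")
    return {
        "related-to": "tenant",
        "related-link": link,
        "relationship-data": [
            {"relationship-key": "cloud-region.cloud-owner",
             "relationship-value": owner},
            {"relationship-key": "cloud-region.cloud-region-id",
             "relationship-value": region},
            {"relationship-key": "tenant.tenant-id",
             "relationship-value": tenant_id},
        ],
        "related-to-property": [
            {"property-key": "tenant.tenant-name",
             "property-value": tenant_name},
        ],
    }

body = subscription_relationship("CloudOwner", "Region1x",
                                 "example-tenant-id", "example-tenant")
print(json.dumps(body, indent=2))
```

Generating the body this way keeps the three relationship-data keys consistent with the path in related-link, which is the part most often mistyped when editing the curl command by hand.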

Step 5: Add Availability Zones to AAI
Now we need to add an availability zone to the region we created in step 3.

Example script to add an OpenStack availability zone name, e.g. ‘nova’, to Region1x:

curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/availability-zones/availability-zone/<OpenStack ZoneName> \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: 4e6e55b9-4d84-6429-67c4-8c96144d1c52' \
  -H 'real-time: true' \
  -H 'x-fromappid: AAI' \
  -H 'x-transactionid: 9999' \
  -d '{
    "availability-zone-name": "<OpenStack ZoneName>",
    "hypervisor-type": "<Hypervisor>",
    "operational-status": "Active"
}'

Step 6: Register VIM/Cloud instance with SO
SO does not utilize the cloud region representation from A&AI. It stores information about the VIM/Cloud instances inside the container (in the case of OOM) as a configuration file. To add a VIM/Cloud instance to SO, log into the SO service container and then update the configuration file “/etc/mso/config.d/cloud_config.json” as needed.

Example steps to add VIM/cloud instance info to SO:

# Procedure for mso_pass (encrypted)
# Go to the below directory on the Kubernetes server
cd /<shared nfs folder>/onap/mso/mso

# Then run:
$ MSO_ENCRYPTION_KEY=$(cat encryption.key)
$ echo -n "your password in cleartext" | openssl aes-128-ecb -e -K $MSO_ENCRYPTION_KEY -nosalt | xxd -c 256 -p

# Take the output and put it against the mso_pass value in the JSON file
# below. Template for adding a new cloud site and the associated identity
# service:
$ sudo docker exec -it <mso container id> bash
root@mso:/# vi /etc/mso/config.d/mso_config.json

"mso-po-adapter-config":
    {
      "identity_services":
      [
        {
          "dcp_clli1x": "DEFAULT_KEYSTONE_Region1x",
          "identity_url": "<keystone auth URL https://<IP or Name>/v2.0>",
          "mso_id": "<OpenStack User Name>",
          "mso_pass": "<created above>",
          "admin_tenant": "service",
          "member_role": "admin",
          "tenant_metadata": "true",
          "identity_server_type": "KEYSTONE",
          "identity_authentication_type": "USERNAME_PASSWORD"
        },
      ],
      "cloud_sites":
      [
        {
          "id": "Region1x",
          "aic_version": "2.5",
          "lcp_clli": "Region1x",
          "region_id": "<OpenStack Region>",
          "identity_service_id": "DEFAULT_KEYSTONE_Region1x"
        },
      ],

# Save the changes and restart the MSO container
# Check the new config:
http://<mso-vm-ip>:8080/networks/rest/cloud/showConfig
# The output below should match the parameters used in the curl commands

# Sample output:
Cloud Sites:
CloudSite: id=Region11, regionId=RegionOne, identityServiceId=DEFAULT_KEYSTONE_Region11, aic_version=2.5, clli=Region11
CloudSite: id=Region12, regionId=RegionOne, identityServiceId=DEFAULT_KEYSTONE_Region12, aic_version=2.5, clli=Region12

Cloud Identity Services:
Cloud Identity Service: id=DEFAULT_KEYSTONE_Region11, identityUrl=<URL>/v2.0, msoId=<OpenStackUserName1>, adminTenant=service, memberRole=admin, tenantMetadata=true, identityServerType=KEYSTONE, identityAuthenticationType=USERNAME_PASSWORD
Cloud Identity Service: id=DEFAULT_KEYSTONE_Region12, identityUrl=https://auth.vexxhost.net/v2.0, msoId=<OpenStackUserName2>, adminTenant=service, memberRole=admin, tenantMetadata=true, identityServerType=KEYSTONE, identityAuthenticationType=USERNAME_PASSWORD
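Editing the JSON inside the container by hand is error-prone. A Python sketch of the same change, assuming only the key names visible in the excerpt above (the region, URL, and credential values are placeholders):

```python
import json

# Append a new cloud site plus its identity service to an SO config dict,
# mirroring the "mso-po-adapter-config" layout from the excerpt above.
def add_cloud_site(cfg: dict, region: str, keystone_url: str,
                   user: str, encrypted_pass: str) -> dict:
    adapter = cfg.setdefault("mso-po-adapter-config", {})
    adapter.setdefault("identity_services", []).append({
        "dcp_clli1x": f"DEFAULT_KEYSTONE_{region}",
        "identity_url": keystone_url,
        "mso_id": user,
        "mso_pass": encrypted_pass,  # output of the openssl step above
        "admin_tenant": "service",
        "member_role": "admin",
        "tenant_metadata": "true",
        "identity_server_type": "KEYSTONE",
        "identity_authentication_type": "USERNAME_PASSWORD",
    })
    adapter.setdefault("cloud_sites", []).append({
        "id": region,
        "aic_version": "2.5",
        "lcp_clli": region,
        "region_id": "RegionOne",  # placeholder OpenStack region name
        "identity_service_id": f"DEFAULT_KEYSTONE_{region}",
    })
    return cfg

cfg = add_cloud_site({}, "Region1x", "https://keystone.example/v2.0",
                     "osuser", "<encrypted>")
print(json.dumps(cfg, indent=2))
```

In practice you would load the existing file with `json.load`, apply the helper, and write it back before restarting the MSO container; the helper guarantees the cloud site and its identity service stay linked by the same DEFAULT_KEYSTONE_ id.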

Step 7: Change Robot service to operate with the VIM/Cloud instance
When using OOM, by default the Robot service supports the pre-populated cloud region whose cloud-owner is “CloudOwner” and whose cloud-region-id is specified via the parameter “openstack_region” during deployment of the ONAP instance through the OOM configuration files. This cloud region information can be updated in the file “/share/config/vm_properties.py” inside the Robot container. Appropriate relationships between cloud regions and services need to be set up in the same file for the Robot service tests to pass.

Note:  Robot framework does not rely on Multi-VIM/ESR.

If you have done all 7 steps correctly, Robot tests should pass and both regions should appear in the VID GUI.

If you liked (or disliked) this blog, we’d love to hear from you. Please let us know. Also, if you are looking for ONAP training, professional services, or development distros (basically an easy way to try out ONAP on Google Cloud in under 1 hour), please contact us. Professional services include ONAP deployment, network service design/deployment, VNF onboarding, custom training, etc.

References:

ONAP Wiki

vFWCL Wiki

Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP (Part 1/2)


Originally published on Aarna Networks, republished with permission.

NFV clouds are going to be distributed by their very nature. VNFs and applications will be distributed as per the figure below: horizontally across edge (access), regional datacenter (core), and hyperscale datacenters (which could be public clouds), or vertically across multiple regional or hyperscale datacenters.

Distributed NFV Clouds

The Linux Foundation Open Network Automation Platform (ONAP) project is a management and orchestration software stack that automates network/SDN service deployment, lifecycle management and service assurance. For the above-mentioned reasons, ONAP is designed to support multiple cloud regions from the ground up.

In this two-part blog, we will walk you through the exact steps to register multiple cloud regions with ONAP for the virtual firewall (vFW) use-case that primarily utilizes SDC, SO, A&AI, VID and APP-C projects (other use cases will be similar but might require slightly different instructions). Try it out and let us know how it goes.

Prerequisites
  1. ONAP Installation (Amsterdam release)

  2. OpenStack regions spread across different physical locations

  3. Valid subscriber already created under ONAP (e.g. Demonstration2)

If you do not have the above, and still want to try this out, here are some alternatives:

ONAP Region Registration Steps

There are three locations where VIM/cloud instance information is stored: A&AI, SO, and Robot. The following seven steps outline the process and provide sample API calls.

Step 1: Create Complex object(s) in AAI

A complex object in A&AI represents the physical location of a VIM/Cloud instance. Create a complex object for each OpenStack region that needs to be configured under ONAP.

Example script to create a complex object named clli1x:

# Main items to be changed are highlighted, but most of the below
# information should be customized for your region
curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/complexes/complex/clli1x \
  -H 'X-TransactionId: 9999' \
  -H 'X-FromAppId: jimmy-postman' \
  -H 'Real-Time: true' \
  -H 'Authorization: Basic QUFJOkFBSQ==' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Cache-Control: no-cache' \
  -H 'Postman-Token: 734b5a2e-2a89-1cd3-596d-d69904bcda0a' \
  -d '{
        "physical-location-id": "clli1x",
        "data-center-code": "example-data-center-code-val-6667",
        "complex-name": "clli1x",
        "identity-url": "example-identity-url-val-28399",
        "physical-location-type": "example-physical-location-type-val-28399",
        "street1": "example-street1-val-28399",
        "street2": "example-street2-val-28399",
        "city": "example-city-val-28399",
        "state": "example-state-val-28399",
        "postal-code": "example-postal-code-val-28399",
        "country": "example-country-val-28399",
        "region": "example-region-val-28399",
        "latitude": "example-latitude-val-28399",
        "longitude": "example-longitude-val-28399",
        "elevation": "example-elevation-val-28399",
        "lata": "example-lata-val-28399"
    }'

Step 2: Create Cloud Region object(s) in A&AI

The VIM/cloud instance is represented as a cloud region object in A&AI and ESR. This representation is used by VID, APP-C, VFC, DCAE, MultiVIM, and other components. Create a cloud region object for each OpenStack region.

Example script to create a cloud region object:

curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: f7c57ec5-ac01-7672-2014-d8dfad883cea' \
  -H 'real-time: true' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999' \
  -d '{
        "cloud-owner": "CloudOwner",
        "cloud-region-id": "Region1x",
        "cloud-type": "openstack",
        "owner-defined-type": "t1",
        "cloud-region-version": "<OpenStack Version>",
        "cloud-zone": "<OpenStack Cloud Zone>",
        "complex-name": "clli1x",
        "identity-url": "<keystone auth URL, e.g. https://<IP or Name>/v3>",
        "sriov-automation": false,
        "cloud-extra-info": "",
        "tenants": {
            "tenant": [
                {
                    "tenant-id": "<OpenStack Project ID>",
                    "tenant-name": "<OpenStack Project Name>"
                }
            ]
        },
        "esr-system-info-list": {
            "esr-system-info": [
                {
                    "esr-system-info-id": "<unique UUID, e.g. 432ac032-e996-41f2-84ed-9c7a1766eb29>",
                    "service-url": "<keystone auth URL, e.g. https://<IP or Name>/v3>",
                    "user-name": "<OpenStack User Name>",
                    "password": "<OpenStack Password>",
                    "system-type": "VIM",
                    "ssl-cacert": "",
                    "ssl-insecure": true,
                    "cloud-domain": "Default",
                    "default-tenant": "<Project Name>"
                }
            ]
        }
    }'
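The nested esr-system-info block is the part of this payload that is most often mistyped: it carries the keystone endpoint and credentials that ONAP components use to reach the VIM, and the same keystone URL appears twice (as identity-url on the region and as service-url in esr-system-info). A hedged Python sketch of the structure, with the helper name and parameter names being mine rather than part of the ONAP API:

```python
import uuid


def build_cloud_region_payload(owner, region_id, keystone_url,
                               username, password,
                               project_id, project_name, complex_name):
    """Assemble the A&AI cloud-region body from Step 2.

    complex_name must match the physical-location-id created in Step 1.
    """
    return {
        "cloud-owner": owner,
        "cloud-region-id": region_id,
        "cloud-type": "openstack",
        "complex-name": complex_name,
        "identity-url": keystone_url,
        "sriov-automation": False,
        "tenants": {"tenant": [
            {"tenant-id": project_id, "tenant-name": project_name},
        ]},
        "esr-system-info-list": {"esr-system-info": [{
            "esr-system-info-id": str(uuid.uuid4()),  # any unique UUID
            "service-url": keystone_url,              # same keystone endpoint
            "user-name": username,
            "password": password,
            "system-type": "VIM",
            "ssl-insecure": True,
            "cloud-domain": "Default",
            "default-tenant": project_name,
        }]},
    }
```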

Step 3: Associate each Cloud Region object with its corresponding Complex object

This association must be set up for each cloud region, linking it to the complex object that represents its physical location.

Example script to create the association:
curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/relationship-list/relationship \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: e68fd260-5cac-0570-9b48-c69c512b034f' \
  -H 'real-time: true' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999' \
  -d '{
        "related-to": "complex",
        "related-link": "/aai/v11/cloud-infrastructure/complexes/complex/clli1x",
        "relationship-data": [{
            "relationship-key": "complex.physical-location-id",
            "relationship-value": "clli1x"
        }]
    }'
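The three steps hinge on one shared value: the physical-location-id created in Step 1 must reappear as complex-name in Step 2 and in the relationship data in Step 3. A small sketch of the Step 3 body that makes this explicit (the helper name is illustrative, not an ONAP API):

```python
def build_complex_relationship(location_id, aai_version="v11"):
    """Body for Step 3: relate a cloud region to its complex object.

    The same location_id is used both in the related-link path and
    in the relationship-data value, mirroring the curl example above.
    """
    return {
        "related-to": "complex",
        "related-link": "/aai/%s/cloud-infrastructure/complexes/complex/%s"
                        % (aai_version, location_id),
        "relationship-data": [{
            "relationship-key": "complex.physical-location-id",
            "relationship-value": location_id,
        }],
    }
```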

We will cover the remaining 4 steps in the next and final installment of this blog series.

In the meantime, if you are looking for ONAP training, professional services, or development distros (an easy way to try out ONAP in under an hour), please contact us.

How service providers can use Kubernetes to scale NFV transformation

By Blog

This post originally appeared on LinkedIn. Republished with permission by Jason Hunt, Distinguished Engineer of IBM.

After attending two major industry events—IBM’s Think and the Linux Foundation’s Open Networking Summit (ONS)—I’ve been thinking about how software and networking are evolving and merging in a way that can really benefit service providers.

It’s been interesting to watch how NFV has changed over the past few years. At first, NFV dealt simply with virtualization of physical network elements. Then as network services grew from simple VNFs to more complex combinations of VNFs, ONAP came along to provide lifecycle management of those network functions. Now, with 5G on the doorstep, service providers will need to shift the way they approach NFV deployments yet again.

Why? As Verizon’s CEO Lowell McAdam told IBM’s CEO Ginni Rometty at IBM Think, 5G will deliver 1GB throughput to devices with 1ms of latency, while allowing service providers to connect 1,000 times more devices to every cell site. In order to support that, service providers need to deploy network functions at the edge, close to where those devices are located.

But accomplishing that kind of scale can’t be done manually. It has to be done through automation at every level. And for that, service providers can leverage the kind of enterprise-level container management that’s possible with Kubernetes. Kubernetes allows service providers to provision, manage, and scale applications across a cluster. It also allows them to abstract away the infrastructure resources needed by applications. In ONAP’s experience, running on top of Kubernetes, rather than virtual machines, can reduce installation time from hours or weeks to just 20 minutes.

At the same time, service providers are utilizing a hybrid mixture of public and private clouds to run their network workloads. However, many providers at ONS expressed frustration at the incompatibility across clouds’ infrastructure provisioning APIs. This lack of harmonization is hampering their ability to deploy and scale NFV when and where needed.

Again, Kubernetes can help service providers meet this challenge. Since Kubernetes is supported across nearly all clouds, it can expose a common way to deploy workloads. Arpit Joshipura, GM Networking at the Linux Foundation, demonstrated this harmonization on the ONS keynote stage. With help from the Cloud-CI project in the Cloud Native Computing Foundation (CNCF), Arpit showed ONAP being deployed across public and private clouds (including IBM Cloud) and bare metal. Talk about multi-cloud!

Last October, IBM announced IBM Cloud Private, an integrated environment that enables you to design, develop, deploy and manage on-premises, containerized cloud applications behind your firewall. IBM Cloud Private includes Kubernetes, a private image repository, a management console and monitoring frameworks. We’ve documented how ONAP can be deployed on IBM Cloud Private, giving service providers a supported option for Kubernetes in an on-premises cloud.

At ONS, AT&T’s CTO Andre Fuetsch stated, “Software is the future of our network.” With 5G getting closer to the mainstream every day, the best-prepared service providers will look at how to combine the best of the software and network worlds together. Exploring the benefits of a Kubernetes-based environment might just be the best answer for their NFV deployment plans.

From Portal to SDC: Inside the ONAP Architecture

By Blog

See below for a quick overview of ONAP informational videos from Architecture sub-committee members Manoop Talasilla and Michael Lando.

The ONAP platform is made of several software subsystems and two major architectural frameworks – a design-time environment to design, define and program the platform; and an execution-time environment to execute the logic programmed in the design phase. Whether new to the platform or well-versed, understanding the ONAP architecture is critical to deployment, and our latest video series is here to help.

To kick off the series, we’ll focus on the ONAP Portal and Service Design and Creation (SDC). The videos feature two key members of the ONAP Architecture Team: Manoop Talasilla, Portal Technical Lead at AT&T Research Labs, and Michael Lando, Service Design Technical Lead at AT&T. In our video series, Manoop covers the ONAP Portal, and Michael covers Service Design and Creation (SDC).

Video 1: ONAP Portal

Manoop takes a beginner’s look at the ONAP portal, focusing on the platform and its ability to integrate different applications into a centralized portal core. Additional capabilities of the portal include application onboarding and management, decentralized access management, and hosted application features, as detailed in the video.

Want to learn more about the ONAP portal and network operations? Dive in: watch Manoop’s full video now. 

 

Video 2: Service Design and Creation (SDC)

SDC, an Integrated Development Environment (IDE), is a subsystem of the design-time framework, accessible through the ONAP portal. In the video, Michael explains that as an IDE, SDC provides the tools for designing services as well as creating the necessary artifacts for service orchestration. With its graphical interface and visual tools, users can drag and drop components onto the SDC canvas to model a service, see what is connected where, and view the capabilities and requirements each VNF brings to the service.

As the design time component, SDC handles all design time activities. Check out the full video below to hear Michael’s explanation of SDC in ONAP.  

Interested in learning more about the ONAP Architecture? Take a look at the full video series here and read the Architecture Whitepaper.

2017 ONAP Community Awards Shine Spotlight on Collaboration

By Blog

As we reflect upon 2017 and the successful launch of Amsterdam, we are proud to announce the winners of the inaugural ONAP Community Awards acknowledging individual and community contributions to the success of the project. We were gratified to see strong participation, with 87 nominations representing 53 individuals and projects, and 571 total votes cast.

The community recognized the winners on December 11 at the ONAP Developer Forum in Santa Clara, CA. Details about each award category and its winner appear below. Please join us in congratulating all of our nominees and winners!

Top Achievement Award: Catherine Lefevre, AT&T

The community recognized Catherine Lefevre for her dedication to the project and her pivotal role in the successful merger of multiple code bases and timely delivery of the Amsterdam release. Catherine worked tirelessly across many groups and companies to evangelize ONAP globally, while working closely with the Technical Steering Committee (TSC) to work toward Amsterdam’s release date.

Citizenship Award: Chris Donley, Huawei

Chris Donley provided the most assistance to others outside of his own ONAP project through code reviews, debugging, bug fixes and more, furthering collaboration across the large, distributed ONAP community. His work on the Architecture Committee and TSC and time spent educating and guiding others set the standard for communication across the team.

Marketing Award: Lingli Deng, China Mobile

Lingli Deng provided significant support to the ecosystem teams and championed ONAP across a variety of mediums. She also spoke on behalf of ONAP at events around the world and led the review team for the project’s VoLTE whitepaper. Additionally, Lingli contributed two technical videos in English, three in Chinese, and is a frequent coordinator of Chinese contributions to ecosystem development activities.

Code Contribution Award: Seshu Kumar, Huawei

PTLs, Committers and Contributors selected Seshu Kumar to receive the Code Contribution Award based on the quantity and quality of his code. He played an important role in helping the Service Orchestrator (SO) project reach critical milestones and in resolving blocking issues. Seshu is one of the top code contributors to ONAP overall.

Project Achievement Award: The Integration Team

The Integration Team worked together for the first time on Amsterdam, yet they met the tight release deadline.

Innovation Award: The ONAP Operations Manager (OOM) Project Team

The OOM Project Team deployed ONAP on containers to support the Amsterdam release.

Top Predictions for Networking in 2018

By Blog

Arpit Joshipura, GM of Networking and Orchestration at the Linux Foundation, shares his 2018 predictions for the networking industry.

1. 2015’s buzzwords are 2018’s course curriculum.
SDN, NFV, VNF, Containers, Microservices — the hype crested in 2016 and receded in 2017. But don’t mistake quiet for inactivity; solution providers and users alike have been hard at work with re-architecting and maturing solutions for key networking challenges. And now that these projects are nearing production, these topics are our most requested areas for training.

2. Open Source networking is crossing the chasm – from POCs to Production.
The ability for users and developers to work side by side in open source has helped projects mature quickly, and has helped vendors rapidly deliver highly relevant solutions to their customers.

3. Top networking vendors are embracing a shift in their business models…

  • Hardware-centric to software-centric: value-add from rapid customization
  • Proprietary development to open-source, shared development
  • Co-development with end users, reducing time to deployment from 2 years to 6 months

4. Industry-wide adoption of 1-2 Network Automation platforms will enable unprecedented mass customization.
The need to integrate multiple platforms, taking into account each of their unique feature sets and limitations, has traditionally been a massive barrier to rapid service delivery.

In 2018, mature abstractions and standardized processes will enable user organizations to rapidly onboard and orchestrate a diverse set of best-of-breed VNFs and PNFs as needed.

5. Advances in cloud and carrier networking are driving skills and purchasing shifts in the enterprise.
The ease and ubiquity of public cloud for simple workloads has reset end user expectations for Enterprise IT. The carrier space has driven maturity of open networking solutions and processes. Enterprise IT departments are now at a crossroads:

  • How many and which of their workloads and processes do they want to outsource?
  • How can they effectively support those workloads remaining in-house with the same ease and speed users expect?
  • What skills will IT staff need, and how will they get them?

Which brings us to…

6. Prediction #1 will also lead off our Predictions list for 2019.