Online Proceedings

Thursday 10, Sept. 2020

iPOP Plenary
Thursday 10, Sept. 2020, 10:00-12:00
Presider: Hiroaki Harai, NICT, Japan
Opening Address
Naoaki Yamanaka, General Co-Chair, Keio University, Japan
Bijan Jabbari, General Co-Chair, ISOCORE, USA
Keynote
K-1 "Scalable high-capacity optical transport technologies for Innovative Optical and Wireless Network (IOWN)"
Yutaka Miyamoto, NTT, Japan

Yutaka Miyamoto

This keynote introduces ultrahigh-speed digital coherent transport at channel rates above 1 Tbit/s over today's single-mode fiber (SMF). Advances in space-division multiplexing optical transport technologies are also described as a means of mitigating the future "capacity crunch" in long-haul transport networks based on today's SMF.


Biography:

Yutaka Miyamoto received a B.E. and M.E. in electrical engineering from Waseda University, Tokyo, in 1986 and 1988 respectively, and a Dr. Eng. from the University of Tokyo. He joined NTT in 1988 and has since been engaged in R&D of high-capacity optical communications systems. He is an NTT Fellow and director of the Innovative Photonic Network Research Center in NTT Network Innovation Laboratories, where he has been investigating and promoting scalable optical transport networks with Pbit/s-class capacity based on innovative optical transport technologies, such as digital signal processing, space division multiplexing, and cutting-edge integrated devices for photonic pre-processing. He is a member of the Institute of Electrical and Electronics Engineers (IEEE) and a Fellow of the Institute of Electronics, Information and Communication Engineers (IEICE).



K-2 "An operator's path towards open packet and optical transport networks"
Óscar González de Dios, Telefonica, Spain

Óscar González de Dios

This keynote presents the path that a major telco operator is taking to evolve its transport network to meet 5G and beyond-5G requirements. Recent events have demonstrated the need for increased connectivity and higher capacity to support new B5G services, traffic spikes, and new traffic patterns. Operators are building on two main pillars to face these new challenges. The first is the "softwarization" of the network, enabling a high degree of programmability that allows the full potential of the deployed infrastructure to be used, operations to be automated, and innovations to be added. The second is openness, both in interfaces, replacing vendor-specific views with a technology view, and in devices, by building on open designs and decoupling hardware and software. Operators are joining initiatives such as the Telecom Infra Project (TIP), which fosters an open ecosystem and covers the gaps by building open devices in which hardware and software can be interchanged. In this regard, NTT and Telefonica are co-leading the CANDI initiative, where different use cases for open and disaggregated transport networks are demonstrated. This path is not an easy one and is full of bumps and new challenges, which can be dealt with more easily and quickly as a joint industry.


Biography:

Óscar González de Dios received his M.S. degree in telecommunications engineering and Ph.D. degree (Hons.) from the University of Valladolid, Spain. He has 20 years of experience at Telefonica, where he has been involved in a number of European research and development projects (recently, STRONGEST, ONE, IDEALIST, and Metro-Haul). He has coauthored over 100 research papers and 10 IETF RFCs. He is currently the head of SDN Deployments for Transport Networks (iFUSION) in the Telefonica Global CTIO unit. His main research interests include photonic networks, flexi-grid, interdomain routing, PCE, automatic network configuration, end-to-end MPLS, performance of transport protocols, and SDN. He is currently active in several IETF working groups, such as OPSAWG and TEAS, and is the co-chair of the CANDI WG in the OOPT Telecom Infra Project.



iPOP Exhibition introduction
- iPOP Exhibition Co-Chair
Technical Session
Tech. Session (1): Network Design and Optimization
Thursday 10, Sept. 2020, 13:00-14:40
Chair: Yohei Hasegawa, NEC, Japan
T1-1 "Defragmentation for 1+1 protected elastic optical networks: A route partitioning approach"
Bijoy Chand Chatterjee, South Asian University, India, and Eiji Oki, Kyoto University, Japan

Bijoy Chand Chatterjee

In survivable elastic optical networks (EONs), network operators prefer the 1+1 protection technique as it provides instantaneous recovery and supports reliability against multiple link failures. However, suppressing spectrum fragmentation to enhance spectrum utilization is always challenging in 1+1 protected EONs [1, 2].

Several studies [2-4] have been conducted to enhance spectrum utilization in 1+1 protected, or survivable, EONs. However, they do not evaluate the effect in protected EONs where both full 1+1 protected and quasi 1+1 protected lightpaths coexist. The route partitioning scheme introduced in [3] cannot be applied directly to such protected EONs because the two lightpath types behave differently: quasi 1+1 lightpaths allow reallocation, whereas full 1+1 lightpaths do not. The presence of both types involves different network operations, which affects the defragmentation performance. Directly applying route partitioning to all lightpaths may lead to unbalanced partitioning, where the overall interferences are minimized but the interferences to full 1+1 lightpaths are not. Quasi 1+1 lightpaths, which are defragmented using the path exchanging scheme, are not affected by these interferences as they are reallocated directly while in the backup state [4].

This paper presents and investigates defragmentation based on route partitioning, aiming to reduce spectrum fragmentation and blocking probability in 1+1 protected EONs where both full 1+1 protected and quasi 1+1 protected services exist [5]. The defragmentation scheme consists of two phases. The first phase solves the route partitioning optimization problem to minimize the retuning interference on the full 1+1 protected lightpaths, forming two partitions of lightpath requests. One partition of lightpath requests is allocated using the first fit policy and the other using the last fit policy. In the second phase, the defragmentation scheme handles the operation problem, where quasi 1+1 lightpaths and full 1+1 lightpaths are defragmented using the path exchanging scheme [4] and push-pull retuning [3], respectively.
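The first fit and last fit policies of the first phase can be sketched in a few lines of Python. This is an illustrative sketch under an assumed 16-slot spectrum, not the authors' implementation: the two partitions grow from opposite ends of the spectrum, which is what keeps them from interfering.

```python
# Illustrative sketch: one partition is allocated first fit (lowest slots),
# the other last fit (highest slots), so they grow from opposite spectrum ends.
NUM_SLOTS = 16  # assumed spectrum size per link

def first_fit(occupied, demand):
    """Lowest-indexed run of `demand` contiguous free slots, or None."""
    for start in range(NUM_SLOTS - demand + 1):
        if all(s not in occupied for s in range(start, start + demand)):
            return list(range(start, start + demand))
    return None

def last_fit(occupied, demand):
    """Highest-indexed run of `demand` contiguous free slots, or None."""
    for start in range(NUM_SLOTS - demand, -1, -1):
        if all(s not in occupied for s in range(start, start + demand)):
            return list(range(start, start + demand))
    return None

occupied = set()
a = first_fit(occupied, 3)  # partition-A request -> [0, 1, 2]
occupied.update(a)
b = last_fit(occupied, 2)   # partition-B request -> [14, 15]
occupied.update(b)
```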
We explain lightpath interferences and how the full and quasi 1+1 lightpaths work during defragmentation with an example. For this purpose, we consider a spectrum condition before triggering the defragmentation process (see Fig. 1(a)). Plain boxes and hashed boxes represent primary and backup lightpaths, respectively. Lightpaths with labels Qia, Qib, Fia, and Fib represent quasi primary, quasi backup, full primary, and full backup lightpaths, respectively, where i is the index of lightpaths. In Fig. 1(a), lightpath F1b cannot be retuned to slot 5 due to the interference introduced by lightpath Q1a; only push-pull retuning can be applied to lightpath F1b as it demands full 1+1 protection. However, after performing reallocation operations on quasi 1+1 lightpaths Q1a and Q1b, lightpath F1b can be retuned to slot 5. In this case, the intermediate steps are as follows. (i) Quasi 1+1 lightpaths Q1a and Q1b exchange their path functions; the quasi primary lightpath moves to the backup state, while the quasi backup becomes the primary (see Fig. 1(b)). (ii) Quasi backup path Q1b is reallocated from slot 6 to slot 1. (iii) Finally, full lightpath F1b is retuned to slot 5 using push-pull retuning (see Fig. 1(c)).



T1-1_Fig1

Fig.1 Demonstration of lightpath interference
(a) spectrum condition before defragmentation and (b) intermediate spectrum condition during defragmentation, and (c) spectrum condition after defragmentation.


T1-1_Fig2

Fig.2 Blocking performance using different approaches.


In conclusion, this paper introduced defragmentation based on route partitioning to reduce the blocking probability in 1+1 path protected EONs, where each lightpath can be quasi 1+1 or full 1+1 protected. Figure 2 shows that defragmentation with partitioning improves the blocking performance; hence the admissible traffic in 1+1 protected networks is increased by avoiding path interferences.


Reference:

  1. M. Jinno, H. Takara, B. Kozicki, Y. Tsukishima, Y. Sone, and S. Matsuoka, “Spectrum-efficient and scalable elastic optical path network: Architecture, benefits, and enabling technologies,” IEEE Commun. Mag., vol. 47, no. 11, pp. 66–73, Nov. 2009.
  2. C. Wang, G. Shen, B. Chen, and L. Peng, “Protection path-based hitless spectrum defragmentation in elastic optical networks: Shared backup path protection,” in Proc. OFC, Los Angeles, CA, USA, 2015, pp. 1–3.
  3. S. Ba, B.C. Chatterjee, S. Okamoto, N. Yamanaka, A. Fumagalli, and E. Oki, “Route partitioning scheme for elastic optical networks with hitless defragmentation,” IEEE/OSA Journal of Optical Communications and Networking, vol. 8, no. 6, pp. 356-370, Jun. 2016.
  4. S. Ba, B.C. Chatterjee, and E. Oki, “Defragmentation scheme based on exchanging primary and backup paths in 1+1 path protected elastic optical networks,” IEEE/ACM Trans. Networking, vol. 25, no. 3, pp. 1717-1731, Jun. 2017.
  5. B.C. Chatterjee and E. Oki, “Defragmentation based on route partitioning in 1+1 protected elastic optical networks,” Computer Networks, 2020. [to appear]



Biography:

Bijoy Chand Chatterjee is an Assistant Professor and DST Inspire Faculty at South Asian University (SAU), New Delhi, India, and an Adjunct Professor at the Indraprastha Institute of Information Technology Delhi (IIITD), New Delhi, India. Before joining SAU, he was with IIITD, Norwegian University of Science and Technology, Trondheim, Norway, and The University of Electro-Communications, Tokyo, Japan. His research interests include optical networks, QoS-aware protocols, optimization, and routing. He is a senior member of IEEE.



T1-2 "Optimization Model for Virtualized Network Graph Design and Embedding"
Takehiro Sato, Kyoto University, Japan, Takashi Kurimoto, Shigeo Urushidani, National Institute of Informatics, Japan, and Eiji Oki, Kyoto University, Japan

Takehiro Sato

Developments in network virtualization and automated network management technologies enable network operators to provide their customers with virtualized networks (VNs) in an on-demand manner. A VN is provisioned by deploying virtual routers (VRs) and establishing virtual links (VLs) between VRs on a substrate infrastructure. Each of a customer's data centers (DCs) connects to one of the VRs and exchanges data with the other DCs. The provisioning cost of a VN depends on the number of VRs, the connections between VRs provided by VLs, the mapping of VRs and VLs on the substrate infrastructure, and the routing of data traffic exchanged between DCs.

We propose an optimization model for virtualized network graph design and embedding (VNDE). Unlike existing virtual network embedding (VNE) models [1], in which given virtual network graphs are mapped onto a substrate network graph, the VNDE model determines the number of VRs and a virtual network graph for each VN request based on the given traffic demand between every source-destination DC pair. The VNDE model also determines the access paths between DCs and VRs. The objective function minimizes the cost required for provisioning all VN requests.

We demonstrate VN provisioning using the 15-node, 44-link Atlanta network. We set the utilization costs of a transit VR, a VL, and an access VR as non-decreasing step functions, as shown in Tables I, II, and III, respectively. We define the access cost as the expense of setting up an access path from each DC to an access VR; it is set as a function that increases in proportion to the traffic amount and the number of hops from a DC to an access VR. The capacities of each node and each substrate link are set to 1000 Gbps and 100 Gbps, respectively. Two VNs, each with three DCs, are requested. The cost coefficient of the access cost per link and the traffic demand of each source-destination DC pair per VN, d, are varied in this demonstration.
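A non-decreasing step cost of the kind used in Tables I-III can be evaluated as below. The breakpoints and cost values are placeholders for illustration, not the paper's values:

```python
# Sketch of a non-decreasing step cost function (placeholder numbers, not
# the values of Tables I-III): costs[i] applies while the utilization is
# at most breakpoints[i]; the last cost applies above every breakpoint.
from bisect import bisect_left

def step_cost(utilization, breakpoints, costs):
    return costs[bisect_left(breakpoints, utilization)]

BREAKS = [100, 400]  # Gbps band edges (hypothetical)
COSTS = [1, 3, 6]    # cost units per band (hypothetical)
```

For example, `step_cost(250, BREAKS, COSTS)` falls in the middle band and returns 3.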

Figure 1 shows the resultant VN configurations obtained by the VNDE model. In Fig. 1(b), the traffic of VN 1 from DC 0 to DC 2 and that from DC 1 to DC 2 are aggregated at a VR in node 5, which leads to saving the VL utilization cost. From Figs. 1(a) and 1(c), it can be observed that, as the access cost increases, more VRs are placed on the network so that the total length of access paths becomes shorter.

T1-2_Table123


T1-2_Fig1

Fig.1 Demonstration of VN graph design and embedding.



Acknowledgement:
This work was supported in part by ROIS NII Open Collaborative Research 20S0104, and JSPS KAKENHI Grant Numbers 18H03230 and 19K14980.


References:

  1. A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, “Virtual network embedding: A survey,” IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.



Biography:

Takehiro Sato received the B.E., M.E. and Ph.D. degrees in engineering from Keio University, Japan, in 2010, 2011 and 2016, respectively. He is currently an assistant professor in the Graduate School of Informatics, Kyoto University, Japan. From 2011 to 2012, he was a research assistant in the Keio University Global COE Program funded by the Ministry of Education, Culture, Sports, Science and Technology, Japan. From 2012 to 2015, he was a research fellow of the Japan Society for the Promotion of Science. From 2016 to 2017, he was a research associate in the Graduate School of Science and Technology, Keio University, Japan. He is a member of IEEE and IEICE.



T1-3 "PoC construction of Expected Capacity Guaranteed Routing (ECGR) based on k-shortest path for Various Networks"
Masahiro Matsuno, Masaki Murakami, Yoshihiko Uematsu, Satoru Okamoto, and Naoaki Yamanaka, Keio University, Japan

Masahiro Matsuno

Multi-vendor optical transmission equipment composing the access and metro networks is progressing rapidly. This will increase the expected system and network failure rate because system performance tuning becomes more difficult. At the same time, Internet traffic continues to grow rapidly, and backbone networks with high transmission capacity are required. Therefore, high-availability routing that overcomes a high failure rate is needed. Improvements in machine performance and developments in data analysis technologies enable failure prediction for a network system by analyzing data collected from network equipment [1]. Accordingly, survivable routing methods that consider failure probability have been studied. We have proposed a high-availability routing method called Expected Capacity Guaranteed Routing (ECGR) [2], which guarantees that the expected capacity calculated from link failure probabilities exceeds the requested capacity of flows. In [2], a mixed integer linear programming (MILP) method is applied for multi-path route calculation; consequently, the conventional ECGR suffers from computation time that increases explosively as the numbers of nodes and links increase. To reduce the computation time, we apply a k-shortest path routing method that calculates k routes in order from the shortest. ECGR route calculation sets the upper limit of allocation per route to the requested capacity and increases the number of routes until the expected capacity constraint is satisfied. Blocking occurs when the expected capacity constraint cannot be satisfied even with k routes. To confirm that the calculation can be completed within 60 seconds with k-shortest path based ECGR even as the number of nodes increases, and for any topology, we built a PoC. Two hosts are connected to each other through an ECGR-enabled network.
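The route-accumulation logic can be sketched as a small Python toy (not the authors' PoC code). The four-node topology and the uniform 10% link failure probability are invented for illustration, and the expected capacity of a route is approximated here as its allocated capacity times the probability that all of its links are up:

```python
# Toy sketch of k-shortest-path based ECGR: candidate routes are taken in
# order of increasing hop count, each allocated at most the requested
# capacity, until the summed expected capacity meets the request.
def simple_paths(adj, src, dst, path=None):
    path = path or [src]
    if src == dst:
        yield path
        return
    for nxt in adj[src]:
        if nxt not in path:
            yield from simple_paths(adj, nxt, dst, path + [nxt])

def expected_capacity(path, cap, p_fail):
    """Allocated capacity times Prob(every link on the path is up)."""
    prob_up = 1.0
    for u, v in zip(path, path[1:]):
        prob_up *= 1.0 - p_fail[frozenset((u, v))]
    return cap * prob_up

def ecgr_routes(adj, p_fail, src, dst, requested, k_max=8):
    candidates = sorted(simple_paths(adj, src, dst), key=len)[:k_max]
    chosen, total = [], 0.0
    for route in candidates:
        chosen.append(route)
        total += expected_capacity(route, requested, p_fail)
        if total >= requested:
            return chosen
    return None  # blocked: constraint not met even with k_max routes

# made-up 4-node topology with a uniform 10% link failure probability
adj = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
p_fail = {frozenset(e): 0.1
          for e in [('A', 'B'), ('B', 'D'), ('A', 'C'), ('C', 'D')]}
print(ecgr_routes(adj, p_fail, 'A', 'D', requested=1.0))
# one route's expected capacity (0.81) is not enough, so two routes are used
```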
In the PoC, the ECGR-enabled network is composed of Open vSwitches connected to each other. The control plane of the PoC consists of three controllers and one manager. The ECGR controller, the main controller of the PoC, calculates the optimal route for each connection request from the sender host. It also manages available network resources, e.g., link capacity, as well as the Virtual LAN Identifiers (VLAN IDs) used for multi-path transmission. After calculating a route, the ECGR controller requests the OpenFlow controller to configure the paths between the sender host and the receiver host; every switch is configured using the OpenFlow protocol. The multi-path frame converter manager supervises the multi-path frame converters placed at the edges of the ECGR-enabled network. It receives the requested capacity and the VLAN IDs for the assigned paths from the ECGR controller and configures the multi-path frame converters at both ends of the route. The environmental controller is an element used only for the PoC; it manages the state of each link and emulates link failures according to the failure rate functions. In this presentation, we propose k-shortest path based ECGR and describe operation verification of ECGR in a testbed and performance evaluation in computer simulation.

T1-3_Fig1

Fig.1 ECGR PoC system.



Acknowledgement:
This work is partly supported by the R&D of innovative optical network technologies for supporting new social infrastructure project (JMPI00316) funded by the Ministry of Internal Affairs and Communications, Japan.


References:

  1. W. Ji, S. Duan, R. Chen, S. Wang and Q. Ling, "A CNN-based network failure prediction method with logs," 2018 Chinese Control And Decision Conference (CCDC), June 2018
  2. S. Sekigawa, et al., "Expected Capacity Guaranteed Routing based on Dynamic Link Failure Prediction,” International Conference on Computing, Networking and Communications (ICNC 2019), pp. 1-5, February 2019.



Biography:

Masahiro Matsuno received his B.E. degree from Keio University in 2019. He is currently a master course student in Graduate School of Science and Technology, Keio University.



T1-4 "Minimizing the Cost of Translucent Space Division Multiplexing Elastic Optical Networks"
Filippos Balasis, Noboru Yoshikane, Takehiro Tsuritani, KDDI Research, Inc., Japan


Filippos Balasis

Flex-grid elastic optical networks (EONs) and multi-core fiber based space division multiplexing (SDM) have been major research topics in optical networking over the last decade. The former is expected to add flexibility in bandwidth and spectrum allocation, whereas the latter will provide the required boost in core network capacity. Their combination in one network architecture, known as the SDM EON, is expected to replace the legacy optical network infrastructure in the future. Even though there is an ongoing and increasing research output on the planning of SDM EONs, there is very little related work on translucent SDM networks, where the optical signal can be regenerated at one or more intermediate nodes along the route. Regeneration can make long-haul transmission possible, especially for higher-order modulation schemes that have significantly shorter transmission reach. Moreover, regeneration in an SDM EON may be even more necessary due to inter-core crosstalk between lightpaths that occupy the same spectrum and propagate through adjacent cores of a multi-core fiber link. In addition, spectrum and modulation conversion can be applied during regeneration, allowing more efficient usage of the network's resources.

This work focuses on the planning of translucent multi-core fiber SDM EONs with the objective of minimizing the network's total cost, which mostly depends on the number of optical transponders used for transmission and regeneration and on the type of fiber cables deployed across the network's links. The work also examines the impact of regeneration on the network's cost and power consumption: for example, is it better to use higher-order modulation schemes such as 8QAM and 16QAM, which require more regenerators but less spectrum, or is it more cost-effective to use BPSK and QPSK with more expensive fiber cables that have a higher core count? The results are obtained through integer linear programming (ILP); to reduce the complexity of the model, some pre-computation techniques are used, such as pre-computing the possible routes between node pairs in a given topology and constructing an auxiliary graph that determines the optimal modulation schemes and regeneration nodes. Results show that the use of regeneration can lead to a more cost-effective network architecture.
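The auxiliary-graph pre-computation can be illustrated as follows. All distances, transparent reaches, and the three-node topology below are invented for the sketch, and the physical distances between node pairs are assumed to be pre-computed:

```python
# Sketch of the auxiliary graph (all numbers invented): an auxiliary edge
# (u, v) exists when some modulation format's transparent reach covers the
# pre-computed physical distance between u and v, so a shortest path on
# this graph yields routes with the fewest regeneration points.
from collections import deque

dist = {('A', 'B'): 600, ('B', 'C'): 600, ('A', 'C'): 1200}  # km, pre-computed
reach = {'16QAM': 500, 'QPSK': 700, 'BPSK': 1500}            # km per format

aux = {n: [] for n in 'ABC'}
for (u, v), d in dist.items():
    feasible = [m for m, r in reach.items() if r >= d]
    if feasible:  # reachable without regeneration by at least one format
        aux[u].append((v, feasible))
        aux[v].append((u, feasible))

def min_regens(src, dst):
    """BFS hop count on the auxiliary graph minus one = regenerations."""
    seen, q = {src}, deque([(src, 0)])
    while q:
        n, hops = q.popleft()
        if n == dst:
            return hops - 1
        for nxt, _ in aux[n]:
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, hops + 1))
    return None
```

Here `min_regens('A', 'C')` is 0 because BPSK's 1500 km reach covers the 1200 km span directly; removing BPSK from the table would force one regeneration at B.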



T1-4_Fig1

Fig.1 In a translucent SDM EON, besides regeneration, spectrum and modulation conversion is also possible.





Biography:

Filippos Balasis received his B.E. degree from the National Technical University of Athens, Greece, in 2009, and M.E. and Ph.D. degrees from Waseda University, Japan, in 2013 and 2019, respectively. From 2010 to 2011 he worked at OTE, the largest telecommunications company in Greece, where he was a member of a project team responsible for the expansion and upgrade of OTE's access network. Since January 2020 he has been working as a post-doctoral researcher at KDDI Research.



Tech. Session (2): Advanced Network Design
Thursday 10, Sept. 2020, 14:50-16:30
Chair: Eiji Oki, Kyoto University, Japan
T2-1 "Study of MEC architecture expansion for latency-aware application allocation"
Tetsu Joh, Takayuki Warabino, Yusuke Suzuki, Tomohiro Otani, KDDI Corporation/KDDI Research, Inc., Japan

Tetsu Joh

1. Introduction
Multi-access Edge Computing (MEC) is considered a key technology for 5G services, especially those requiring low-latency response. In a MEC system, some cloud-computing capabilities (MEC hosts) are located at the edge of the mobile network, in close proximity to User Equipment (UE). The data delivery delay between a UE and an application instance (hereafter, application) can be reduced by allocating the application to a MEC host located near the UE. However, since the resources of a MEC host are limited, the computation delay of application processing is not negligible, especially when the MEC host is under heavy load. It is therefore necessary to allocate applications considering both delivery delay and computation delay. For example, Michal Vondra et al. proposed a MEC host selection algorithm that considers both delivery and computation delay for each MEC host in a cluster [1]. The MEC host is selected to minimize the approximated overall delay, expressed as the sum of the estimated delivery delay and the estimated computation delay.
On the other hand, the European Telecommunications Standards Institute (ETSI) defines a global standard for the MEC reference architecture [2]. However, the methodology for selecting a MEC host according to the delay requirements of services is not discussed there.
In this paper, we (1) summarize the functional requirements of the MEC system for application allocation and (2) study an expansion of the MEC reference architecture based on the summarized requirements.

2. Methodology of delay estimation
In this section, we introduce the methodology for estimating delivery and computation delay. The delivery delay between the UE and the application can be estimated from the transmitted data size and the estimated throughput of the transmission path. Following [1], the smaller of the estimated radio and backhaul throughputs is used as the estimated throughput. The radio throughput can be estimated from the Modulation and Coding Scheme (MCS) information of each UE, the number of UEs attached to a RAN node, and the number of Resource Blocks (RBs) handled by the RAN node. The backhaul throughput can be estimated from the maximum and used bandwidth. Meanwhile, the computation delay can be estimated from the computing power required for application processing and the maximum and used computing power of each MEC host.
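As a concrete toy of this estimate (all host names and numbers below are hypothetical), the overall delay is the delivery delay, computed with the smaller of the radio and backhaul throughputs, plus the computation delay on the candidate MEC host:

```python
# Hypothetical-number illustration of the selection rule: the chosen MEC
# host minimizes delivery delay + computation delay, not distance alone.
def overall_delay(data_bits, radio_bps, backhaul_bps,
                  required_cycles, free_cycles_per_s):
    # delivery delay uses the bottleneck of radio and backhaul throughput
    delivery = data_bits / min(radio_bps, backhaul_bps)
    computation = required_cycles / free_cycles_per_s
    return delivery + computation

# a nearby but heavily loaded host vs. a farther, lightly loaded one
hosts = {
    'near_host': overall_delay(8e6, 100e6, 1e9, 2e9, 1e9),    # 0.08 s + 2.0 s
    'far_host':  overall_delay(8e6, 100e6, 500e6, 2e9, 4e9),  # 0.08 s + 0.5 s
}
best = min(hosts, key=hosts.get)  # 'far_host' wins despite longer backhaul
```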

3. Functional requirements and expansion of MEC reference architecture
TABLE I summarizes the functional requirements for allocating applications in the MEC system so that the service delay requirement can be satisfied. Fig. 1 shows the expanded MEC architecture studied based on TABLE I. The expanded architecture is briefly explained below.
For requirement No. 1 (hereafter Req. #1), information on the specifications of the applications and the MEC hosts should be managed by the Multi-access Edge Orchestrator (MEO), since the MEO is responsible for on-boarding application packages and maintaining an overall view of the MEC system. We define the Internal Database (ID) as the storage feature that holds this information in the MEO.
To meet Req. #2, we expand the UE Request Handler (URH) in the MEO, which handles application lifecycle management requests from UEs, and expand the Mx2 and Mm9 reference points. The URH forwards the UE's requests to the MEC host Backhaul network RAN Information Collector (MBRIC), along with UE information collected via the expanded Mx2 and Mm9.
To meet Reqs. #3 and #4, we expand the MBRIC and the reference point Mm4 to monitor the resources of the MEC hosts, and expand the Radio Network Information Service (RNIS) [3], which provides low-level radio and network information. Additionally, we define a new reference point, Mp4, for the MEO to send requests to the RNIS. Upon receiving forwarded UE requests from the URH, the MBRIC collects resource information of the MEC hosts and the backhaul network via Mm4, and collects RAN information from the RNIS via Mp4. The RAN node to which the UE is attached is identified from the UE information. The MBRIC forwards the UE requests, with the collected information, to the Application Allocation Selector (AAS).
To meet Reqs. #5 to #9, we expand the AAS, which selects one or more appropriate MEC hosts for each application. The AAS estimates delivery and computation delay based on the information stored in the ID and the information forwarded by the MBRIC, and selects the MEC host(s) for the application. After selection, the AAS requests the Application Allocation Handler (AAH) to perform the actual application allocation.
To meet Req. #10, we define the AAH, which allocates the application to the MEC host(s) according to requests from the AAS via Mm3.
As future work, we plan to develop a prototype system of the expanded architecture and evaluate the efficiency of our architecture.


Table.1 Required function for MEC system

T2-1_Table1

T2-1_fig1

Fig.1 Expanded MEC reference architecture


Acknowledgement:
This work was conducted as part of the project entitled "Research and development for innovative AI network integrated infrastructure technologies (JPMI00316)" supported by the Ministry of Internal Affairs and Communications, Japan.


References:

  1. Michal Vondra, et al, “QoS-ensuring Distribution of Computation Load among Cloud-enabled Small Cells,” CloudNet, Oct. 2014.
  2. ETSI GS MEC 003, “Multi-access Edge Computing (MEC); Framework and Reference Architecture,” Jan. 2019.
  3. ETSI GS MEC 012, “Multi-access Edge Computing (MEC); Radio Network Information API,” Dec. 2019.



Biography:

2010, B.E., Faculty of Science and Engineering, Waseda University
2012, M.E., Faculty of Science and Engineering, Waseda University
2012, Engaged in maintenance of roaming services at the Global Network Operation Center, KDDI Corporation
2013, Engaged in development of the VoLTE system at the Department of Mobile Core Network Technical Development, KDDI Corporation
2018, Engaged in research and development of network management and control at the Network Operation Automation Laboratory, KDDI Research, Inc.



T2-2 "Demonstration of Service Function Chain Allocation with Network Service Header"
Rui Kang, Fujun He, Takehiro Sato, and Eiji Oki, Kyoto University, Japan

Rui Kang

Network function virtualization (NFV) has been introduced to decouple functions from hardware in order to reduce service deployment and operation costs. NFV enables virtual network functions (VNFs) that provide more flexible services to users and lower the costs of service providers. To take advantage of the flexibility of VNFs, an allocation model is needed to decide where VNFs are placed. For example, we introduced a virtual network function allocation model to maximize the continuous available time of service function chains in [1].

To evaluate the performance of the allocation model, we often need to deploy functions on real network devices, which is costly and time-consuming, and using existing simulation tools requires powerful computation capability in some use cases. We implement a service function chain allocation application based on the network service header (NSH), which is connected with the allocation model. The application acquires the computation result from the allocation model and allocates the functions to the corresponding locations automatically. In the demonstration, we implement the application on Ryu [2]. We implement the functions of the classifier, service function forwarder, and service function chaining (SFC) proxy [3] on switches through flow-table modifications conducted by the application. The network devices are simulated in Mininet [4]. The application receives registration messages from allocated VNFs and instructs the actions of the switches.
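For readers unfamiliar with NSH, the two fields that drive this forwarding are the 24-bit Service Path Identifier (SPI), which selects the chain, and the 8-bit Service Index (SI), which each service function decrements (RFC 8300). A minimal model, independent of Ryu and Mininet:

```python
# Minimal model of the two NSH fields the forwarders match on (RFC 8300):
# the SPI identifies the service path and the SI is decremented at each
# service function along the chain.
from dataclasses import dataclass

@dataclass
class NSH:
    spi: int  # service path identifier (24-bit)
    si: int   # service index (8-bit)

def traverse(header, chain):
    """Record the SI seen at each function, decrementing as an SF would."""
    visited = []
    for fn in chain:
        visited.append((fn, header.si))
        header.si -= 1
    return visited

h = NSH(spi=42, si=255)
print(traverse(h, ['firewall', 'nat']))  # [('firewall', 255), ('nat', 254)]
```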

Fig. 1 shows the overall structure of the reported application. There are six elements in the demonstration with six types of communications. We set three tables in the database, named flow, service, and vnf. Table flow stores the information of user-defined services, table service stores the information of each SFC, and table vnf stores the information of each VNF.

We conduct a test of the demonstration, whose device connections are shown in Fig. 2. We run the allocation model on node N1, simulated by Mininet. We observe that the VNFs register themselves to the database successfully, as shown in Fig. 3. We use the tracepath command in Mininet to inspect the path of the flow, and observe that the data is successfully encapsulated by NSH, as shown in Fig. 4. Comparing the result in Fig. 5(a) with the old path in Fig. 5(b), we observe that the path of a two-function chain is correctly configured.


T2-2_Fig1

Fig.1 Overall structure.


T2-2_Fig2

Fig.2 Device connection diagram of demonstration.


T2-2_Fig3

Fig.3 Register information in database.


T2-2_Fig4

Fig.4 Ethernet frame is encapsulated by NSH and another Ethernet frame.


T2-2_Fig5

Fig.5 Result of tracepath.


Acknowledgement:
This work was supported in part by JSPS KAKENHI, Japan, under Grant Number 18H03230.


References:

  1. R. Kang, F. He, T. Sato, and E. Oki, “Virtual network function allocation to maximize continuous available time of service function chains,” in 2019 IEEE 8th International Conference on Cloud Networking (CloudNet), Nov 2019, pp. 1–6.
  2. Ryu SDN Framework Community, “Ryu SDN framework,” https://osrg.github.io/ryu/index.html, accessed Jan. 21, 2020.
  3. J. Halpern and C. Pignataro, “Service function chaining (SFC) architecture,” RFC 7665, Oct. 2015.
  4. Mininet Team, “Mininet: An instant virtual network on your laptop (or other pc),” mininet.org/, accessed Jan. 21, 2020.



Biography:

Rui Kang is currently pursuing the M.E. degree at Kyoto University, Kyoto, Japan. He received the B.E. degree from University of Electronic Science and Technology of China, Chengdu, China, in 2018. He was an exchange student in The University of Electro-Communications, Tokyo, Japan, from 2017 to 2018. His research interests include virtual network resource allocation, network virtualization, and software-defined network.



T2-3 "Low Earth Orbit Satellite Network Architecture with Optical Inter Satellite Links"
Cen Wang, KDDI Research Inc., Japan, Yong Zhu, Beijing University of Posts and Telecommunications, China, Noboru Yoshikane, and Takehiro Tsuritani, KDDI Research Inc., Japan


Cen Wang

As satellite miniaturization has become practical in recent years, building a global network from a large number of small/micro satellites is cost-controllable. These satellites occupy low Earth orbits (LEOs), so they are usually called LEOs, and the resulting global satellite network is called a LEO network. There are three types of LEO networks. The first is without inter-satellite links (ISLs); a LEO is used as a relay between two far-away ground stations. The second has an incomplete ISL deployment, namely, ISLs are built for only part of the LEO pairs. The last has a complete ISL deployment: a LEO has an ISL to every neighboring LEO, as shown in Fig. 1(a). With ISLs, fewer ground stations are needed to realize global communications; a few telemetry stations and one network controller are enough to manage the LEOs and the network. Thus, ISL deployment further lowers costs. The first and second types can be regarded as intermediate states during the construction of a LEO network with complete ISLs. Recently, Starlink launched another 60 LEOs [1], bringing the constellation to 480 LEOs in total. These 480 LEOs currently have no ISLs, so they operate as the first type.

The ISL can be either optical or electrical. An optical ISL has several advantages [2] over radio frequency (RF) links: 1) it can send high-speed data over distances of thousands of kilometers with a small payload; 2) reducing the payload size also decreases the mass and cost of the satellite; 3) RF wavelengths are much longer than optical wavelengths, so the beam width achievable with lasers is narrower than that of an RF system, which results in lower loss; 4) an optical ISL offers higher security than RF links.

We focus on the LEO network with a complete optical ISL deployment [3]. In this network type, in addition to the electrical-packet-switching-based space router, we add a 4×4 optical switching matrix to each satellite so that data can be forwarded transparently through a LEO node without O-E-O conversion. As shown in Fig. 1(b), the transparent ports and the nontransparent ports are both optical. As depicted in Fig. 1(c), if a nontransparent port is used, data can only be forwarded to the space router on a neighboring LEO via the aligned nontransparent port. In the logical network topology, the nontransparent links form the basic edges between LEO node pairs. Otherwise, the data is sent through a transparent port of the source LEO, transmitted to the aligned transparent port on a neighboring LEO, and finally to the target LEO within feasible reach. In other words, a direct light path is established between the source and the target, which can be regarded as a direct edge reconnected between the LEO node pair in the logical view. Basic edges plus reconnected edges mirror the construction process of a small-world network. Thus, these two forwarding modes give the LEO network a logically small-world-like topology (either deterministic or dynamic). Such a topology has been verified to offer good performance (i.e., higher throughput and lower latency) owing to its shorter average network distance [4]. In this work, we have made the following progress:

  • The architecture of the proposed LEO network with two forwarding modes;
  • The SDN control mechanism of the proposed LEO network;
  • The topology reconstruction scheme of the LEO network and the topology performance under different patterns of traffic requests.
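The effect of the reconnected transparent edges on the logical topology can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the basic (nontransparent) edges are modeled as a ring, the shortcut pairs are illustrative, and no real constellation geometry is considered.

```python
from collections import deque

def build_topology(n, shortcuts):
    """Basic edges form a ring (nontransparent links between neighboring
    LEOs); `shortcuts` are reconnected transparent light-path edges."""
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for a, b in shortcuts:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def avg_hops(adj):
    """Average shortest-path hop count over all node pairs, via BFS."""
    n = len(adj)
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += n - 1
    return total / pairs

ring = build_topology(16, [])
small_world = build_topology(16, [(0, 8), (4, 12), (2, 10), (6, 14)])
# Reconnected transparent edges shorten the average network distance.
assert avg_hops(small_world) < avg_hops(ring)
```

The assertion reflects the small-world property cited from [4]: a few long-range reconnected edges reduce the average network distance of the basic topology.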


T2-3_Fig1

Fig.1 (a) The LEO network with SDN control; (b) the node structure; (c) the logical topology construction.


References:

  1. https://www.space.com/spacex-starlink-internet-satellites-launch-success-june-2020.html
  2. V. Sharma and N. Kumar, "Improved analysis of 2.5 Gbps-inter-satellite link (ISL) in inter-satellite optical-wireless communication (IsOWC) system," Optics Communications, vol. 286, pp. 99-102, 2013.
  3. N. Karafolas and S. Baroni, "Optical satellite networks," in Journal of Lightwave Technology, vol. 18, no. 12, pp. 1792-1806, Dec 2000, doi: 10.1109/50.908734.
  4. D. Zhang, H. Guo, J. Wu and X. Hong, "A deterministic small-world topology based optical switching network architecture for data centers," ECOC2014, Cannes, pp. 1-3, doi: 10.1109/ECOC.2014.6963987.




Biography:

Cen WANG received his Ph.D. degree in electrical engineering from Beijing University of Posts and Telecommunications in 2019 and is now with KDDI Research, Inc. as an associate researcher. His research interests span network modeling, AI + networking, and networking for AI applications. He has published over 40 papers.



T2-4 "Hierarchical Skew Handling over Massively Parallel Optical Channel for the Dynamic MAC"
Kyosuke Sugiura, Masaki Murakami, Satoru Okamoto, and Naoaki Yamanaka, Keio University, Japan

Kyosuke Sugiura

It is expected that the required transmission capacity per optical fiber will hit 1 Pb/s around 2030, whereas single-mode optical fiber (SMF) has a physical limit of around 100 Tb/s. To overcome this limitation, multi-core/multi-mode fibers have been developed [1]. The ever-increasing traffic also implies that a 10 Tb/s-class interface will be needed around 2030, which will be achieved by aggregating 400 optical channels. To this end, we propose the Dynamic MAC (Media Access Control), which efficiently maps MAC client signals onto up to 400 optical channels [2].

The main challenge is how to efficiently distribute MAC client signals across up to 400 lanes. 100 Gb/s Ethernet employs a round-robin mapper that distributes MAC signals to 20 PCS (Physical Coding Sublayer) lanes on a 64-bit block basis, but this mapper becomes a bottleneck when the number of PCS lanes is expanded. Therefore, the Dynamic MAC introduces an intermediate layer just before the PCS and divides the MAC frame into 64-bit blocks in stages. In the emulator evaluation in [3], introducing the intermediate layer increased the achievable degree of parallelism by a factor of 1.73.
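The two-level mapping idea can be illustrated with a small Python sketch. This is only a toy model: the group and lane counts are illustrative, and blocks are represented as integers rather than real 64-bit words.

```python
def round_robin(blocks, n):
    """Distribute blocks over n lanes, one block per lane in turn."""
    lanes = [[] for _ in range(n)]
    for i, b in enumerate(blocks):
        lanes[i % n].append(b)
    return lanes

def hierarchical_map(blocks, groups, lanes_per_group):
    """Intermediate layer first spreads blocks over groups; each group's
    PCS then spreads its share over its own lanes."""
    out = []
    for share in round_robin(blocks, groups):
        out.extend(round_robin(share, lanes_per_group))
    return out

blocks = list(range(24))
lanes = hierarchical_map(blocks, groups=4, lanes_per_group=2)
assert len(lanes) == 8                   # 4 groups x 2 lanes = 8 lanes
assert sorted(sum(lanes, [])) == blocks  # no block lost or duplicated
```

The point of the hierarchy is that no single mapper has to fan out to all lanes at once; each stage only handles its own, smaller fan-out.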

The Dynamic MAC has another significant challenge: how to handle skew. Skew is defined as the arrival-time difference between the earliest lane and the latest one. The Dynamic MAC may transmit signals over different cores, fibers, and routes. In 100 Gb/s Ethernet, the MLD (Multi-Lane Distribution) mechanism can cause lane swapping due to its bit-multiplexing strategy; therefore, 100 Gb/s Ethernet periodically inserts an AM (Alignment Marker) into each lane for skew handling and lane reordering. The Dynamic MAC inserts two types of AM: the intermediate layer inserts AM_outer, and the PCS inserts AM_inner. This enables local skew processing in the intermediate layer.
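The role of the alignment markers can be sketched as follows. This is a toy Python model under stated assumptions: the marker format, the marker period, and the 'skew' idle block are illustrative, not the actual AM encoding of the Dynamic MAC.

```python
def add_markers(lanes, period=4):
    """Insert an alignment marker ('AM', lane_id) every `period` blocks."""
    out = []
    for lid, lane in enumerate(lanes):
        marked = []
        for i, b in enumerate(lane):
            if i % period == 0:
                marked.append(('AM', lid))
            marked.append(b)
        out.append(marked)
    return out

def realign(received):
    """Identify each lane by its first AM, strip leading skew blocks,
    and restore the original lane order."""
    aligned = {}
    for lane in received:
        first = next(i for i, b in enumerate(lane)
                     if isinstance(b, tuple) and b[0] == 'AM')
        lane_id = lane[first][1]
        aligned[lane_id] = [b for b in lane[first:]
                            if not (isinstance(b, tuple) and b[0] == 'AM')]
    return [aligned[i] for i in sorted(aligned)]

lanes = [[0, 2, 4, 6], [1, 3, 5, 7]]
marked = add_markers(lanes)
# Simulate skew (an extra delay block) and lane swapping in transit.
received = [['skew'] + marked[1], marked[0]]
assert realign(received) == lanes
```

The two-level AM scheme in the text applies the same idea twice: AM_inner aligns lanes locally within a group, and AM_outer aligns the groups themselves.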

In this research, we show an example of a lane configuration that is considered optimal in terms of both scalability and skew processing. The details of signal generation are also described.

T2-4_Fig1

Fig.1 Dynamic MAC over Multi-core Fiber Optical Network

T2-4_Fig2

Fig.2 Internal Lane Structure of Dynamic MAC

Acknowledgment: This work is partly supported by the “Massively Parallel and Sliced Optical Network (MAPLE)” project funded by the National Institute of Information and Communications Technology (NICT), Japan.


Reference:

  1. K. Igarashi, D. Soma, Y. Wakayama, K. Takeshima, Y. Kawaguchi, N. Yoshikane, T. Tsuritani, I. Morita, and M. Suzuki, “Ultra-dense spatial-division-multiplexed optical fiber transmission over 6-mode 19-core fibers,” Optics Express, vol.24, no.10, pp. 10213-10231, 2016.
  2. K. Sugiura, M. Murakami, S. Okamoto, and N. Yamanaka, “Architecture of dynamic MAC using bit-by-bit mapping on massively parallel optical channels,'' 15th International Conference on IP+Optical Network (iPOP 2019), No. T3-4, May 2019.
  3. K. Sugiura, M. Murakami, S. Okamoto, and N. Yamanaka, “Implementing Hierarchical Round-Robin Mapper Emulator for Realizing the Dynamic MAC,” IEICE Technical Report on Photonic Network, Vol. 119, No. 290, PN2019-31, pp. 49-54, Nov. 2019.



Biography:

Kyosuke Sugiura received his B.E. degree from Keio University in 2019. He is currently a master's course student in the Graduate School of Science and Technology, Keio University.

Gold Sponsor Session(1)
Thursday 10, Sept. 2020, 16:30-17:10
Chair: Hirofumi Yamaji, TOYO Corporation, Japan
G-1 "gRPC Enabled Massively Parallel and Flexible Network"
Yutaka Nasu, Keio University, Japan

Yutaka Nasu

Massively parallel transmission and flexible operations are essential for 1 Pb/s-class optical networks with Space Division Multiplexing (SDM). We propose the Dynamic MAC for SDM-enabled networks. The Dynamic MAC allows massively parallel transmission over 400 optical channels and handles skew caused by different wavelengths, fibers, and paths. The Dynamic MAC can be controlled via gRPC, which is popular as a control interface among network equipment. gRPC makes it possible to operate the MAC and other network devices together through the same interface. This demo shows flexible traffic engineering that combines not only routing but also link bandwidth control by the Dynamic MAC.




Biography:

Yutaka Nasu received his B.E. degree from Keio University in 2020. He is currently a master's course student in the Graduate School of Science and Technology, Keio University.





G-2 "OpenShift 5G edge computing
          Geo oriented Open 5G in the 5G era and beyond"
Hidetsugu Sugiyama, Red Hat K.K(Japan), Japan

Hidetsugu Sugiyama

In the 5G CNF era, not only will microservices accelerate, but so will distributed Kubernetes computing. Together with a service provider, we are now working on a local 5G edge test project for OpenShift 5G CNF infrastructure deployment that builds a 5G RAN and 5G Core along with a CI/CD environment for each industry player's business edge applications, such as eSports and SmartX use cases. We are using a new OpenShift-based P4 switch fabric in which the Kubernetes Master and Worker are embedded into a white-box switch, in addition to COTS servers. 5G UPF network slicing can run in the OpenShift P4 switch while enterprise container applications and 5G control-plane functions run on OpenShift COTS servers in the same cluster.
In this session, we will further elaborate on the distributed OpenShift architecture design for local 5G (user-oriented private 5G edge computing), managed locally by Kubernetes Operators instead of through expensive remote management from the telco.




Biography:

Hidetsugu Sugiyama is Chief Architect at Red Hat and focuses on 5G projects. Hidetsugu (Hyde) has been with Red Hat for seven years, working on SDN/NFV/edge computing solutions development and joint GTM with telco R&D partners. He has 30+ years of experience in the Information and Communications Technology industry. Prior to Red Hat, he worked at Juniper Networks for ten years as a Director of R&D Support, driving JUNOS SDK software development ecosystems and IP optical collaboration development in Japan and APAC. Prior to Juniper, he worked at service providers including Sprint and UUNET in team-leading roles.



Poster Session
Thursday 10, Sept. 2020, 17:10-18:10
P-1 "First-Large Fit Spectrum Allocation for Elastic Optical Network with Spectrum Slicing"
Kaito Akaki,and Nattapong Kitsuwan, The University of Electro-Communications, Japan

Kaito Akaki

An elastic optical network (EON) is an approach to utilizing channel spectrum efficiently by dynamically allocating spectrum to each incoming request. EON provides spectrum slots that are divided more finely than the spectrum grid of dense wavelength-division multiplexing (DWDM). EON assigns consecutive slots to a request so that huge-data-rate traffic can be accommodated and network capacity is increased. There are two rules for spectrum allocation in EON. First, the same spectrum slots of a request must be assigned from source to destination if spectrum conversion is not considered. Second, the spectrum slots for each request on each link must be consecutive.

A slicing and stitching technology, which breaks the second rule, has been invented to relax the consecutiveness constraint in EON. This technology splits a spectrum band into several optical components by making copies of the original spectrum band and filtering out the unwanted signal on each copy. The remaining optical components are injected into the transmission channel. At the destination, the optical components are recovered using phase-preserving wavelength conversion. A spectrum allocation scheme is needed to determine the location of the spectrum slots and the position of the splitting points.

A conventional spectrum allocation scheme consists of two allocation processes: logical assignment and physical assignment. The logical assignment increases the number of splits for the request until all split components can be assigned. The physical assignment determines the required number of slicers. The conventional scheme has two problems. First, blocking sometimes occurs in the logical assignment even though slicers remain available. Second, it uses more slicers than necessary. The proposed spectrum allocation scheme considers two cases. The first case allocates the request without splitting the original optical band; here, the smallest available slot area that fits the request is selected. The second case allocates the request with slicers; the largest available area is selected to allocate a part of the original optical band, and the algorithm repeats until the whole request is allocated. Each partial optical component is split off by a slicer.
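The two allocation cases can be sketched as follows. This is a simplified single-link Python model under stated assumptions: a slot array stands in for the spectrum, and one slicer is charged per split component; the paper's actual scheme also handles routing and recovery details not modeled here.

```python
def free_runs(slots):
    """Return (start, length) of each maximal run of free (0) slots."""
    runs, start = [], None
    for i, s in enumerate(slots + [1]):   # sentinel: occupied
        if s == 0 and start is None:
            start = i
        elif s != 0 and start is not None:
            runs.append((start, i - start))
            start = None
    return runs

def allocate(slots, demand, slicers):
    """Case 1: smallest free run that fits (no split, 0 slicers used).
    Case 2: otherwise split the demand over the largest runs, one
    slicer per component. Returns slicers used, or None if blocked."""
    runs = free_runs(slots)
    fitting = [r for r in runs if r[1] >= demand]
    if fitting:
        start, _ = min(fitting, key=lambda r: r[1])
        for i in range(start, start + demand):
            slots[i] = 1
        return 0
    used = 0
    for start, length in sorted(runs, key=lambda r: -r[1]):
        if demand == 0:
            break
        if used == slicers:
            return None
        take = min(length, demand)
        for i in range(start, start + take):
            slots[i] = 1
        demand -= take
        used += 1
    return used if demand == 0 else None

slots = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
assert allocate(slots, 2, slicers=2) == 0   # fits without splitting
assert allocate(slots, 5, slicers=2) == 2   # split over runs of 3 and 2
```

Choosing the smallest fitting run in case 1 preserves large free areas, while case 2 greedily consumes the largest areas to minimize the number of split components.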

The performance of the proposed scheme is evaluated by computer simulation. The results show that the proposed scheme outperforms the conventional scheme in terms of request blocking probability (RBP): with 20 slicers, the proposed scheme reduces the RBP by 42% compared with the conventional scheme when the traffic load is 500 Erl.


P-1_Fig1

Fig.1 Example of slot allocation in proposed scheme.


P-1_Fig2

Fig.2 Request blocking probability




Biography:

Kaito Akaki received the B.E. degree in Information Science and Engineering from The University of Electro-Communications, Tokyo, Japan, in 2020. He is currently pursuing a master's degree at the Department of Computer and Network Engineering, The University of Electro-Communications, Tokyo, Japan. His research interests include elastic optical networks.



P-2 "Automatic Camera Selection and Broadcast Method Based on Dynamic Optical Path among Edge Computers"
Yu Nishio, Masaki Murakami, Satoru Okamoto, and Naoaki Yamanaka, Keio University, Japan

Yu Nishio

The number of videos has been increasing and is expected to continue to do so. According to the Cisco Visual Networking Index (VNI) [1], 82% of IP traffic is predicted to be video by 2022. In particular, there has been a remarkable increase in live-streaming traffic: according to VNI, only 5% of video traffic was live streamed in 2017, but this is expected to increase to 17% by 2022. With multiple large 4K and 8K video streams and other high-capacity streams on the network, a dynamic optical path is needed for video delivery. Real-time streaming video also has three problems compared with recorded video. First, the video a user wants changes as the content changes over time. Second, it is not possible to deal with objects unintentionally captured by the distributor. Finally, there is a dearth of ways to distinguish between similar videos. We therefore propose an automatic camera selection and broadcast method based on dynamic optical paths among edge computers, as illustrated in Fig. 1.

In this method, each camera that is recording video first sends the video to a video management edge server. Users who want to watch videos send their preferences to a video distribution edge server in advance. Next, the video management edge server uses machine learning to tag each frame of the video, as shown in Fig. 2; the tag weight is set according to the size of the object in the frame. Then, using the user's preferences and the video's tags, it computes a score for each video. Based on the scores, the videos to watch are decided for the user. A dynamic optical path between the video management edge server to which the video belongs and the video distribution edge server to which the user belongs is then used to distribute the video. The video distribution edge server holds several high-scoring videos for the user at the same time.

This method addresses the three problems of camera selection in three key ways. First, we create metadata for each frame of the video. Next, we use machine learning to turn the objects in the frame into metadata tags. Finally, the weight of each tag is determined according to the fraction of the frame the object occupies. This approach allows users to watch video according to their preferences.
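The scoring idea can be sketched as follows. This is a minimal Python sketch: the tag labels, object areas, and preference values are hypothetical, and real tags would come from a machine-learning detector, not from hand-written tuples.

```python
def frame_tags(objects, frame_area):
    """Tag weight = area fraction of each detected object in the frame.
    `objects` is a list of (label, pixel_area) detections."""
    return {label: area / frame_area for label, area in objects}

def score(tags, preferences):
    """Video score = preference-weighted sum of tag weights."""
    return sum(preferences.get(t, 0) * w for t, w in tags.items())

# Hypothetical user preferences and per-camera detections.
prefs = {'ball': 1.0, 'player': 0.5}
cam_a = frame_tags([('ball', 4000), ('player', 16000)], 100000)
cam_b = frame_tags([('crowd', 50000)], 100000)
videos = {'A': score(cam_a, prefs), 'B': score(cam_b, prefs)}
best = max(videos, key=videos.get)
assert best == 'A'   # camera A matches this user's preferences
```

Because tag weights track object size per frame, the best-scoring camera changes automatically as the live content changes, which addresses the first problem above.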

P-2_Fig1

Fig.1 Automatic Camera Selection and Broadcast Method Based on Dynamic Optical Path among Edge Computers


P-2_Fig2

Fig.2 Camera selection using tags


References:

  1. Cisco, "Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2017–2022 White Paper," https://www.cisco.com/c/ja_jp/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html, accessed Jun. 2, 2020.



Biography:

Yu Nishio received his B.E. degree from Keio University in 2019. He is currently a master's course student in the Graduate School of Science and Technology, Keio University.



P-3 "Proposal to predict location information for unconnected vehicle and control autonomous driving vehicle at intersections using optical network"
Ryosuke Shirai, Satoru Okamoto and Naoaki Yamanaka, Keio University, Japan

Ryosuke Shirai

A smart city that utilizes cyber-physical systems (CPS) to solve urban problems is attracting much attention. CPS is a system that uploads data gathered in the real world to cyberspace via a network and feeds the analysis results back to the real world. To realize CPS, a cloud server and many edge servers are connected by an optical network.

One example of utilizing CPS in a smart city is the autonomous driving vehicle (ADV). In an ADV platform, information gathered from connected cars and roadside units (RSUs) is sent to cyberspace. The ADV can then perform appropriate operations according to the surrounding environment because it is controlled based on calculation results in cyberspace. In addition, because the edge servers are linked by an optical network, it is possible to control ADVs with various QoS requirements.

Since collision accidents frequently occur at intersections, much research has been conducted on ADV control with safety in mind [1]. However, in these studies the ADV is controlled based only on the position information of connected vehicles, so a collision with an unconnected vehicle may still occur at an intersection.

To solve this problem, in this paper we propose a method to predict the location of unconnected vehicles from cameras in the smart city, and a method of ADV control at intersections where unconnected vehicles exist, both realized by linked edge servers. In the proposed method, an edge server controls the ADVs at each intersection; this edge computing is effective in terms of delay and network load. In addition, the ADVs are controlled by a CPS constructed from linked edge servers, so the edge servers are connected by an optical network.

However, a single camera cannot predict that an unconnected vehicle will slow down due to obstacles that are not visible to that camera. As a result, the ADV wastefully waits for the unconnected vehicle at the intersection, which reduces intersection throughput. Therefore, multiple cameras in the smart city are linked by an edge server to predict the motion of unconnected vehicles (Fig. 1).

For ADV control, we propose a control method based on the probability density of the positions of unconnected vehicles (Fig. 2). By adding acceleration control to the vehicle control method, this approach not only avoids collision accidents but also prevents a reduction in traffic throughput.
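One way to sketch such probability-density-based control is with a 1-D Gaussian model of the unconnected vehicle's predicted position along its lane. This is a minimal sketch under stated assumptions: the Gaussian model, the conflict-zone interval, and the risk threshold are illustrative, and the actual control law in the paper also includes acceleration control.

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def conflict_probability(mu, sigma, zone):
    """Probability that the predicted position lies in the conflict zone."""
    lo, hi = zone
    return gaussian_cdf(hi, mu, sigma) - gaussian_cdf(lo, mu, sigma)

def control(mu, sigma, zone, threshold=0.05):
    """Enter the intersection only if collision risk is below threshold;
    otherwise decelerate and wait."""
    if conflict_probability(mu, sigma, zone) < threshold:
        return 'enter'
    return 'decelerate'

# Vehicle predicted 30 m before the conflict zone [0, 5] m: safe to enter.
assert control(mu=-30.0, sigma=5.0, zone=(0.0, 5.0)) == 'enter'
# Vehicle predicted inside the zone: decelerate.
assert control(mu=2.0, sigma=5.0, zone=(0.0, 5.0)) == 'decelerate'
```

Reasoning over the full density rather than a point estimate is what lets the ADV enter confidently when the risk mass in the conflict zone is small, instead of always waiting.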

In this paper, a bicycle is used as the model of the unconnected vehicle. Computer simulation shows that collision accidents can be avoided while maintaining high traffic throughput.


P-3_Fig1

Fig.1 Linked multiple cameras in the smart city


P-3_Fig2

Fig.2 The method of ADV control


References:

  1. J. Rios-Torres and A. A. Malikopoulos, "A Survey on the Coordination of Connected and Automated Vehicles at Intersections and Merging at Highway On-Ramps," in IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 5, pp. 1066-1077, May 2017, doi: 10.1109/TITS.2016.2600504.



Biography:

Ryosuke Shirai received his B.E. degree from Keio University in 2020. He is currently a master's course student in the Graduate School of Science and Technology, Keio University.



P-4 "Path Establishment Methods considering Fairness due to Path Length Difference in Distributed Control Elastic Optical Networks"
Sora Yoshiyama, and Ken-ichi Baba, Kogakuin University, Japan

Sora Yoshiyama

1. Introduction
In recent years, EONs (Elastic Optical Networks), which can utilize bandwidth efficiently and flexibly, have attracted attention. In previous research, we proposed a path establishment method executing RMSA (Routing, Modulation Level and Spectrum Allocation) with distributed control. In EON, long-distance paths tend to be rejected more often than short-distance paths: because EON imposes the adjacency and continuity constraints on spectrum utilization, long-distance paths need more network resources, such as frequency spectra, than short-distance paths, and under these two constraints they find it harder to obtain such resources.

2. Our Proposed Methods
In this study, we propose path establishment methods that determine a suitable route from multiple candidate routes while considering fairness with respect to path length difference in distributed-control EON. We propose three methods (Fig. 1). In the first method, the destination node chooses the route with the smallest number of hops among the candidates. In the second method, the destination node chooses the route with the lowest average link utilization; by using lightly utilized links, long-distance paths have a better chance of obtaining resources. In the third method, the destination node chooses the route with the smallest total degree of its transit nodes, since paths tend to concentrate on high-degree nodes.
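The three selection rules can be sketched as follows. This is a minimal Python sketch; the candidate-route records and their values are illustrative, not data from the evaluation.

```python
def choose_route(candidates, method):
    """Each candidate route is a dict:
       'hops': hop count, 'link_util': utilization per link,
       'degrees': degree of each transit node.
    method 1: fewest hops; method 2: least average link utilization;
    method 3: smallest total degree of passing nodes."""
    if method == 1:
        key = lambda r: r['hops']
    elif method == 2:
        key = lambda r: sum(r['link_util']) / len(r['link_util'])
    else:
        key = lambda r: sum(r['degrees'])
    return min(candidates, key=key)

routes = [
    {'name': 'A', 'hops': 2, 'link_util': [0.9, 0.8], 'degrees': [5, 4]},
    {'name': 'B', 'hops': 3, 'link_util': [0.2, 0.3, 0.1], 'degrees': [2, 3, 2]},
]
assert choose_route(routes, 1)['name'] == 'A'   # fewest hops
assert choose_route(routes, 2)['name'] == 'B'   # least loaded links
assert choose_route(routes, 3)['name'] == 'B'   # smallest total degree
```

In the proposed methods this choice is made at the destination node, among at most M candidate path messages it has received.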

3. Performance Evaluation
We evaluate the proposed methods by simulation, using the JPN12 topology as the network model. The number of slots is 320 per link. Path setup requests follow a Poisson distribution, and path duration (1/µ) follows an exponential distribution with an average of 25 seconds. The transmission capacity of each path is uniformly distributed in the range of 1 Gbps to 50 Gbps. The maximum number of path messages received by the destination node is M = 3. Figure 2 shows the request blocking probability for the entire network. Figure 3 shows the request blocking probability for each hop count when the arrival rate λ is 70, with the horizontal broken line showing the overall request blocking probability. In Fig. 2, the probabilities of methods 1 and 3 are similar; at λ = 70, method 2, which performs best, is 83.7% better than the conventional method. In Fig. 3, methods 1 and 3 show smaller differences due to distance than the conventional method. In method 2, the difference due to distance is not fully removed, but the best overall performance is achieved.

4. Conclusions
In this study, we showed the performance of the proposed path establishment methods, which consider fairness with respect to path length difference in distributed-control EON.


P-4_Fig1

Fig.1 Proposed methods


P-4_Fig2

Fig.2 Request blocking probability


P-4_Fig3

Fig.3 Request blocking probability by hops




Biography:

Sora Yoshiyama received his B.E. degree from Kogakuin University in 2019. He is currently a master's course student in the Electrical Engineering and Electronics Program, Graduate School of Engineering, Kogakuin University.



P-5 "A Study on TCP Fairness between TCP BBR and CUBIC TCP"
Kanon Sasaki, Saneyasu Yamaguchi, Kogakuin University, Japan

Kanon Sasaki

I. INTRODUCTION
Previous work revealed that TCP BBR could not accurately estimate the bottleneck bandwidth when the RTT (round-trip time) was increasing, and showed that this caused unfairness between TCP BBR and CUBIC TCP [1]. That work focused on a significant and impractical increase of RTT, from 0 ms to 1000 ms. In this paper, we focus on a practical situation wherein the RTT does not increase as rapidly.

II. RELATED WORK
We showed that the performance fairness between TCP BBR and CUBIC TCP was severely low [2] and revealed that the estimated bottleneck link bandwidth of TCP BBR decreased remarkably, by focusing on an impractical experimental situation in which the RTT increased significantly. However, the behavior of the estimated bottleneck bandwidth in a practical situation has not been discussed.

III. TCP BBR
Cardwell et al. proposed TCP BBR [3]. Unlike popular TCP algorithms such as CUBIC TCP, it is not a loss-based algorithm. It estimates the bandwidth and the delay during communication and then sets its congestion window size based on the BDP (bandwidth-delay product).
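The estimation idea can be sketched as follows: BBR keeps a windowed maximum of delivery-rate samples (BtlBw) and a windowed minimum of RTT samples (RTprop), and sizes the congestion window from their product. This is a simplified Python model under stated assumptions; the window sizes and the gain value are illustrative, not BBR's exact parameters.

```python
from collections import deque

class BbrEstimator:
    """Windowed-max bandwidth and windowed-min RTT estimates;
    cwnd is set to a gain times the bandwidth-delay product."""
    def __init__(self, bw_window=10, rtt_window=10, gain=2.0):
        self.bw_samples = deque(maxlen=bw_window)
        self.rtt_samples = deque(maxlen=rtt_window)
        self.gain = gain

    def on_ack(self, delivered_bps, rtt_s):
        """Record one delivery-rate and RTT sample from an ACK."""
        self.bw_samples.append(delivered_bps)
        self.rtt_samples.append(rtt_s)

    def cwnd_bytes(self):
        btl_bw = max(self.bw_samples)     # bottleneck bandwidth estimate
        rt_prop = min(self.rtt_samples)   # propagation RTT estimate
        return self.gain * btl_bw / 8 * rt_prop

bbr = BbrEstimator()
for bw, rtt in [(400e6, 0.05), (500e6, 0.05), (480e6, 0.06)]:
    bbr.on_ack(bw, rtt)
assert bbr.cwnd_bytes() == 2.0 * 500e6 / 8 * 0.05
```

This model also hints at the failure mode studied here: while the RTT is rising, new delivery-rate samples shrink, so the windowed maximum (and hence the cwnd) eventually decays even though the bottleneck link itself is unchanged.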

IV. EVALUATION OF BBR’S BANDWIDTH ESTIMATION
In this section, we explore the behavior of TCP BBR's bottleneck bandwidth estimation in a situation wherein the RTT is increasing. We constructed the testbed network shown in Fig. 1. The Queue PC emulated the network delay and increased it during the experiments. The bandwidth of the link between the Queue PC and the Receiver PC was shaped to 500 Mbps; the other links were 1 Gbps. The emulated delay increased from 50 ms to 1050 ms in 0.1 ms steps over 70 s; that is, the RTT increased at 14 ms/s. Figs. 2 to 5 depict the experimental results. The RTT started increasing 15 seconds after the beginning of the measurements. Figs. 2 and 3 indicate that the throughput and the congestion window size, respectively, started decreasing at 20 s, which is 5 s after the start of the RTT increase, and kept decreasing until the next DRAIN at 25 s. Fig. 4 implies that the decrease in the congestion window size was caused by the decrease of BtlBw. Fig. 5 shows that TCP BBR increased its BtlBw with its pacing mechanism; however, this positive effect (increasing speed) was smaller than the negative effect of the RTT increase. Consequently, TCP BBR could not create a temporal queue after 18 s, and its delivered rate decreased after this, around 20 s.

V. CONCLUSION
In this paper, we focused on the TCP fairness between TCP BBR and CUBIC TCP and discussed its cause. We showed that an increase in RTT decreases TCP BBR's congestion window size.


P-5_Fig1

Fig.1 Experimental network


P-5_Fig2 P-5_Fig3

Fig.2 Throughput

Fig.3 Congestion window size


P-5_Fig4 P-5_Fig5

Fig.4 BtlBw, RTprop

Fig.5 delivered, interval


Acknowledgment:
This work was supported by JSPS KAKENHI Grant Numbers 17K00109 and 18K11277 and JST CREST Grant Number JPMJCR1503, Japan.


References:

  1. K. Sasaki and S. Yamaguchi, "A Study on Bottleneck Bandwidth Estimation Based on Acknowledge Reception on TCP BBR," 2020 IEEE 44th Annual Computer Software and Applications Conference (COMPSAC), 2020.
  2. K. Sasaki, M. Hanai, K. Miyazawa, A. Kobayashi, N. Oda and S. Yamaguchi, "TCP Fairness Among Modern TCP Congestion Control Algorithms Including TCP BBR," 2018 IEEE 7th International Conference on Cloud Networking (CloudNet), Tokyo, 2018, pp. 1-4, doi: 10.1109/CloudNet.2018.8549505.
  3. Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, and Van Jacobson, “BBR: Congestion-Based Congestion Control,” Queue 14, 5, pages 50 (October 2016), 34 pages, 2016. DOI: https://doi.org/10.1145/3012426.3022184


Biography:

Kanon Sasaki received his B.E. degree from Kogakuin University in 2019. He is currently a master's student in Electrical Engineering and Electronics at the Graduate School, Kogakuin University.



P-6 "A Study on Service Identification by SNI"
Yuto Soma, Saneyasu Yamaguchi, Aki Kobayashi, Kogakuin University, Japan, Masato Oguchi, Ochanomizu University, Japan, Akihiro Nakao, Shu Yamamoto, The University of Tokyo, Japan

Yuto Soma

I. INTRODUCTION
Identifying the service of an IP flow at a network element enables many things, for example, giving higher priority to an emergency service's flow during a severe disaster. Unfortunately, much recent traffic is encrypted with TLS (Transport Layer Security), especially TLS 1.2, and extracting information from the encrypted data is difficult. Yamauchi et al. [1] proposed a method for identifying the service of TLS 1.2 flows by analyzing the occurrence of SNIs (Server Name Indication) in the unencrypted part. However, the occurring SNIs may depend on the capture time, and this dependency may severely decrease the identification accuracy.
In this paper, we evaluate the identification accuracy of the method with traffic captured on two different days and discuss the effect of the capture season and data freshness on identification accuracy.

II. RELATED WORK
The identification method [1] is composed of a preliminary investigation phase and an identification phase.
In the preliminary investigation phase, the method records the SNIs occurring on accesses to each service and creates a database of SNI occurrences per service, as shown in Fig. 2. When a user accesses service X, multiple TLS sessions are established, as shown in Fig. 1, and each TLS session has an SNI field. In the identification phase, the method analyzes the SNI occurrences of an unidentified flow and calculates the likelihood of every service using Bayesian inference.
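The Bayesian identification step can be sketched as follows. This is a Bernoulli naive-Bayes sketch in Python under stated assumptions: the SNI names, the smoothing constants, and the fallback probability for unseen SNIs are hypothetical, not the database or exact model of [1].

```python
import math

def train(observations):
    """observations: {service: [set_of_SNIs_per_access, ...]} ->
    per-service probability that each SNI occurs during one access,
    with Laplace smoothing so counts never yield probability 0 or 1."""
    model = {}
    for service, accesses in observations.items():
        counts = {}
        for snis in accesses:
            for sni in snis:
                counts[sni] = counts.get(sni, 0) + 1
        n = len(accesses)
        model[service] = {s: (c + 1) / (n + 2) for s, c in counts.items()}
    return model

def identify(model, observed_snis, vocabulary):
    """Pick the service with the highest log-likelihood of producing
    the observed SNI occurrence pattern."""
    best, best_ll = None, -math.inf
    for service, probs in model.items():
        ll = 0.0
        for sni in vocabulary:
            p = probs.get(sni, 0.01)   # small prob for unseen SNIs
            ll += math.log(p if sni in observed_snis else 1 - p)
        if ll > best_ll:
            best, best_ll = service, ll
    return best

obs = {
    'maps':  [{'maps.example.com', 'fonts.example.com'}] * 9,
    'video': [{'video.example.com', 'fonts.example.com'}] * 9,
}
vocab = {'maps.example.com', 'video.example.com', 'fonts.example.com'}
model = train(obs)
assert identify(model, {'maps.example.com', 'fonts.example.com'}, vocab) == 'maps'
```

The sketch also makes the paper's point concrete: if the services' real SNI sets drift after training, the stored occurrence probabilities no longer match the observations, and the likelihoods degrade.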

III. EVALUATION
In this section, we evaluate the identification accuracy of the method with the same 15 Google services as in [1]. For the evaluation, we accessed each service 100 times with the Mozilla Firefox 52.2 web browser and captured the traffic. We performed captures in Dec. 2018 and Aug. 2019. We used the traffic of the first 90 accesses to each service for training, i.e., creating the database, and that of the other 10 accesses for testing, i.e., identification.
Fig. 3 shows the accuracy of identification when training and testing with the 2018 data and when training and testing with the 2019 data. Fig. 4 shows the average identification accuracy when training and testing with the 2018 data, when training and testing with the 2019 data, and when training with the 2018 data and testing with the 2019 data. Fig. 3 shows that the method can identify the service well, independent of the season, if the training data is fresh. Fig. 4 implies that the method cannot identify the service correctly if the training data is not fresh. From these results, we conclude that the training database should be updated to keep it fresh.

IV. CONCLUSION
In this paper, we focused on a service identification method based on SNI analysis and showed that the method requires keeping the training data up to date.


P-6_Fig1

Fig.1 Traffic Model of Accesses to Service


P-6_Fig2

Fig.2 SNI Occurrence Vector


P-6_Fig3

Fig.3 The Identification Accuracy


P-6_Fig4

Fig.4 Average of identification accuracy


    Acknowledgment:
    This work was supported by JSPS KAKENHI Grant Numbers 17K00109, 18K11277 and JST CREST Grant Number JPMJCR1503, Japan.


    References:

    1. H. Yamauchi, A. Nakao, M. Oguchi, S. Yamamoto and S.Yamaguchi, "A Study on Service Identification Based on Server Name Indication Analysis," 2019 Seventh International Symposium on Computing and Networking Workshops (CANDARW), Nagasaki, Japan, 2019, pp. 470-474, doi: 10.1109/CANDARW.2019.00089.



    Biography:

    Soma Yuto received his B.E. degree from Kogakuin University in 2019. He is currently a Master’s student in Electrical Engineering and Electronics at the Graduate School, Kogakuin University.



    P-7 "A Study on KVS Caching by Application Switch"
    Tomoaki Kanaya, Hiroaki Yamauchi, Saneyasu Yamaguchi, Kogakuin University, Japan, Akihiro Nakao, Shu Yamamoto, The University of Tokyo, Japan, and Masato Oguchi, Ochanomizu University, Japan

    Tomoaki Kanaya

    I. INTRODUCTION
    Programmable switches are growing in importance. We previously proposed an application switch in which some application functions are implemented inside the switch [1] and evaluated the performance of a KVS (key-value store) supported by the switch under simple loads, such as a uniform distribution [2]. In this paper, we evaluate the performance of a KVS supported by an application switch using more practical loads that follow a Zipf distribution with k = 1 to 2.

    II. APPLICATION SWITCH SUPPORTING TCP ACK
    Fig. 1 illustrates the overview of the application switch. In the normal case, a request from a client is processed by the server. With an application switch implementing caching, the request is processed in the switch itself: the switch creates the reply packet and sends it to the client. This behavior causes the sequence and acknowledgment numbers recognized by the client and server to diverge, so the switch rewrites these numbers when forwarding packets after a cache hit.
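    The rewriting described above can be sketched as simple per-connection offset bookkeeping (a hypothetical illustration; the names and the 32-bit wrap handling are our assumptions, not the switch's actual data path):

```python
class SeqAckTranslator:
    """Sketch: track the byte offset introduced by switch-generated replies and
    rewrite sequence/ack numbers on packets forwarded between client and server."""

    def __init__(self):
        self.offset = 0  # bytes the client has received that the server never sent

    def on_cache_hit(self, reply_len):
        # The switch answered on the server's behalf: the client's view of the
        # server's sequence space advances, while the server's does not.
        self.offset += reply_len

    def to_client(self, server_seq):
        # Server -> client direction: shift sequence numbers up by the injected bytes.
        return (server_seq + self.offset) % 2**32

    def to_server(self, client_ack):
        # Client -> server direction: remove the injected bytes from the ack number.
        return (client_ack - self.offset) % 2**32
```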

    III. PERFORMANCE EVALUATION
    Here, we present the performance evaluation of the proposed method under loads with various access skews. We measured the reply times of the Cassandra KVS supported by an application switch. A caching function for data-retrieving queries, i.e. SELECT queries, is implemented in the switch. The switch deeply inspects the payload of each packet, which is called DPI (Deep Packet Inspection), extracts the query in it, and creates a reply packet if the requested data are stored in the switch's cache. We issued 1,000 SELECT queries and measured the time to complete each query. The KVS database has 1,000 key-value pairs whose value size is 100 bytes. The cache size is 100 pairs. The key to retrieve was randomly selected from the 1,000 pairs following a Zipf distribution: the probability of the n-th pair is P(n) = (1/n^k) / Σ_{m=1}^{1000} (1/m^k). k ranged from 0.5 to 2.0.
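    The Zipf workload described above can be reproduced, for instance, as follows (function names are illustrative, not the authors' load generator):

```python
import random

def zipf_weights(n_items, k):
    """Normalized Zipf weights: P(n) = (1/n^k) / sum_{m=1}^{N} (1/m^k)."""
    raw = [1.0 / (n ** k) for n in range(1, n_items + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def sample_keys(n_items, k, n_queries, seed=0):
    """Draw query targets (1-based key indices) following the Zipf skew k."""
    rng = random.Random(seed)
    weights = zipf_weights(n_items, k)
    return rng.choices(range(1, n_items + 1), weights=weights, k=n_queries)
```

    With 1,000 pairs and a 100-entry cache holding the most popular keys, larger k concentrates queries on the cached head of the distribution, which matches the observed performance gain.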
    Fig. 2 depicts the results. They indicate that the performance improves as k increases, i.e. as the skew becomes larger. We conclude that the proposed application switch is especially useful in practical cases, i.e. accesses with strong locality.

    IV. CONCLUSION
    In this paper, we introduced an application switch supporting a KVS with a cache. We then evaluated the switch and showed that it is effective especially for practical access patterns, such as accesses with strong locality.


    P-7_Fig1

    Fig.1 The overview of optimization of Application Switch


    P-7_Fig2

    Fig.2 Experimental results with the normal method and proposed method.


    Acknowledgment:
    This work was supported by JST CREST Grant Number JPMJCR1503, Japan, and by JSPS KAKENHI Grant Numbers 26730040, 15H02696, and 17K00109.


    Reference:

    1. T. Kanaya, H. Yamauchi, S. Nirasawa, A. Nakao, M. Oguchi, S. Yamamoto, and S. Yamaguchi, "Intelligent Application Switch Supporting TCP," IEEE Int. Conf. Cloud Netw., Tokyo, Japan, 2018
    2. T. Kanaya, A. Nakao, S. Yamamoto, M. Oguchi, and S. Yamaguchi, "Intelligent Application Switch and Key-Value Store Accelerated by Dynamic Caching," the workshop DBDM 2020: Distributed Big Data Management in the 2020 IEEE 44th Annual Computer Software and Applications Conference (COMPSAC), 2020



    Biography:

    Tomoaki Kanaya received his B.E. degree from Kogakuin University in 2019. He is currently a Master's student in Electrical Engineering and Electronics at the Graduate School, Kogakuin University.



    P-8 "Throughput Fairness of TCP BBR with Fixed BtlBw"
    Kouto Miyazawa, Saneyasu Yamaguchi, and Aki Kobayashi, Kogakuin University, Japan

    Kouto Miyazawa

    I. INTRODUCTION
    In this paper, we show that the throughput fairness of TCP BBR connections is low when multiple TCP BBR connections share a bottleneck link. We then present a performance evaluation of a modified TCP BBR whose BtlBw is fixed, which implies that this unfairness is caused by unsuitably estimated BtlBw.

    II. Related work
    A. TCP BBR
    TCP BBR is a congestion control algorithm proposed in 2016 by Cardwell et al. [1]. BBR assumes that it is ideal to set the congestion window size to the Bandwidth-Delay Product (BDP). To obtain the BDP, TCP BBR estimates the bandwidth of the bottleneck link, called BtlBw, and the physical propagation delay, called RTprop, from several values measured during the communication, such as the Round-Trip Time (RTT) and the delivery rate.
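    The BDP that BBR targets is simply BtlBw × RTprop; a tiny illustration of the arithmetic (not BBR's actual kernel code):

```python
def bdp_bytes(btlbw_bps, rtprop_s):
    """Bandwidth-delay product: bytes in flight needed to fill the bottleneck
    link of bandwidth btlbw_bps (bits/s) with propagation delay rtprop_s (s)."""
    return btlbw_bps / 8 * rtprop_s

# e.g. a 100 Mbit/s bottleneck with 40 ms propagation delay:
# bdp_bytes(100e6, 0.040) -> about 500,000 bytes (~345 packets of 1448 B payload)
```

    If BtlBw is overestimated, the computed BDP (and thus the amount of data kept in flight) grows beyond the true pipe size, which is the suspected cause of the unfairness studied here.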

    III. Performance Evaluation
    To discuss the mechanism of this unfairness, we evaluated the throughputs of TCP BBR connections with iperf on a network consisting of two data-sending machines and one data-receiving machine. Each sending machine established ten connections. We executed two experiments. One used the original TCP BBR on both sending machines. The other used the original TCP BBR on one sending machine and the modified TCP BBR, whose BtlBw was fixed, on the other.
    Fig. 1 shows the results of the first experiment. Figs. 2 and 3 show the results of the modified and original TCP BBRs in the second experiment, respectively. These results indicate that the throughput fairness of the original TCP BBR is severely low, while that of the TCP BBR with fixed BtlBw is remarkably higher.

    IV. Conclusion
    In this paper, we showed that the throughput fairness of original TCP BBR connections is low, while that of the modified TCP BBR whose BtlBw is fixed is remarkably higher. This implies that the unfairness is caused by unsuitable estimation of BtlBw.


    P-8_Fig1

    Fig.1 Throughputs of 20 TCP BBR connections (original)


    P-8_Fig2

    Fig.2 Throughputs of 10 TCP BBR connections (fixed BtlBw)


    P-8_Fig3

    Fig.3 Throughputs of 10 TCP BBR connections (original)


    Acknowledgment:
    This work was supported by JSPS KAKENHI Grant Numbers 17K00109 and 18K11277, and by JST CREST Grant Number JPMJCR1503, Japan.


    References:

    1. Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, and Van Jacobson, “BBR: Congestion-Based Congestion Control,” ACM Queue, vol. 14, no. 5, October 2016. DOI: https://doi.org/10.1145/3012426.3022184



    Biography:

    Kouto Miyazawa received his B.E. degree from Kogakuin University in 2019. He is currently a Master's student in Electrical Engineering and Electronics at the Graduate School, Kogakuin University.



    Friday 11, Sept. 2020

    Technical Session
    Tech. Session (3): Networking with Machine Learning
    Friday 11, Sept. 2020, 9:40-11:20
    Chair: Akira Hirano, Tokyo Denki University, Japan
    T3-1 "Analysis of the Optical Data Center Network Slicing Strategy in Support of Machine Learning Applications with Multiple Patterns"
    Cen Wang, Noboru Yoshikane, and Takehiro Tsuritani, KDDI Research Inc., Japan

    Cen Wang

    Great progress has been witnessed in the machine learning era. In the majority of machine learning jobs, a great amount of data must be processed to train and evaluate a model. To pursue efficiency, distributed frameworks are deployed in data centers (DCs) to break the computing-resource limitation of a single CPU/GPU. To this end, the data center network (DCN) transports massive intermediate results among CPUs/GPUs plugged into dispersed servers. These CPUs/GPUs employ the message passing interface (MPI) to realize target operations (e.g. all-reduce, reduce, and broadcast). We call an operation of the MPI an MPI job. An MPI job is composed of several communication steps, and in the network-layer implementation, different steps require various communication patterns. When multiple MPI jobs run concurrently in the DCN, congestion can deteriorate the completion time of any of them. Thus, lowering the completion time of an MPI job is the most important optimization goal.
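    As an example of the step structure of such a job, the per-step communication pattern of a ring all-reduce (one common realization of the all-reduce operation; not necessarily the exact pattern used in this work) can be generated as:

```python
def ring_allreduce_schedule(n):
    """Per-step (src, dst, chunk) sends of a ring all-reduce over n workers:
    n-1 reduce-scatter steps followed by n-1 all-gather steps, each worker
    sending one chunk to its ring neighbour per step."""
    steps = []
    for s in range(n - 1):  # reduce-scatter: worker w sends chunk (w - s) mod n
        steps.append([(w, (w + 1) % n, (w - s) % n) for w in range(n)])
    for s in range(n - 1):  # all-gather: worker w sends chunk (w - s + 1) mod n
        steps.append([(w, (w + 1) % n, (w - s + 1) % n) for w in range(n)])
    return steps
```

    Every one of the 2(n−1) steps requires a different set of concurrent transfers, which is why a slice's bandwidth and topology needs are time-dependent.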

    Previously, on electrically switched DCNs, computer-science researchers preferred to collect CPUs/GPUs within a rack to avoid insufficient inter-rack bandwidth (e.g., COOL proposed by the University of Waterloo [1]). But this strategy may not help when the residual intra-rack computing resources are not enough for an MPI job. In addition, intra-rack topology is usually tree-like, which does not naturally fit the jobs’ diverse communication patterns. On the other hand, introducing optical switching into the data center network (DCN) enables high throughput and low latency. Moreover, spatial cross-connections in an optical switching matrix bring topology flexibility. Taking these advantages, proper slicing of an optical DCN can improve the performance of MPI jobs. In a slice for an MPI job, we do not need to consider the placement of the CPUs/GPUs; instead, we can assign just enough bandwidth and design a matched topology. As a result, concurrent MPI jobs are isolated from one another, so less congestion occurs.

    In this work, a network slicing strategy (as shown in Fig. 1(a)) is proposed based on a hybrid (i.e., OCS and EPS) DCN architecture (as depicted in Fig. 1(b)). Through simulation, we generate multiple requests imitating MPI jobs with different patterns (as listed in Fig. 1(c)) to evaluate how well the proposed strategy lowers the average communication time (CT) of the MPI jobs. In the network slicing, we design each slice as a sequential (i.e. time-dependent) one because the steps change within an MPI job. The results show that the MPI jobs are successfully accelerated.


    T3-1_Fig1

    Fig.1 Optical data center network slicing for the MPI jobs with multiple patterns.


    References:

    1. Zuhair AlSader, “Optimizing MPI Collective Operations for Cloud Deployments,” thesis, University of Waterloo.




    Biography:

    Cen Wang received his Ph.D. degree in electrical engineering from Beijing University of Posts and Telecommunications in 2019 and is now with KDDI Research, Inc. as an associate researcher. His research interests span network modeling, AI + networking, and networking for AI applications. He has over 40 publications.



    T3-2 "Deep Reinforcement Learning based Computing Job Scheduling in Optical Cloud/Edge Data Center Network"
    Xiong Gao, Beijing University of Posts and Telecommunications, China, Cen Wang, KDDI Research Inc., Japan, and Hongxiang Guo, Beijing University of Posts and Telecommunications, China


    Cen Wang

    I. INTRODUCTION
    Edge computing is expected to assist traditional cloud computing in the 5G and IoT era to pursue lower delay and better user experience. To reduce the delays caused by accessing a faraway cloud DC, a large number of small-scale edge DCs are expected to be deployed close to users. However, running computing jobs only on edge DCs may suffer from insufficient computing resources. Our previous work verified that using the cloud DC and edge DCs in combination can balance sufficient computing resources against low delay [1]. Based on this conclusion, in this work we further design a scheduling strategy to optimally accommodate computing jobs with diverse demands, which can be regarded as an APX-hard multi-dimensional bin-packing problem [2]. Heuristic approaches cannot solve this problem optimally. Deep reinforcement learning (DRL) has shown great progress in network scheduling problems, which inspires us to adopt a DRL method to schedule jobs among the cloud DC and edge DCs.

    II. DRL LEARNING ON A GIVEN NETWORK ARCHITECTURE
    Fig. 1(a) shows a common network architecture connecting a cloud DC and multiple edge DCs. In the data plane, the edge DCs are interconnected through electrical switching, while any edge DC can reach the cloud DC through the optical transport network. End users or IoT devices connect to their nearby edge DCs to apply for resources to complete their computing jobs. In the control plane, both electrical and optical switches are connected to and configured by the SDN controller. Our DRL engine for optimizing the jobs is implemented in an orchestrator, which generates the scheduling strategy for the SDN controller. The DRL engine contains a deep Q-learning network (DQN) model with three parts: states, actions, and rewards. The states are defined as the available network resources, the available computing resources, and the resource demand of each job. We call the allocation of a job to a proper DC an action. Job slowdown is used to evaluate the performance of job scheduling mechanisms, as in previous studies [3]. The job slowdown is defined as S_j = C_j / T_j, where C_j is the completion time of job j and T_j is its ideal duration. Since we aim to reduce the average job slowdown, the reward is set as R = Σ_{j∈J} (−1 / T_j), where J is the set of all jobs currently in the system. According to the network architecture shown in Fig. 1(a), online learning proceeds as follows. The application manager gathers the jobs' demands, and the SDN controller and the computing resource manager obtain the available network resources and the available computing resources, respectively. The gathered information is retrieved by the DRL engine as the states. Then, the DRL engine discovers a scheduling decision by learning. The decision is translated and executed via the SDN controller. The job completion time under the current decision is fed back to the DRL engine; this feedback helps the DRL engine update its neural network parameters periodically to search for a better decision. After enough epochs of learning, the DRL engine can find an optimum scheduling strategy.
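    The slowdown and reward definitions used here amount to the following (function names are illustrative; the DQN itself is omitted):

```python
def average_slowdown(jobs):
    """Average job slowdown S_j = C_j / T_j over finished jobs,
    given as (completion_time, ideal_duration) pairs."""
    return sum(c / t for c, t in jobs) / len(jobs)

def step_reward(active_job_ideal_durations):
    """Per-step reward R = sum over jobs currently in the system of -1/T_j.
    Maximizing the cumulative reward penalizes keeping jobs in the system
    in proportion to 1/T_j, i.e. it minimizes the average slowdown."""
    return sum(-1.0 / t for t in active_job_ideal_durations)
```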
    III. EXPERIMENT EVALUATIONS
    We evaluate our DRL scheduler under different loads (i.e., numbers of jobs with various demands). As shown in Fig. 1(b), DRL outperforms previous methods such as Shortest Job First (SJF) and Packer [2] in average slowdown.

    References:

    1. Xiong Gao, Cen Wang, et al., “Demonstration on Computing Patterns Adjustment for Lower Job Completion Time in Metro Optical Inter-Cloud/Edge DC Network,” accepted by CLEO-PR 2020.
    2. R. Grandl, G. Ananthanarayanan, S. Kandula, et al., “Multi-Resource Packing for Cluster Schedulers,” ACM SIGCOMM Computer Communication Review, vol. 44, no. 4, 2014.
    3. H. Mao, M. Alizadeh, I. Menache, et al., “Resource Management with Deep Reinforcement Learning,” ACM HotNets Workshop, 2016.


    T3-2_Fig1

    Fig.1 (a). Network architecture; (b) Average slowdown under DRL, SJF and Packer.



    Biography:

    Xiong Gao received the B.S. degree in communication engineering from Chongqing University of Posts and Telecommunications, Chongqing, China, in 2016. He is currently pursuing the Ph.D. degree in electronic science and technology at Beijing University of Posts and Telecommunications (BUPT), Beijing, China. His research interests include routing algorithms, job scheduling, and resource management in data center networks and optically interconnected computing systems.



    T3-3 "Advanced Machine Learning-assisted Anomaly Detection Framework for BGP Networks"
    Genichi Mori, Junichi Kawasaki, Yusuke Suzuki, and Tomohiro Otani, KDDI Corporation/KDDI Research, Inc., Japan

    Genichi Mori

    I. Introduction
    In August 2017, a large-scale network failure occurred in Japan due to anomalous BGP updates, which affected many internet users. In networks using BGP, service providers (SPs) are required to detect anomalous updates as soon as possible in order to minimize the failure impact. However, BGP updates are generated even when the networks are normal. In addition, the update information is very complicated because it includes updates caused by other SPs' operations. Therefore, it is difficult to detect BGP anomalies with conventional monitoring systems.
    To address this issue, we propose and develop an anomaly detection framework based on machine learning (ML). It has a dataset creation and management function, which we proposed in previous work [1], an abstraction function for summarizing and visualizing all the routes, and a machine learning function for detecting anomalous updates from other SPs. In this presentation, we introduce the proposed framework and show the evaluation results of a CNN model on our BGP network testbed.

    II. Overview of Advanced Machine Learning-assisted Anomaly Detection Framework
    Figure 1 shows the detailed implementation of the proposed framework. In this framework, the automated fault generation function intentionally generates failures in the network under test to promptly gather a training dataset. A scenario defines each fault action, such as injecting anomalous routes that are close to an actual network failure. By running a large number of such scenarios, the framework accumulates many dataset samples. The dataset consists of performance monitoring (PM) data and route data collected from the network elements by the scalable monitoring function. When dataset creation is completed, the framework prepares the training data required for machine learning.
    In the machine learning phase, the abstraction function processes the full routing information in each sample to create summarized matrix-format data. The data consist of the number of prefixes, the maximum AS-path length, and the maximum MED value per subnet mask (e.g., /16). After that, the route visualization function shows the anomalous routes as images by comparing the summarized matrix-format data obtained in failure cases with the data obtained in the normal case. Finally, the framework inputs the images to a convolutional neural network (CNN) model, which is widely used for image classification, trains it, and evaluates whether the trained model can detect anomalous updates.
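    A minimal sketch of such a summarization, assuming routes arrive as (prefix, AS path, MED) tuples and are aggregated at /16 granularity (the data layout and names are our assumptions, not the framework's implementation):

```python
from collections import defaultdict

def summarize_routes(routes):
    """Summarize full BGP routing information into one row per /16:
    [#prefixes, max AS-path length, max MED].  `routes` is a list of
    (prefix, as_path, med) tuples with dotted-quad "a.b.c.d/len" prefixes."""
    rows = defaultdict(lambda: [0, 0, 0])
    for prefix, as_path, med in routes:
        octets = prefix.split("/")[0].split(".")
        key = f"{octets[0]}.{octets[1]}.0.0/16"   # aggregate at /16 granularity
        row = rows[key]
        row[0] += 1                               # number of prefixes
        row[1] = max(row[1], len(as_path))        # max AS-path length
        row[2] = max(row[2], med)                 # max MED value
    return dict(rows)
```

    Comparing such matrices between failure and normal captures yields the difference images fed to the CNN.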

    T3-3_Fig1

    Fig.1 Proposed Framework

    Acknowledgement:
    This work was conducted as part of the project entitled "Research and development for innovative AI network integrated infrastructure technologies (JPMI00316)" supported by the Ministry of Internal Affairs and Communications, Japan.


    References:

    1. Genichi Mori, Junichi Kawasaki and Masanori Miyazawa, "Machine Learning-assisted Network Analysis Framework for anomaly detection and RCA toward 5G", iPOP2019, May 2019.



    Biography:

    Genichi Mori received his B.S. in electronic engineering from Seikei University in 2007 and M.E. in information and communication engineering from the University of Electro-Communications in 2009. He joined KDDI Corporation in 2009 and has been engaged in the operation and development of IP core network systems and L3 VPN systems. Since 2017, he has been working at KDDI Research, Inc., engaged in network operation automation.



    T3-4 "Relearning Architecture for Autonomic Resource Arbitration in SFC Platform"
    Takahiro Hirayama, and Ved P. Kafle, NICT, Japan


    Takahiro Hirayama

    Network function virtualization (NFV) techniques enable us to implement softwarized network functions on general-purpose hardware. Service function chaining (SFC) is a framework to deploy the necessary network functions over an NFV infrastructure [1,2] so that the virtual network can satisfy the application's service requirements. A service function chain contains a series of virtualized network functions (VNFs), such as a load balancer, firewall, intrusion detection system (IDS), and content server. Automating SFC deployment and periodically adjusting the computational (e.g. CPU and memory), storage, and networking (e.g. bandwidth) resources of each network function (NF) would help realize efficient and stable service provisioning to customers under time-varying network conditions. In this paper, we introduce our research and experimental results on an autonomic resource (e.g. CPU) arbitration mechanism for an SFC platform.
    As a proof of concept (PoC), we have developed an SFC platform, as shown in Fig. 1. We have made it compliant with the SFC architecture specified in RFC 7665 and the network service header in RFC 8300. In this SFC platform, packets sent from end hosts are encapsulated by the Service Classifier (SC) node with service function headers (SFHs) that denote the SFCs to which the packets belong. The SC then distributes the encapsulated packets to their relevant paths. As the first step in validating our implementation of the SFC platform, we have installed our autonomic resource arbitration mechanism, which allocates appropriate CPU resources. The system continuously monitors the traffic arriving at each VNF and predicts the traffic volume of the near future. According to the prediction, it reallocates an adequate number of CPU cores to each VNF. Our prediction engine includes long- and short-term relearning mechanisms so that it stays up to date with changes in traffic trends. For the long- and short-term relearning, we apply a forgetting process and a dynamic ensemble process, respectively [3]. In the forgetting process, a number of regressors are refreshed with traffic data gathered over a long period (e.g. weekly or monthly). Meanwhile, in the dynamic ensemble process, only one regressor deployed on each server is retrained with the most recent data gathered within a short period (e.g. a few days). We show that these relearning mechanisms use CPU resources efficiently. We also present the results of our first experiment on prediction-based autonomic resource arbitration in the SFC platform. Figure 2 shows the results of resource arbitration based on demand prediction: Fig. 2(a) shows the dynamic allocation of CPU cores as traffic fluctuates, and Fig. 2(b) shows the improvement in CPU utilization achieved by the proposed resource arbitration mechanism.
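    The two relearning loops can be sketched as follows (a hypothetical skeleton, assuming regressors with a fit()/predict() interface; the pool size and simple prediction averaging are illustrative, not the authors' design):

```python
class RelearningPredictor:
    """Sketch of the two relearning loops: a pool of regressors periodically
    refreshed on long-horizon traffic data (forgetting), plus one regressor
    retrained on the most recent window (dynamic ensemble)."""

    def __init__(self, make_regressor, n_regressors=4):
        self.make_regressor = make_regressor
        self.pool = [make_regressor() for _ in range(n_regressors)]
        self.recent = make_regressor()

    def long_term_refresh(self, X_long, y_long):
        # Forgetting: rebuild the pool from weekly/monthly traffic data.
        self.pool = [self.make_regressor() for _ in self.pool]
        for r in self.pool:
            r.fit(X_long, y_long)

    def short_term_refresh(self, X_recent, y_recent):
        # Dynamic ensemble: retrain only one regressor on data from a few days.
        self.recent.fit(X_recent, y_recent)

    def predict(self, x):
        # Combine long-horizon and recent regressors for the traffic forecast.
        preds = [r.predict(x) for r in self.pool] + [self.recent.predict(x)]
        return sum(preds) / len(preds)
```

    The cheap short-term refresh runs frequently, while the expensive pool rebuild runs rarely, which is how the relearning stays current without wasting CPU.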


    T3-4_Fig1

    Fig.1 PoC implementation of SFC platform and autonomic resource arbitration system.


    T3-4_Fig2

    Fig.2 Experimental results.


    References:

    1. IETF RFC 7665, "Service Function Chaining (SFC) Architecture," Oct. 2015.
    2. IETF RFC 8300, "Network Service Header (NSH)," Jan. 2018.
    3. T. Hirayama et al., IEEE NetSoft, July 2020.



    Biography:

    Takahiro Hirayama received an M.S. degree and a Ph.D. degree from Osaka University in 2010 and 2013, respectively. In April 2013, he joined the National Institute of Information and Communications Technology (NICT), where he is now a researcher at the Network System Institute. His research interests are in complex networks, optical networks, software-defined networking (SDN), and autonomic network management. He is a member of IEICE and IEEE.



    Gold Sponsor Session
    Gold Sponsor Session(2)
    Friday 11, Sept. 2020, 11:20-12:00
    Chair: Hirofumi Yamaji, TOYO Corporation, Japan
    G-3 "Who is the Integrated Automation Hero?"
    Chang Kyu Kim, UBiqube, Japan
    Chang Kyu Kim



    Biography:

    Chang Kyu Kim, Head of Sales & Partnerships, Asia & Far East, Ubiqube
    - Chang-Kyu Kim leads UBiqube’s APAC business development and sales, focusing on developing an ecosystem where cross-industry alliances and partnerships are necessary to build multi-domain, multi-vendor service automation in the connected world.
    - Chang-Kyu has a 25-year track record of building businesses from scratch to global presence as a key member of startups in many areas, including neural networks, cybersecurity, and process optimization.



    G-4 "Activities for Future Flexible Transport Network"
    Aki Fukuda, NTT, Japan
    Aki Fukuda

    Biography:

    Aki Fukuda received the B.E. and M.E. degrees from Akita University in 2007 and 2009, respectively. She joined Nippon Telegraph and Telephone Corporation (NTT) in 2009. Since then, she has carried out research on control and operation of next-generation IP and optical transport networks. She is currently studying SDN technology for wide-area IP and optical transport networks. She is a member of IEICE.

    Technical Session
    Technical Session (4): Network Resiliency and Automation
    Friday 11, Sept. 2020, 13:00-14:40
    Chair: Yuta Wakayama, KDDI Research, Japan
    T4-1 "Applying Partial Key Grouping-based Multipath Routing to a Highly Reliable Core Network"
    Taichi Okumura, Masaki Murakami, Yoshihiko Uematsu, Satoru Okamoto, and Naoaki Yamanaka, Keio University, Japan

    Taichi Okumura

    The transmission capacity of optical core networks has increased from 100 Gb/s to 400 Gb/s and will continue to grow. Along with this, widely used protection methods such as Dedicated Path Protection consume a lot of network resources. Therefore, technologies that realize fault tolerance equivalent to protection with fewer network resources are required. One such technology is multipath routing, which uses multiple paths in parallel and realizes failure recovery at relatively low cost [1]. Unlike single-path routing, multipath routing uses several transfer routes, so care is required in distributing packets. Distribution methods are generally divided into two types: flow-based and round-robin-based. In flow-based distribution, packets of the same flow are transferred on the same route by using a hash. Round-robin-based distribution, on the other hand, forwards packets to the routes in sequence. Although flow-based distribution is desirable to prevent packet reordering, it may transfer multiple flows with large data sizes over the same path and exceed the path capacity, so the reserved capacity cannot be used effectively. To use all the reserved capacity, multipath routing adopts round-robin-based distribution, with a buffer for packet-order control on the receiving side. However, when a path fails, most flows traverse the defective route, and in the case of TCP, throughput degrades significantly due to packet loss.
    In this paper, we propose a packet distribution method that uses the capacity effectively, like round-robin-based distribution, while reducing the number of TCP flows affected by a failure. We apply Partial Key Grouping (PKG) [2], one of the stream management methods used in distributed stream processing engines, to packet distribution. PKG sits between the flow-based and round-robin-based approaches: it performs load balancing while limiting the processing engines handling data with the same key. We use PKG because it can reduce the number of damaged flows by limiting the paths of each flow even if one path fails, while still distributing the load between paths. In addition, when considering packet distribution, attention must be paid to large flows, called elephant flows, to prevent the packet loss caused by transmitting them over the same route; the proposed method designates paths for them to enable further load balancing. An example is illustrated in Fig. 1. In simulation, we used the fairness index as the load-balancing metric. The proposed method reduced the number of flows damaged by a failure to about 50% while maintaining a fairness index equivalent to round-robin-based distribution.
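    The core of PKG applied to path selection can be sketched as a two-choices hash (an illustration under our assumptions, not the authors' implementation): each flow key maps to two candidate paths, and each packet takes the currently less loaded of the two, so a single path failure can only damage flows whose candidate pair includes it.

```python
import hashlib

def _h(key, salt):
    """Deterministic hash of a flow key under one of two salts."""
    return int(hashlib.sha256(f"{salt}:{key}".encode()).hexdigest(), 16)

class PkgDistributor:
    """Sketch of Partial Key Grouping for multipath packet distribution."""

    def __init__(self, n_paths):
        self.n_paths = n_paths
        self.load = [0] * n_paths  # bytes sent per path

    def candidates(self, flow_key):
        # Two candidate paths per flow key, via two independent hashes.
        return _h(flow_key, 0) % self.n_paths, _h(flow_key, 1) % self.n_paths

    def route(self, flow_key, pkt_len):
        # Send each packet over the less loaded of the flow's two candidates.
        a, b = self.candidates(flow_key)
        path = a if self.load[a] <= self.load[b] else b
        self.load[path] += pkt_len
        return path
```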


    T4-1_Fig1

    Fig.1 An example of proposed packet distribution


    Acknowledgement:
    This work is also partly supported by the R&D of innovative optical network technologies for supporting new social infrastructure project (JMPI00316) funded by the Ministry of Internal Affairs and Communications Japan.


    Reference:

    1. S. Sekigawa, S. Okamoto, N. Yamanaka, and E. Oki, "Expected Capacity Guaranteed Routing based on Dynamic Link Failure Prediction," 2019 Workshop on Computing, Networking and Communications (CNC), pp. 170-174, Honolulu, HI, USA, Feb. 2019.
    2. M. A. U. Nasir, G. D. F. Morales, D. García-Soriano, N. Kourtellis, and M. Serafini, “Partial key grouping: Load-balanced partitioning of distributed streams,” ArXiv, vol. abs/1510.07623, Oct. 2015.



    Biography:

    Taichi Okumura received his B.A. degree from Keio University in 2020.



    T4-2 "Next-Generation Closed-Loop Automation - An Inside View"
    Laurent Ciavaglia, Nokia, France

    Closed loops are essential means to achieve distributed end-to-end network automation and provide greater levels of operational autonomy, assurance, and optimization. Yet building flexible and interoperable automation solutions poses several challenges for closed-loop design and specification: How to define, compose, and tune multi-vendor closed loops in end-to-end and local management domains? How to coordinate interacting loops? How to dynamically manage the levels of supervision and autonomy of the closed loops? To overcome these challenges, the ETSI ZSM ISG is currently developing specifications (ETSI ZSM GS 009 series) on generic enablers for closed-loop management and operations. Leveraging modularity (Service-Based Architecture) as well as intent-based and model-driven approaches, these provide operators with unprecedented means to assemble and operate made-to-order, multi-vendor closed loops, to coordinate and mitigate conflicts between interacting closed loops, and to life-cycle manage diversified closed loops in a unified way.
    The work also investigates longer-term evolutions for next-generation closed-loop automation by incorporating advanced learning and cognitive capabilities at every stage of the closed loops.


    T4-2_Fig1

    Biography:

    Laurent Ciavaglia is Innovation and Standardization Expert at Nokia where he works at inventing future network automation technologies with focus on intent-driven, zero-touch and artificial intelligence techniques. Laurent serves as co-chair of the IRTF Network Management Research Group (NRMG) and participates in standardization activities related to network and service automation in IETF and ETSI.



    T4-3 "Node structure for high-availability and efficient metro ring network with optical burst signal"
    Kana Masumoto, Masahiro Nakagawa, Toshiya Matsuda, and Kazuyuki Matsumura, NTT, Japan

    Kana Masumoto

    We are investigating optical time-division multiplexing networks for low-cost metro networks [1]. Burst-mode optical amplification, especially using the Erbium-Doped Fiber Amplifiers (EDFAs) generally used in transport systems, is necessary for achieving metro-network distances. Burst-mode amplification with an EDFA causes an overshoot due to the transient response, so a method of suppressing the overshoot is essential [2].

    Gain clamping is a known method of suppressing the overshoot with standard EDFAs and has been widely analyzed for access networks [3]. For example, an EDFA and a light source for gain clamping have been applied between the OLT and the splitter, successfully extending the optical reach of long-reach PONs [3]. However, for metro networks, node configurations that take gain clamping into account have not been researched. Moreover, it is unreasonable to directly apply the above long-reach PON approach to metro networks. This is mainly because the amount of device resources increases with the number of nodes in a metro network (Fig. 1(a)), and such increases can affect network cost in terms of not only CAPEX but also OPEX. Note that maintenance- and failure-recovery-related OPEX strongly depends on network availability.

    Thus, we propose a node configuration suitable for optical burst metro networks (Fig. 1(b)). The proposed configuration uses gate switches, such as SOAs, and couplers, which eliminates the need for additional clamping light sources. In ring networks with a redundant configuration, the optical signal from the transponder (TR) for clockwise transmission can be used as the counter-clockwise clamping signal when the TR transmits an “almost continuous” signal consisting of the original bursts and void-filling burst signals. Moreover, each gate can switch to pass or block the transmitted light, so the configuration can change which burst TR transmits the clamping signal remotely and instantaneously, enabling adaptive network reconfiguration. Such remote and instantaneous reconfiguration also minimizes the impact of failures.

    Here, we evaluate the annual unavailable time of both the preceding and proposed configurations. The annual unavailable time is calculated from the MTBF value of each device, following [4]. In this paper, we assume that the MTTR is 2 hours and that gain-clamping reconfiguration after a failure can be completed within 50 ms in the proposed configuration. As Fig. 2 shows, the unavailable time of the proposed configuration is one-fourth that of the preceding configuration. This result verifies that the proposed configuration can achieve high network availability.
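The availability evaluation described above can be sketched as follows. This is a minimal illustration of the standard steady-state calculation (unavailability = MTTR / (MTBF + MTTR), summed over devices in series); the per-device MTBF figures in the example are illustrative assumptions, not the values used in the paper.

```python
HOURS_PER_YEAR = 8760.0


def device_unavailability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state unavailability of one repairable device."""
    return mttr_h / (mtbf_h + mttr_h)


def annual_unavailable_time_h(devices: list[tuple[float, float]]) -> float:
    """Approximate annual unavailable time (hours) of devices in series,
    assuming independent failures and small per-device unavailabilities."""
    total_u = sum(device_unavailability(mtbf, mttr) for mtbf, mttr in devices)
    return total_u * HOURS_PER_YEAR


# Hypothetical node: three devices, MTBF 4998 h each, MTTR 2 h (as assumed
# in the paper). Each device is then unavailable 2/5000 = 0.04% of the time.
node = [(4998.0, 2.0)] * 3
print(f"annual unavailable time: {annual_unavailable_time_h(node):.2f} h")
```

A configuration that needs fewer devices per node, or that recovers in milliseconds rather than hours for some failure modes, directly shrinks the summed unavailability, which is the effect the proposed configuration exploits.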

    T4-3_Fig1

    Fig.1 Node configuration on optical burst metro networks

    T4-3_Fig2

    Fig.2 Evaluation result of network availability

    Reference:

    1. M. Nakagawa et al., ONDM 2018
    2. K. Masumoto et al., OECC 2018
    3. H.H. Lee et al., Opt. Express, vol. 22, 2014
    4. S. Verbrugge et al., JON, vol. 5, 2006



    Biography:

    Kana Masumoto received her B.E. and M.E. degrees from Waseda University in 2013 and 2015, respectively. She joined Nippon Telegraph and Telephone Corporation (NTT) in 2015. Since then, she has been involved in research on optical transmission systems, especially optical burst amplifiers in optical TDM networks.



    T4-4 "A study of failure control method between VNF/VNFM on NFV architecture"
    Kotaro Mihara, Minoru Sakuma, and Nobuhiro Kimura, NTT, Japan

    Kotaro Mihara

    In recent years, virtualization technology has been attracting increased attention throughout the telecommunications industry as a means of improving maintenance efficiency and reducing facilities. In carrier systems, the scope of this technology is expanding, especially to the SIP servers that provide telecommunication services.
    Among public carrier systems, public telephone services such as the NGN are expected to accommodate emergency calls and serve as a lifeline in emergencies, so high service continuity is required. In particular, the SIP servers at the core of the network must be far more reliable than non-telecommunication systems.
    For this reason, the applications in current telecommunication systems implement fault control functions that achieve the high reliability required for telephone services through close cooperation with the hardware.
    When virtualizing such systems, there is a concern that simply porting the current applications onto a virtualization infrastructure would degrade the existing fault control functions, because the virtualization layer separates the software from the hardware.
    Moreover, in the NFV model, fault control of the virtualization infrastructure is commonly handled by the VIM. If a telecommunication system whose application manages and controls both software and hardware in an integrated manner is installed on an NFV platform as a VNF, its functions overlap with those of the NFV platform.
    To address these issues, we first clarified the division of roles between the VNF and the NFV platform for fault monitoring and control of telecommunication systems, following the ETSI model. We then focused on the use cases of software-hardware cooperation in fault monitoring and control in current telecommunication systems and, by extending the ETSI NFV model, proposed a cooperation method for the VNF and NFV platform that can achieve reliability equivalent to that of current systems.
    In addition, we extracted use cases that require cooperation between the VNF and the NFV platform, implemented them on commercial telecommunication system products, experimented on the NFV platform, and identified further issues.
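As a rough illustration of the role-sharing problem discussed above, a management-side correlator can decide which layer owns recovery for a given failure so that the VNF application and the infrastructure layer do not both act on the same underlying fault. This sketch is hypothetical: the class and method names are not from the paper or the ETSI specifications, and real VNFM/VIM interactions go through standardized reference points rather than direct calls.

```python
from dataclasses import dataclass
from enum import Enum


class Layer(Enum):
    VNF = "application"      # application-level fault detection inside the VNF
    VIM = "infrastructure"   # infrastructure fault detection by the VIM


@dataclass
class Alarm:
    source: Layer
    resource_id: str  # e.g. the compute host or VNF component affected
    detail: str


class FaultCoordinator:
    """Hypothetical management-side correlator: routes each alarm to one
    recovery owner so VNF-level and VIM-level fault control do not overlap."""

    def __init__(self) -> None:
        self.infra_alarms: set[str] = set()

    def on_vim_alarm(self, alarm: Alarm) -> str:
        # Infrastructure failure: the VIM heals the resource; the VNF is only
        # told to switch over, not to run its own hardware-level recovery.
        self.infra_alarms.add(alarm.resource_id)
        return "VIM heals infrastructure; notify the VNF to fail over"

    def on_vnf_alarm(self, alarm: Alarm) -> str:
        # Suppress duplicated recovery if the same resource is already being
        # handled at the infrastructure layer.
        if alarm.resource_id in self.infra_alarms:
            return "suppress: already handled at the infrastructure layer"
        return "VNF-level recovery (application switch-over)"
```

The point of the sketch is the correlation step: without it, an application-visible symptom and its infrastructure-level cause would each trigger independent recovery, which is exactly the functional overlap the abstract describes.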


    T4-4_Fig1

    Major NGN High Availability functions




    Biography:

    Kotaro Mihara received his B.E. and M.E. degrees from Waseda University, Japan, in 2008 and 2010, respectively. He joined NTT Network Service Laboratories in 2010. Since then, he has been involved in research and development activities on platforms for telecommunication systems.



    Special Invited Session
    Special Invited Session: Toward Next Generation Network/Cloud Control and Management
    Friday 11, Sept. 2020, 14:50-16:55
    Chair: Hideyuki Shimonishi, NEC, Japan
    I-1 "Control and management of transport network domains"
    Tomohiro Otani, KDDI, Japan

    Tomohiro Otani

    Biography:

    Tomohiro Otani is an executive director of KDDI Research, Inc. and has been responsible for R&D activities related to beyond-5G networking technologies, connected cars, and IoT platforms since 2016. Prior to that, he was a general manager of the Operation Support System Development Department of the Operations Sector in KDDI Corporation, responsible for developing the operation support systems (OSS) for fixed and mobile networks. He is on the board of directors of the Automotive Edge Computing Consortium (AECC). He has been a member of the technical program committees of international conferences and was a TPC co-chair of MPLS 2007 and iPOP 2017. He is a co-author of IETF RFCs 3471, 5146, 6825, and 7025.


    I-2 "An intent-based approach to cloud resource management in accordance to cloud functional and non-functional requirements"
    Chao Wu, NTT, Japan

    Chao Wu

    Biography:

    Ms. Wu received her B.E. from Zhejiang University in 2009 and her M.E. from Waseda University in 2013. She joined NTT Access Network Service Systems Laboratories in 2014, where she has been researching and developing management mechanisms for telecommunications, especially in the area of cloud and NFV. She is also an active member of standardization organizations, including the European Telecommunications Standards Institute (ETSI) and TM Forum.


    I-3 "Data oriented networking accelerating digital transformation"
    Motoyoshi Sekiya, Fujitsu, Japan

    Motoyoshi Sekiya

    Today, many businesses are being created with digital technology; this is digital transformation. To accelerate it, ICT systems are expected to become more data driven. In networking, SDN enables automation based on predefined, programmed logic, but that alone is not enough to be data driven. For a network to be data driven, its role has to change from simply “transferring the data” to “delivering the exact data.” Networks will thus evolve from connecting computers to connecting data and functions. As one approach to such a data-driven network, I introduce data-oriented networking utilizing blockchain, together with its applications.


    Biography:

    He entered Fujitsu in 1990 and engaged in research on 10-Gbit/s optical communication systems. From 1995 to 2010, he worked for Fujitsu Limited, developing optical transceiver modules, WDM systems, and WDM network design software. From 2010 to 2015, he worked for Fujitsu Laboratories of America, leading the research group on optical networks and SDN/NFV. Since 2015, he has worked for Fujitsu Laboratories Limited, engaged in data exchange networks. He has served as a technical committee member of OFC/NFOEC, Globecom workshops, AICT, ONDM, SoftCOM, BRAINS, 5G-WF, iPOP, and other conferences.


    I-4 "Automation of intent-based network designing with machine learning"
    Takayuki Kuroda, NEC, Japan

    Takayuki Kuroda

    Biography:

    He received M.E. and Ph.D. degrees from the Graduate School of Information Science, Tohoku University, Sendai, Japan, in 2006 and 2009, respectively. He joined NEC Corporation in 2009 and has been engaged in research on model-based system management for cloud applications and software-defined networks. As a visiting scholar in the Electrical Engineering and Computer Science Department at Vanderbilt University in Nashville, he studied a declarative approach to automated workflow generation for ICT system updates. He is now working on automation technologies for system design, optimization, and operation.


    I-5 "Integrated infrastructure automation for smart networks"
    Hervé Guesdon, UBiqube, Ireland

    Hervé Guesdon

    Biography:

    I have 20 years of experience in the IP service provider industry. I started my career at the France Telecom R&D labs, where I covered IP routing and high-speed core network projects (holding two patents). I later focused on the engineering of France Telecom’s core IP/MPLS backbone and on large-scale VPN service deployments and the associated management suites (OSS). My expertise spans IP networking and security, wireline and wireless, physical and virtual, and the associated service management tools (orchestration, assurance, security reporting, performance management, analytics (big data), OSS, etc.). As an early founding member, I have been leading UBiqube’s innovation. One particular area of focus is the next-generation management software architecture needed for a smooth migration from legacy networking technologies to SDN, NFV, 5G and beyond. I am an active contributor in several industry groups and forums on the matter. Although often hopping from plane to plane, it is in Grenoble, France, that I live and find the inspiration to challenge some of our industry’s established ideas on how ‘things should be done’. I must admit the local gastronomy helps me in this daunting endeavour.


    Closing Session
    Friday 11, Sept. 2020, 16:55-17:10
    Closing by iPOP Organization Committee Co-Chair
    Satoru Okamoto, Keio University, Japan
