So, the network name we're familiar with as an SSID is actually the ESSID; the BSSID is the MAC address of the access point's radio (the BSS itself), not of the client station. When beaconing (the announcements that tell stations they can connect), both the BSSID and the ESSID are carried in the message. As a result, the station learns the MAC address before connecting: you select the network name (ESSID) to connect to, and the PC connects to that MAC address (BSSID).

The reason for the differentiation is that the extended service set (ESS) allows many basic service sets (BSS) to participate, so you can have a number of BSS radios in one ESS while every BSSID generally stays unique. By using an ESSID, you have the ability to have many radios operating in the same ESS. This works for large infrastructure deployments where you have a lot of APs covering a larger area, and where you have many radios integrated into a single device (dual/triple-band units that serve both 2.4 and 5 GHz; this is where band steering operates too).

With the advent of ESSIDs it quickly became necessary to improve roaming, so a station could switch BSSIDs with little effort and less downtime. Originally, a station had to disconnect from one BSS and connect to the next BSS even though they were part of the same ESS; this process was so slow that the connection would drop (and still does if it's set up this way). So 802.11k, r, and v were introduced, all of which enhance the BSS-to-BSS transition within the same ESS.

802.11r, appropriately called "Fast Basic Service Set Transition" ("Fast Transition" or FT for short), is the key player here, and 802.11v extends FT to be nearly seamless (down to a few milliseconds for a transition). These technologies are a nice-to-have in most multi-radio ESS networks, but they are essential for any real-time protocol over wireless (like VoIP): the point is to avoid losing more frames than is absolutely required for the transition, allowing a call to continue while the station stays connected.
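To make this concrete, here is a minimal sketch of what enabling 802.11r/k/v could look like on a Linux AP running hostapd. This is an illustrative assumption, not a tested deployment; the SSID, passphrase, and mobility_domain values are placeholders:

```
# hostapd.conf (fragment) -- hypothetical fast-roaming setup
interface=wlan0
hw_mode=g
channel=6
ssid=ExampleESS               # the ESSID; identical on every AP in the ESS
wpa=2
wpa_passphrase=changeme123    # placeholder PSK
wpa_key_mgmt=WPA-PSK FT-PSK   # accept plain PSK clients and FT-capable ones
rsn_pairwise=CCMP

# 802.11r: Fast BSS Transition
ieee80211r=1
mobility_domain=a1b2          # 4 hex digits, shared by all APs in the roaming domain
ft_psk_generate_local=1       # derive FT keys locally, no R0KH/R1KH key distribution
ft_over_ds=1                  # allow transitions over the distribution system

# 802.11k: neighbor reports, so stations learn nearby BSSIDs before roaming
rrm_neighbor_report=1

# 802.11v: BSS Transition Management, lets the network suggest a better AP
bss_transition=1
```

Each AP keeps its own unique BSSID; what ties them into one roaming domain is the shared ESSID plus the shared mobility_domain.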
Tonight I faced a DoS attack. The first sign was a VLAN with unstable connectivity: the packet loss ratio was very high when I pinged from a PC in this VLAN, and then other VLANs were impacted randomly. The unstable VLAN spanned two switches in the access layer plus the collapsed core, and from both access switches the packet loss was very high when I pinged the gateway of this VLAN (its SVI, the VLAN interface on the core switch).

Checking the CPU usage of all the access and core switches, I noticed that only the switches carrying the unstable VLAN (SWC, SWA1 and SWA4) were facing very high CPU usage, at 99%; the rest of the switches had normal usage (under 20%).

To isolate SWA1 and SWA4 I just shut down their uplinks. The CPU usage of SWC went down from 100% to 20%, and at the same time SWA4's CPU usage also dropped under 20%, but SWA1's CPU kept its ratio at 99%. Checking SWA1's CPU processes, I got the same result: the process "Cat4k Mgmt LoPri" held almost all the CPU utilization (95.59%), which means that background and low-priority processes were the troublemakers. Both "Cat4k Mgmt HiPri" and "Cat4k Mgmt LoPri" aggregate multiple platform-specific processes essential for management functions on the Catalyst 4500, so the next step was to see which platform-specific processes were using the CPU under the context of these two (HiPri and LoPri).

The platform process "K5L3Unicast Adj Tabl" was consuming the high CPU. This platform process runs when a new MAC address is learned and the adjacency table is rewritten, which takes place when the switch receives a frame with an unknown source MAC address and punts it to the CPU for MAC address learning. So it was plausible that a device connected to a switch port was sending lots of random MAC addresses, massively forcing the switch to spend all its CPU capacity just processing the new addresses.

The next step was to check the interface statistics looking for the highest traffic rates; the two highest were GigabitEthernet4/2 and GigabitEthernet3/42. Considering that the GigabitEthernet4/2 traffic was 29 times higher than GigabitEthernet3/42, I shut down GigabitEthernet4/2, and after this the CPU utilization recovered to a normal rate.

Finally we identified the device connected to that switch port: a "Time Capsule", a hard disk shared on the network over its Ethernet ports (4 of them) and Wi-Fi. The device was disconnected, and now I am going to investigate which application on it was executing the DoS attack.

To be fair, here is the source I considered to write this article: Cisco Systems, "High CPU Utilization on Cisco IOS Software-Based Catalyst 4500 Switches".
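For reference, the checks described above correspond roughly to the following IOS commands (a sketch only; GigabitEthernet4/2 is the offending port from my topology, and the outputs are omitted):

```
! Find out what is eating the CPU on the Catalyst 4500
show processes cpu sorted             ! totals, including Cat4k Mgmt HiPri / LoPri
show platform health                  ! per platform-process usage, e.g. K5L3Unicast Adj Tabl
show platform cpu packet statistics   ! which packet types are being punted to the CPU

! Look for the interface with the abnormal traffic rate
show interfaces counters

! Isolate the offending port
configure terminal
 interface GigabitEthernet4/2
 shutdown
```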