Frame Forwarding Options
As we consider a modern-day switch, let's think back to the 1980s, when Ethernet bridges first came out. Networks of that era were relatively slow; typical Ethernet implementations were 10Base2 or 10Base5. Bridges were developed to limit collision domains, which allowed networks to scale somewhat: you could have a collection of hubs on either side of a two-port bridge, for example, and a collision seen on one side would not impact traffic on the other side. Architecturally, bridges worked a lot like the computers of the day. A Unix workstation might use a RISC processor, while another computer might use a CISC processor. CISC, by the way, is Complex Instruction Set Computing; RISC is Reduced Instruction Set Computing.
Frame Forwarding Decisions:
The bottom line is that frame forwarding decisions were made in software. As a result, frame forwarding was relatively slow compared to what we have today. The big change came in the 1990s with the introduction of switches, such as Cisco's Catalyst line (though a model like the Catalyst 3750 series was not yet available in the 1990s). With the introduction of switches, we started to have relatively fast networks. I personally remember different vendors coming in and explaining how, architecturally, their model of switch was better than a competitor's. We had all kinds of discussions about interframe gap times and how frames were forwarded, but in reality they all approached wire speed.
They were very fast, and that was largely due to ASICs (Application-Specific Integrated Circuits), chips built into the Ethernet switch that made the forwarding decisions. No longer did we have to rely on a software program essentially running on a RISC processor; now we had dedicated circuitry in charge of making those forwarding decisions. Switches, like bridges, limit collision domains, but besides speed, one of their big advantages is that switches tend to have a much higher port density than bridges, which translates into a much lower cost per port.
Comparing the Ethernet switch to the Ethernet bridge brings us to the topic we want to discuss in this article: frame forwarding. There are different frame forwarding options. In other words, when a frame comes into one of a switch's Ethernet ports, when does the switch begin to forward the frame out of the egress, or outgoing, port? We are going to discuss cut-through switching, contrast it with store-and-forward switching, and then look at a third type that is really a compromise between the two: fragment-free switching.
Cut-Through Switching:
Cut-through switching is a way for the switch to gather minimal information about where a frame is destined and to start forwarding that frame as quickly as possible. Consider the basic structure of a Layer 2 frame: it begins with the destination MAC address, which is six bytes (48 bits) in size. As soon as the frame begins to enter the switch, the switch examines that six-byte MAC address and, based on what it has previously learned, knows which of its ports that MAC address lives off of. So as soon as the switch receives just the destination MAC address, it can start forwarding the frame out of the appropriate egress port. The benefit is decreased latency inside the switch: the switch does not have to see the entire frame before it starts to forward it. The advantage of cut-through switching, then, is that it is faster than something like store-and-forward switching.
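The cut-through decision described above can be sketched in a few lines of Python. This is purely an illustrative model, not real switch firmware; the MAC address table entries and port names are hypothetical.

```python
# Illustrative sketch of cut-through forwarding (not real switch firmware).
# The switch inspects only the first six bytes of the frame -- the
# destination MAC address -- before choosing an egress port.

# Hypothetical MAC address table learned by the switch: MAC -> egress port.
mac_table = {
    "aa:bb:cc:dd:ee:ff": "Gi0/2",
    "11:22:33:44:55:66": "Gi0/3",
}

def cut_through_egress(frame: bytes) -> str:
    """Pick an egress port after reading only the first 6 bytes."""
    dest_mac = ":".join(f"{b:02x}" for b in frame[:6])
    # An unknown destination would be flooded out all ports;
    # here we just report that case.
    return mac_table.get(dest_mac, "flood")

frame = bytes.fromhex("aabbccddeeff") + b"\x00" * 58  # minimal 64-byte frame
print(cut_through_egress(frame))  # -> Gi0/2
```

Note that the function never looks past byte 6, which is exactly why cut-through switching has such low latency, and also why it cannot detect a corrupted frame.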
However, because the switch starts sending the frame almost immediately, it does not yet know whether the frame is valid; maybe it is corrupted. The switch has not come anywhere close to interrogating the Frame Check Sequence at the end of the frame to see if there is potentially an error. It simply says, "This frame is going to that MAC address," trusts that it is a good frame, and off it goes. That said, cut-through switching has seen a bit of a resurgence today.
Cut-Through Switching Popularity:
Cut-through switching is starting to become fairly popular again in some of Cisco's data center switches, the series called Nexus switches. Some Nexus switches actually use cut-through switching, but it is a more advanced flavor that can look beyond just the destination MAC address. The switch can also look at the EtherType, and the EtherType might indicate, for example, an IPv4 frame.
For an IPv4 frame, maybe there is an access control list blocking it, so it should not even be allowed through the switch. Or maybe it carries a quality of service marking at Layer 3, and we should interrogate that to see if the frame should be treated differently. With this more modern approach to cut-through switching, found for example in the Cisco Nexus 5000 series, the switch looks at the destination MAC address and also checks the EtherType. If the EtherType indicates something like an IPv4 frame, the switch can look into the Layer 3 header to check things like quality of service information, or to see whether the frame should be blocked by an access control list configured on the switch.
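This deeper inspection can also be sketched. The following is a simplified model, assuming an untagged Ethernet II frame layout: destination MAC in bytes 0-5, EtherType in bytes 12-13, and for IPv4 the DSCP marking in the top six bits of the second IP header byte. Real Nexus hardware does this in ASICs, not Python.

```python
# Sketch of the "more advanced" cut-through inspection: after the
# destination MAC, also read the EtherType, and for IPv4 peek at the
# Layer 3 DSCP marking. Offsets assume an untagged Ethernet II frame.

ETHERTYPE_IPV4 = 0x0800

def inspect_header(frame: bytes) -> dict:
    ethertype = int.from_bytes(frame[12:14], "big")  # bytes 12-13
    info = {
        "dest_mac": frame[0:6].hex(":"),             # bytes 0-5
        "ethertype": hex(ethertype),
    }
    if ethertype == ETHERTYPE_IPV4:
        tos = frame[15]          # second byte of the IPv4 header
        info["dscp"] = tos >> 2  # top 6 bits of ToS = DSCP
    return info

# IPv4 frame marked DSCP 46 (Expedited Forwarding): ToS byte = 46 << 2 = 0xb8
hdr = (bytes.fromhex("aabbccddeeff")      # destination MAC
       + bytes.fromhex("112233445566")    # source MAC
       + b"\x08\x00"                      # EtherType: IPv4
       + b"\x45\xb8")                     # IPv4 version/IHL, ToS
print(inspect_header(hdr + b"\x00" * 48))
```

With this information in hand, a switch could apply a QoS policy or an access control decision before the rest of the frame has even arrived.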
For the CCNA exam, though, what you are responsible for understanding is basic cut-through switching: as soon as the switch sees the first six bytes, that is, the destination MAC address, it can start forwarding the frame, thereby decreasing latency. The downside is that the switch might forward a corrupted frame, because it has not taken the time to check the frame yet.
Store-and-Forward Switching:
With store-and-forward switching, a switch wants to make sure a frame is valid, that it is not corrupt, before it starts to forward it. After all, we just waste bandwidth by forwarding corrupted frames. So the switch waits until it receives the entire frame, not just the first six bytes (48 bits) of the destination MAC address, and then it looks at the Frame Check Sequence (FCS). Using an algorithm, the switch calculates its own frame check sequence, and if the FCS embedded in the frame equals the FCS the switch calculated, the switch can conclude with reasonable certainty that this is a valid frame that was not corrupted in transmission.
Therefore the switch feels comfortable forwarding it out toward its destination. The point of store-and-forward switching is that we are not going to waste bandwidth by beginning to forward a frame that is invalid, because we compare the frame check sequence embedded in the frame with our own calculation, and if they match, it must be a good frame. The downside is that this may be a little slower than cut-through switching, because we wait until we have received the entire frame before we begin forwarding it.
Fragment-Free Switching:
We could almost consider fragment-free switching a compromise between cut-through switching and store-and-forward switching. One of the big benefits of cut-through switching was that the switch did not have to see the entire frame before it started to forward it toward its destination; however, it did not check whether the frame was valid. Maybe a collision had occurred; maybe the frame check sequence did not match. With store-and-forward switching, the switch received the entire frame, stored it, made sure it was good, and only then started forwarding. We were sure the frame was valid, but we introduced some delay by having to receive the entire frame before we started to forward it. Fragment-free switching is somewhat of a compromise between the two. It relies on the observation that most collisions that corrupt a frame occur within the first 64 bytes. Therefore, fragment-free switching looks beyond just the destination MAC address and the EtherType: it actually looks 64 bytes into the frame. If it does not see that a collision has occurred in the first 64 bytes, it has some assurance that this is probably a valid frame.
We did not go to the trouble of waiting for the frame check sequence to arrive and performing a calculation, so we are not being as thorough in our checking as we would be with store-and-forward switching, but we have more assurance that the frame is valid than with cut-through switching. To sum up: fragment-free switching examines the first 64 bytes of a frame, and if it does not see that a collision has occurred, it forwards the frame based on the destination MAC address.
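The three methods differ mainly in how much of the frame must arrive before forwarding can begin. The sketch below just encodes the numbers from the discussion above; the function name is illustrative.

```python
# How many bytes each forwarding method must receive before it can
# begin forwarding, per the discussion above. frame_len stands in for
# the full length of the frame being received.

def bytes_before_forwarding(method: str, frame_len: int) -> int:
    if method == "cut-through":
        return 6          # destination MAC address only
    if method == "fragment-free":
        return 64         # the window where most collisions show up
    if method == "store-and-forward":
        return frame_len  # entire frame, so the FCS can be checked
    raise ValueError(f"unknown method: {method}")

for m in ("cut-through", "fragment-free", "store-and-forward"):
    print(f"{m}: {bytes_before_forwarding(m, 1518)} bytes")
```

For a maximum-size 1518-byte frame, this makes the latency trade-off concrete: cut-through forwards after 6 bytes, fragment-free after 64, and store-and-forward only after all 1518.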
In the 1980s:
- Relatively slow networks
- Bridges limit collision domains
- Complex instruction set computing (CISC)
- Reduced Instruction Set Computing (RISC)
In the 1990s:
- Relatively fast networks
- Application Specific Integrated Circuits (ASICs)
- Switches limit collision domains
- High port density