Talking to customers who are interested in the lowest-latency solutions, the term "Bit Time" comes up over and over again. But it is not clear what it actually means in terms of low latency, and whether a lower "Bit Time" is always better than a higher one. So let's explore the background and the implications here.
In general, "Bit Time" stands for the time it takes to transmit a single bit at a given network data rate. At Gigabit Ethernet (= 1 Gbps data rate) the "Bit Time" is 1/(1 Gbps) = 1/10⁹ s = 1 nanosecond. In other words, it takes 1 nanosecond to transmit a bit at 1 Gbps. For higher data rates like 10GE the "Bit Time" is even shorter: just 0.1 nanoseconds = 100 picoseconds. But make no mistake - this does not say anything about the actual speed of the signal. It only indicates how many bits can be transmitted per second. The speed at which those bits travel over a fiber link is still limited by the speed of light in the fiber, which is approximately 200,000 km/s.
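The numbers above are easy to reproduce. A quick sketch in Python (the function name is illustrative, not from any standard library):

```python
def bit_time_seconds(data_rate_bps: float) -> float:
    """Time to transmit one bit at the given data rate, in seconds."""
    return 1.0 / data_rate_bps

# GbE: 1 Gbps -> 1 nanosecond per bit
print(bit_time_seconds(1e9))

# 10GE: 10 Gbps -> 0.1 nanoseconds (100 picoseconds) per bit
print(bit_time_seconds(10e9))
```
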
So let's assume we start a race between a GbE NIC (Network Interface Card) and a 10GbE NIC over a given fiber link. The first edge of the signal would leave both NICs at the very same time. But since the "Bit Time" with 10GE is shorter, the end of the corresponding bit would leave the 10GE NIC earlier than it would leave the GbE NIC. One could also think of it as the bit occupying a certain length on the transmission link: at 200,000 km/s, the bit out of the 10GE NIC occupies only 2 cm of fiber, while the bit out of the GbE NIC occupies 20 cm - yet their speed on the fiber is the same. So the leading edges arrive at the end of the link at the very same time. But while the 10GE receiver has already received the bit completely, the GbE receiver is still waiting for the tail of its bit to arrive. In general the 10GE link has an advantage of 18 cm (20 cm - 2 cm) for each bit received, which at 200,000 km/s translates into a time advantage of 0.9 nanoseconds (900 picoseconds) per bit - exactly the difference between the two bit times.
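The race can be checked with the same kind of back-of-the-envelope arithmetic, assuming a signal speed of 200,000 km/s (2×10⁸ m/s) in the fiber:

```python
FIBER_SPEED_M_PER_S = 2e8  # ~200,000 km/s, typical signal speed in fiber


def bit_length_m(data_rate_bps: float) -> float:
    """Physical length one bit occupies on the fiber, in meters."""
    return FIBER_SPEED_M_PER_S / data_rate_bps


gbe_bit = bit_length_m(1e9)    # GbE bit: 0.2 m = 20 cm of fiber
tge_bit = bit_length_m(10e9)   # 10GE bit: 0.02 m = 2 cm of fiber

# Time advantage per bit: the extra fiber length of the GbE bit,
# divided by the signal speed.
advantage_s = (gbe_bit - tge_bit) / FIBER_SPEED_M_PER_S

print(f"GbE bit length:  {gbe_bit * 100:.0f} cm")
print(f"10GE bit length: {tge_bit * 100:.0f} cm")
print(f"Advantage per bit: {advantage_s * 1e9:.1f} ns")
```

Note that this per-bit advantage is independent of the fiber length: the propagation delay of the leading edge is identical on both links, so the only difference is how long each receiver waits for the tail of the bit.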