Signal Integrity

Power-aware Signal Integrity and EMI/EMC on High-speed Digital Chip-to-Chip Links


Time-aligning Your IBIS AMI_GetWave Function

Posted January 23rd, 2013 · Application Note

By Colin Warwick

There are two kinds of eye diagram metrics: those that just look at the received waveform (e.g. “density plots”) and those that compare the received waveform to the correct, transmitted bit pattern (e.g. BER contours, bathtub plots). For the latter type to work properly, the Channel Simulator and the models must work together to compensate for the principal delay (“latency”) through the Tx, channel, and Rx, so that bit number n sent is compared with bit number n received. The compensation consists of first calculating the latency and then adding an equal amount of delay to the correct transmitted bit pattern: I call this process time-alignment.
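In C, the comparison step boils down to something like the sketch below. This is a minimal illustration with made-up names (aligned_ber, latency_bits), not any simulator's actual code: once the latency is known, the reference pattern is delayed by the same amount before errors are counted.

/* Sketch of time-aligned BER counting: delay the reference (transmitted)
 * pattern by the measured latency so that bit n sent is compared with
 * bit n received. All names here are illustrative. */
double aligned_ber(const int *tx_bits, const int *rx_bits,
                   long n_bits, long latency_bits)
{
    long errors = 0, compared = 0;
    for (long i = latency_bits; i < n_bits; i++, compared++)
        if (rx_bits[i] != tx_bits[i - latency_bits])
            errors++;
    return compared > 0 ? (double)errors / (double)compared : 0.0;
}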

A lot of this time-alignment can be performed by the Channel Simulator, but the model builder has a role too.

Channel Simulator can time-align:

  • Arbitrary latency from the channel (because it models it as an impulse response whose length it knows)
  • Arbitrary latency in the two cases where a model is using its impulse response
    1. An impulse-only model running in a bit-by-bit (“time-domain”) simulation
    2. A dual model (one having both an AMI_GetWave function and an alternate impulse response representation) running in a statistical simulation, because the impulse response is operative in that case
  • Typical latency in the two cases where a model is using its AMI_GetWave representation
    1. A GetWave-only model running in a bit-by-bit (“time-domain”) simulation
    2. A dual model running in a bit-by-bit simulation, because AMI_GetWave is operative in that case

What do I mean by “typical latency”? To answer that, we need to look at how Channel Simulator time-aligns the AMI_GetWave function. It can’t “see inside” the AMI_GetWave function because it is compiled code: Channel Simulator only sees the input and output of the AMI_GetWave “black box.” Only the model builder really knows what the latency is, and at present the IBIS specification has no mechanism or parameter to communicate that information from model builder to Channel Simulator. (Maybe it should? BIRD, anyone?) All the Channel Simulator can do is probe the operative AMI_GetWave function with an input stream and perform a cross-correlation on the output. For efficiency, the cross-correlation has a finite “search window,” 100 UI in the case of our Channel Simulator. If the latency lies within that window, Channel Simulator can determine it and time-align by adding a compensating delay to the Tx bit pattern before the BER calculation. If not, the comparison will be misaligned and the BER will go to a coin-flipping 0.5 (= 10^−0.3), even if the density plot shows an open eye.
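Conceptually, the probe amounts to a windowed cross-correlation like the sketch below. This is my reading of what such a probe must do, not vendor code; the names are illustrative, and the 100 UI window quoted above would be expressed here as 100 × samples-per-bit samples.

#include <math.h>

#define SEARCH_WINDOW 100  /* illustrative; "100 UI" in samples would be
                              100 * samples_per_bit */

/* Slide the black box's output against the known stimulus and pick the
 * lag with the strongest correlation: that lag is the estimated latency.
 * If the true latency lies outside the window, the estimate is wrong and
 * the BER comparison degenerates to coin flipping (~0.5). */
long estimate_latency(const double *stimulus, const double *response, long n)
{
    long best_lag = 0;
    double best = -INFINITY;
    for (long lag = 0; lag < SEARCH_WINDOW && lag < n; lag++) {
        double corr = 0.0;
        for (long i = 0; i + lag < n; i++)
            corr += stimulus[i] * response[i + lag];
        if (corr > best) { best = corr; best_lag = lag; }
    }
    return best_lag;
}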

Here’s where the model builder comes in.

If you want to build a model whose behavior depends on a long history of the input sequence, then you must start outputting some kind of placeholder value immediately, even while the algorithm doesn’t yet have enough inputs in its pipeline. Then use the ignore_bits parameter to tell the Channel Simulator that the first n bits are garbage to be thrown away. (ignore_bits has other uses too, but this is one of them.)
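For reference, this is advertised in the model’s .ami parameter file as the reserved parameter Ignore_Bits. A fragment might look like the following; the tree is abbreviated, the model name is made up, and the exact syntax depends on which IBIS version your model targets:

(my_sample_tx                                        | made-up model name
  (Reserved_Parameters
    (Ignore_Bits (Usage Info) (Type Integer) (Value 2))
  )
  (Model_Specific)                                   | model's own parameters, omitted
)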

Let’s look at a made-up, simplified example to see how this works. Imagine an AMI_GetWave function that implements a three-tap FIR filter with taps 1, 0.1, 0.1. It’s made-up because AMI_GetWave isn’t the most efficient way of modeling an FIR, and even if you did model it that way, Channel Simulator could cross-correlate away such a small delay. But it illustrates the principle.

Let’s build a “bad” version that doesn’t output immediately (inputs u, outputs y):

y1 = u3 * 1 + u2 * 0.1 + u1 * 0.1 // mainly u3
y2 = u4 * 1 + u3 * 0.1 + u2 * 0.1 // mainly u4
y3 = u5 * 1 + u4 * 0.1 + u3 * 0.1 // etc
y4 = u6 * 1 + u5 * 0.1 + u4 * 0.1 
y5 = u7 * 1 + u6 * 0.1 + u5 * 0.1 
y6 = u8 * 1 + u7 * 0.1 + u6 * 0.1
y7 = u9 * 1 + u8 * 0.1 + u7 * 0.1 
.
.
.

The output is delayed by the principal delay, or latency, of two samples (i.e., FIR length − 1).
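Here is how that “bad” version might look in C. This is a simplified stand-in, not the real AMI_GetWave signature (which also carries wave_size, clock_times, and an AMI_memory state handle), and it glosses over block boundaries; it is only meant to show the indexing.

#define FIR_LEN 3
static const double taps[FIR_LEN] = { 1.0, 0.1, 0.1 }; /* main tap first */

/* "Bad" pattern: hold back output until the FIR pipeline is full, so
 * output sample 1 is computed from input sample 3. The whole stream ends
 * up offset by FIR_LEN - 1 = 2 samples, and the simulator has to discover
 * that offset by cross-correlation. */
void bad_getwave(double *wave, long n)
{
    long out = 0;
    for (long in = FIR_LEN - 1; in < n; in++, out++) {
        double acc = 0.0;
        for (int k = 0; k < FIR_LEN; k++)
            acc += taps[k] * wave[in - k];  /* y1 = u3 + 0.1*u2 + 0.1*u1 */
        wave[out] = acc;
    }
    /* the last FIR_LEN - 1 slots of wave[] never get a value this call */
}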

Now let’s consider an “industry best practice” version that outputs something immediately:

y1 = u1 * 1 + dummy * 0.1 + dummy * 0.1 // mainly u1 but dummy is a guess. Ignore this one
y2 = u2 * 1 + u1 * 0.1 + dummy * 0.1 // mainly u2 but dummy is a guess. Ignore this one too
y3 = u3 * 1 + u2 * 0.1 + u1 * 0.1 // first good one
y4 = u4 * 1 + u3 * 0.1 + u2 * 0.1 
y5 = u5 * 1 + u4 * 0.1 + u3 * 0.1 
y6 = u6 * 1 + u5 * 0.1 + u4 * 0.1
y7 = u7 * 1 + u6 * 0.1 + u5 * 0.1 
.
.
.

The model builder must set ignore_bits to 2 because the model deliberately sends two made-up outputs at the beginning.
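And here is a sketch of the best-practice version, again with a simplified signature (a real model would keep its history in the AMI_memory block returned by AMI_Init rather than in a static variable):

#define FIR_LEN 3
static const double taps[FIR_LEN] = { 1.0, 0.1, 0.1 }; /* main tap first */

/* History primed with a dummy guess (0.0) so output starts immediately.
 * The first FIR_LEN - 1 = 2 outputs lean on that guess, which is exactly
 * what ignore_bits = 2 tells the Channel Simulator to throw away. */
static double history[FIR_LEN - 1]; /* zero-initialized: the "dummy" */

void good_getwave(double *wave, long n)
{
    for (long i = 0; i < n; i++) {
        double in = wave[i];
        wave[i] = taps[0] * in          /* y1 = u1 + 0.1*dummy + 0.1*dummy */
                + taps[1] * history[0]
                + taps[2] * history[1];
        history[1] = history[0];        /* age the history */
        history[0] = in;
    }
}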

That’s it! Please add a comment if you need more info.

Hat tip to Fangyi Rao who explained this technique to me.
