Oscilloscopes versus Embedded Instruments

What’s cheaper, faster, and more powerful than an
oscilloscope, when it comes to validating high-speed signal integrity? Why, a
software application using embedded instruments, of course. How is this
possible?

Software applications are now widely available that use I/O
Built-In Self Test (I/O BIST)-based embedded instruments within the silicon to
perform, for example, SerDes and memory signal integrity (SI) validation.
Tools like these observe and report directly on the signal at the silicon
receiver, rather than viewing what can essentially be a closed eye on the
board interconnect and reconstructing it with higher mathematics. As I like
to say, nowadays “the math is in the chip” when it comes to the emphasis and
equalization schemes (some of which are adaptive and change on the fly!), so
the best place to observe the waveform is from within the silicon.

Let’s take a look at three attributes of embedded
instrumentation versus external oscilloscopes, and show why the software way is
the best way:

1. Embedded Instruments are more powerful

Oscilloscopes, due to time and cost (both elaborated on
below), typically margin only a couple of lanes of a bus at a time. So the
longest and the shortest lanes are usually chosen, in the hope that these two
will exhibit the worst margins. But this is often a pipe dream (please excuse
the pun): another, arbitrary lane may have a defective capacitor, or run near
a noisy power delivery pin field. So there’s a lot of risk associated with
such a small sample.

As well, oscilloscopes test margins under artificially ideal
conditions: nominal Process/Voltage/Temperature (PVT), and normal operating
traffic on the link. If the board or the silicon on it is out-of-spec only at
the outlier conditions, the problem will not be detected in the lab; it will
be detected by the customer.

Embedded instrumentation tools, on the other hand, have none
of these restrictions. All lanes in buses can be saturated with “synthetic”
traffic, using pseudo-random bit sequence (PRBS) worst-case patterns. This
gives software tools the ability to detect effects due to crosstalk and
inter-symbol interference (ISI) which will be missed by ‘scopes. And the
silicon supplier can provide eye masks which take PVT into account.
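As a rough illustration of how such synthetic traffic is built, here is a minimal sketch of a PRBS-7 generator using the x^7 + x^6 + 1 polynomial, one common choice for serial-link stress patterns. The actual pattern generators inside a given device's BIST engine are silicon-specific; this is only a software model of the idea:

```python
def prbs7(seed=0x7F, length=127):
    """Generate a PRBS-7 bit sequence from the x^7 + x^6 + 1 polynomial,
    a maximal-length LFSR commonly used for serial-link stress patterns."""
    state = seed & 0x7F
    bits = []
    for _ in range(length):
        bits.append(state & 1)                      # output the register's LSB
        newbit = ((state >> 6) ^ (state >> 5)) & 1  # taps at x^7 and x^6
        state = ((state << 1) | newbit) & 0x7F      # shift left, feed back
    return bits

pattern = prbs7(length=254)
# A maximal-length PRBS-7 repeats every 2^7 - 1 = 127 bits, and each period
# is nearly balanced (64 ones, 63 zeros) while exercising all run lengths
# up to 7 -- which is what makes it a good ISI stressor.
assert pattern[:127] == pattern[127:]
```

Because the pattern cycles through every run length the polynomial allows, it provokes worst-case inter-symbol interference in a way that normal functional traffic rarely does.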

2. Embedded Instruments are faster

With oscilloscopes, you need direct access to the signal
traces on the board, and that can be a very lengthy process: selecting the
number of boards and lanes to test, adding test access to the board,
soldering on the probe heads, and then finally performing the design
validation can add several weeks to a design cycle.

With embedded instruments, it’s just plug-and-go. Access to
the JTAG header on the board is all that is required. Saving a few weeks’ worth
of time can make a big difference when it comes to time-to-market.

3. Embedded Instruments are cheaper

What’s the price of a good high-end scope these days, to
test signal integrity on PCIe Gen 3, QPI, SATA 3, and the like? Let’s put it
at $150,000 – $175,000 USD. Then add those expensive amplifiers and probe
heads: another $25,000 or so. And the more channels you want to test, the
higher the price climbs.

So let’s say, hypothetically, that you want to validate SI on
five (5) boards, to gain confidence that silicon and board manufacturing
variances aren’t going to cause you problems. A sample size at least this
large is a good idea: Stephanie Akimoff’s whitepaper “Platform Validation
using Intel IBIST” empirically demonstrated that signal integrity follows
the silicon as well as the board. Let’s further assume that a single test run
on a single board takes, say, 100 hours, that you want to test each board a
handful of times to eliminate any procedural variation in the test process,
and that you want to be able to react to any silicon version changes
throughout your prototype runs. To keep things simple, let’s ignore the need
to test for PVT effects (we’ll just keep our fingers crossed that our nominal
tests give us enough margin and a low enough bit error rate (BER) to cover
any conditions our system will encounter in the field!).

Doing the math, the above requires at minimum 5 × 5 × 100
hours = 2,500 test-hours, or about 100 test-days. That’s impossible to handle
with a single scope when all of the testing has to fit between prototype
runs, and between the final prototype run and volume manufacturing
deployment. So go ahead and buy more scopes at $200,000 each. That gets
pretty expensive, pretty fast.
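The arithmetic above is easy to check with a back-of-envelope script. The figures below are the ones used in this article; treat them as illustrative estimates, not quotes:

```python
# Back-of-envelope test-time and cost figures from the article's scenario.
boards = 5           # sample size, per the Akimoff whitepaper
runs_per_board = 5   # "a handful" of repeat runs per board
hours_per_run = 100  # one full SI test run on one board

total_hours = boards * runs_per_board * hours_per_run
print(total_hours)              # 2500 test-hours
print(round(total_hours / 24))  # 104 days of round-the-clock testing

scope_setup = 150_000 + 25_000  # low-end scope estimate plus amps/probes
print(scope_setup)              # 175000 -- before adding more channels
```

Even running 24 hours a day, one instrument is tied up for over three months, which is why the single-scope plan breaks down between prototype spins.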

Of course, software-based systems using embedded
instrumentation can be purchased at just about 1/10th of the above
cost. So buy a handful of these, save money, get more testing done, and get
your product to market faster.

The Wrap-Up

Of course, scopes are never going to go away completely.
They provide a good spot-check against embedded instrumentation-based
solutions, and are useful for other functions like compliance testing. You just
don’t have to buy as many of them.

Alan Sguigna