Hi,
I've been simulating phase noise of a divide-by-2 circuitry with PSS & PNOISE.
The schematic is a simple master-slave DFF driven by two inverters, and the input source is a sinusoid with a phase noise profile extracted from the crystal oscillator's simulation results.
[schematic screenshot]
The phase noise of interest is at CK52M_1 and CK26M, and the PSS & PNOISE settings are as below:
- PSS: [settings screenshot]
- PNOISE 52M: [settings screenshot]
- PNOISE 26M: [settings screenshot]
- PNOISE 52M Sampled (Jitter): [settings screenshot]
- PNOISE 26M Sampled (Jitter): [settings screenshot]
Because I only care about the timing (phase) modulation of the clock signal, for the timeavg method I plot PM noise only.
The overall results are shown below:
[overall phase noise results plot]
According to the simulation, there are some phenomena that I cannot explain:
A. For Timeavg+PM method
1. The 26-MHz phase noise is about 6 dB better than the 52-MHz one at very low offset frequencies (100 Hz ~ 1 kHz).
Both noises in this region are dominated by the XO's phase noise.
2. However, at higher offset frequencies (1 kHz ~ 1 MHz), the result reverses and the noise gap grows beyond 6 dB.
The 26-MHz noise in this region is dominated by the 1st inverter's flicker noise (about 90%),
but for the 52-MHz clock, flicker contributes only about 37%.
3. For offset frequencies > 1 MHz, the 26-MHz noise crosses below the 52-MHz one again and ends up about 3 dB better.
Both noises are dominated by the 1st inverter's thermal noise.
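For what it's worth, the 6 dB figure in observation 1 is what an ideal noiseless divider predicts: a /2 divider preserves each edge's absolute timing error, so the phase error in radians halves and the phase noise improves by 20·log10(N) dB. A minimal sanity-check sketch (my own illustration, not Virtuoso output):

```python
import math

# An ideal noiseless /N divider keeps the edges' absolute timing error (seconds)
# but scales the phase error in radians by 1/N, so the phase noise power
# drops by 20*log10(N) dB at the divided output.
def ideal_divider_noise_improvement_db(n):
    return 20 * math.log10(n)

print(round(ideal_divider_noise_improvement_db(2), 2))  # 6.02 dB for divide-by-2
```

This matches the low-offset region where the XO noise dominates; the deviations at mid and high offsets would then come from the divider's own added noise.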
B. For Sampled Jitter method
1. Both the 26-MHz and 52-MHz clocks show roughly the same Jee:
freq < 1 kHz: both are dominated by the XO's noise.
1 kHz < freq < 1 MHz: both are dominated by the 1st inverter's flicker noise (about 90%).
freq > 1 MHz: both are dominated by the 1st inverter's thermal noise.
2. If we plot Jee in "Edge Phase Noise" form (available in the latest version of Virtuoso), the 26-MHz and 52-MHz noise are almost identical.
Since "Edge Phase Noise" is converted from Jee with respect to the fundamental carrier frequency (26 MHz), the intrinsic 52-MHz phase noise should be 6 dB worse than the 26-MHz one.
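My understanding of that conversion (a hedged sketch with a made-up jitter PSD value, not the actual Virtuoso formula): a jitter PSD Sj(f) in s²/Hz maps to phase noise through the carrier as S_phi(f) = (2·pi·f0)²·Sj(f), so the same Jee referenced to the true 52-MHz carrier rather than the 26-MHz fundamental comes out exactly 6 dB worse:

```python
import math

def phase_noise_dbc_from_jitter_psd(sj_sec2_per_hz, f0_hz):
    """SSB phase noise L(f) in dBc/Hz from a one-sided jitter PSD Sj(f) [s^2/Hz]:
    S_phi(f) = (2*pi*f0)^2 * Sj(f), and L(f) ~= S_phi(f)/2 for small deviations."""
    s_phi = (2 * math.pi * f0_hz) ** 2 * sj_sec2_per_hz
    return 10 * math.log10(s_phi / 2)

sj = 1e-32  # hypothetical flat jitter PSD, s^2/Hz (placeholder value)
l26 = phase_noise_dbc_from_jitter_psd(sj, 26e6)
l52 = phase_noise_dbc_from_jitter_psd(sj, 52e6)
print(round(l52 - l26, 2))  # 6.02 dB: same jitter reads 6 dB worse at the 2x carrier
```

So if the tool references both outputs to the 26-MHz fundamental, identical Jee curves would indeed plot on top of each other, hiding the carrier-referenced 6 dB difference.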
Here is the problem that confuses me a lot:
Why do the Timeavg+PM and Sampled Jitter methods give such different results?
For the divided 26-MHz clock, both methods at least show the same noise level at offsets < 100 kHz.
But for the undivided 52-MHz clock, Timeavg+PM shows more than 10 dB better noise than Sampled Jitter.
Which way should I trust?
Or, if this clock is to be used in a frequency synthesizer, which method gives the more accurate noise estimate?
Many thanks for your help.