Hi there,
I've got an instance i_1 (of cell_1) in testbench test_1 and an instance i_2 (of cell_2) in testbench test_2.
cell_2 has some minor differences from cell_1, but their symbols/pins are exactly the same (for this reason I copied the extraction view from cell_1 to cell_2 for debug-simulation purposes).
The setup of test_1 is about two years older than that of test_2 and the two setups differ slightly, but all the inputs (register settings and differential clk inputs) of i_1 in test_1 and of i_2 in test_2 are almost exactly the same (with only a few mV of difference on the analog vdd/vss). Yet I'm observing about 40 mV of difference in the CML Vp output swing between i_1 and i_2.
Basically, I'm simulating two different cells in two different testbenches, but both are configured with the exact same extraction view and fed the same inputs, yet their output swings are quite different.
Since the extraction netlist seems to be the only thing that could introduce the difference, I compared some identical instances in the two extraction netlists, and I found two extra parameters, cps=0.15 and parasiticL=3.25, present in the test_1 netlist but not in the test_2 netlist, as shown in the pic below.
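For reference, this is roughly the kind of comparison I did, as a Python sketch. The netlist paths are illustrative, and the parsing is approximate (it only folds Spectre "+" continuation lines and assumes instance lines of the form `name (nodes) model param=value ...`), so it's a rough diff of per-instance parameter names, not a full netlist parser:

```python
import re

# Illustrative paths -- substitute the actual extracted netlists
# from the two testbench run directories.
NETLIST_1 = "test_1/netlist/input.scs"
NETLIST_2 = "test_2/netlist/input.scs"

def instance_params(path):
    """Map instance name -> set of parameter names on its instance line.

    Assumes Spectre-style lines: <name> (<nodes>) <model> param=value ...
    Continuation lines starting with '+' are folded in first.
    """
    with open(path) as f:
        text = f.read()
    # Fold '+' continuation lines into one logical line.
    text = re.sub(r"\n\s*\+", " ", text)
    params = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\S+)\s*\(.*?\)\s*\S+\s+(.*)", line)
        if m:
            name, tail = m.group(1), m.group(2)
            params[name] = set(re.findall(r"(\w+)\s*=", tail))
    return params

p1 = instance_params(NETLIST_1)
p2 = instance_params(NETLIST_2)

# Report parameters that appear on an instance in test_1 but not on the
# same-named instance in test_2.
for inst in sorted(p1.keys() & p2.keys()):
    extra = p1[inst] - p2[inst]
    if extra:
        print(f"{inst}: only in test_1 netlist: {sorted(extra)}")
```

Running this on the two netlists is how I noticed that cps and parasiticL show up only on the test_1 side.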
So my questions are:
Are these parameters actually used in simulation, or are they just printed in the netlist but not used?
Are these common modeling parameters, or PDK-specific parameters that Cadence forum users may not be familiar with?
Could they add extra parasitics and reduce the output swing?
What could cause the extra parameters to be netlisted in test_1 but not in test_2? Which log files should I look into?
These extra parameters are the only thing I can find that might have introduced such a difference in output swing. If they're not the root cause, what else could be, given that it's the same extraction view being simulated with the same inputs?
Thanks