Channel: Cadence Custom IC Design Forum

Delay Degradation vs Glitch Peak Criteria for Constraint Measurement in Cadence Liberate


Hi,

This question concerns the constraint measurement criteria used by Liberate's inside view. I am trying to characterize a specific D flip-flop for low-voltage operation (0.6 V) using Cadence Liberate (v16).

When the "define_arc" commands are not explicitly specified for the circuit (although the inputs/outputs are correct in define_cell), the inside view seems to probe an internal node (i.e., the master-latch output) for constraint measurements instead of the Q output of the flip-flop. So, to force the tool to probe the Q output, I added the following constraint arcs:

# constraint arcs from CK => D
define_arc \
-type hold \
-vector {RRx} \
-related_pin CP \
-pin D \
-probe Q \
DFFXXX

define_arc \
-type hold \
-vector {RFx} \
-related_pin CP \
-pin D \
-probe Q \
DFFXXX

define_arc \
-type setup \
-vector {RRx} \
-related_pin CP \
-pin D \
-probe Q \
DFFXXX

define_arc \
-type setup \
-vector {RFx} \
-related_pin CP \
-pin D \
-probe Q \
DFFXXX
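For reference, my understanding is that the pass/fail criterion itself is controlled separately from the arc definition, via Liberate variables set before characterization. The sketch below shows the idea; the variable name and default value are from memory and should be verified against the Liberate reference manual for your version:

```tcl
# Hedged sketch -- verify variable names in the Liberate reference manual.
# Request the delay-degradation criterion for constraint (setup/hold) search:
# a constraint fails when the clock-to-Q delay degrades by more than the
# given fraction relative to its nominal (relaxed-input) value.
set_var constraint_delay_degrade 0.1  ;# assumed name; 10% push-out threshold
```

If the tool still falls back to a glitch-peak check with this set, that would suggest it cannot measure a valid clock-to-Q delay on the probed node for that vector.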

With -probe Q, Liberate identifies Q as the output, but it uses the glitch-peak criterion instead of the delay-degradation method. What could be the exact reason for this unintended behavior? In my external SPICE simulation (Spectre), the flip-flop works well and shows no issues with output delay degradation when the input is swept.

Thanks

Anuradha
