[LTL to Core] Add support for overlapping and non-overlapping implications without SVA #6649
Conversation
So I rewrote the pass to do a (fairly hacky) backwards walk from the assertion to handle all of the ltl ops. This allowed me to integrate the direct lowering (without the use of an automata conversion) of
The implementation is incredibly special-cased and only supports these three cases, due to the complexity of implementing a generic version without an even more complex automata conversion and determinization beforehand, which I have only gotten working reasonably (i.e. without state explosion) by using an external tool called SPOT. I don't think that having such an external dependency would make sense for CIRCT, which is why I propose this more realistic implementation following a technique similar to the one implemented in SymbiYosys and BSV. Let me know what you think!
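For illustration, here is a minimal standalone sketch of the register-based encoding this kind of direct lowering targets (this is not the pass itself, and all names are made up): an overlapping implication `a |-> c` is checked combinationally, while a non-overlapping implication `a |=> c` delays the antecedent by one clock cycle through a register before doing the same check.

```cpp
// Behavioral model only: a hypothetical checker for the two implication forms,
// assuming an immediate boolean antecedent/consequent and a single clock.
#include <cassert>

struct ImplicationChecker {
  bool pastAntecedent = false; // one-cycle delay register used for a |=> c

  // Overlapping implication a |-> c: purely combinational.
  static bool overlapping(bool a, bool c) { return !a || c; }

  // Non-overlapping implication a |=> c: the antecedent is delayed by one
  // cycle, then checked exactly like the overlapping case.
  bool nonOverlappingStep(bool a, bool c) {
    bool holds = !pastAntecedent || c;
    pastAntecedent = a; // register update at the clock edge
    return holds;
  }
};

int main() {
  ImplicationChecker chk;
  // Cycle 0: the antecedent fires; nothing to check yet.
  assert(chk.nonOverlappingStep(/*a=*/true, /*c=*/false));
  // Cycle 1: the delayed antecedent now requires the consequent.
  assert(chk.nonOverlappingStep(/*a=*/false, /*c=*/true));
  return 0;
}
```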
Sorry if this is not going to be a focused review and is mostly me rambling instead, but this is stretching my knowledge of both MLIR rewrites and SVA/LTL, so I may well be writing nonsense.

Let's start with an easy point: the conversion of non-overlapping implications is currently incorrect. For

Now for the rambling part: as you mention yourself, the implementation is fairly hacky and there is a bunch of minor stuff in need of fixing or tidying. But I think there is still the more fundamental problem of this pass being stuck in a limbo between being a full-fledged dialect conversion and just a transformation of two very specific patterns. If I understand you correctly, you're saying that this problem in general is too complex for a simple conversion pass, so we should just commit to matching these patterns and leave everything else alone.

Matching on DAG patterns is such a fundamental thing in MLIR that I feel there should be no need for you to implement the walk manually. To my surprise I cannot find an obvious example of how this should be done, which probably means I'm missing something. The options I can see which would hopefully provide a more robust and legible solution:
I'm far too inexperienced to tell you which, if any, of those is the right way. I'd have to try it myself. But I'm sure someone around here knows. 😁 For the theoretical aspects of dealing with more complex SVA or LTL expressions, the best thing I can do is to ping @fzi-antonpaule. In contrast to me, he actually knows about this stuff and has also done work on integrating SVA in Chisel in the past.
Thanks for the detailed comment. I was currently aware that my

As for the backwards walk, this was a solution to all of the comments that were against the forward walk (which I understand). The way I did it was super hacky for several reasons (including a severe time crunch on my side ^^"), but mostly in the hope of hearing about a better solution from more MLIR-adept people, so thank you for that btw :).

I don't think that including

Either way, thank you for the review. I will take a deeper look at this and see if I can use one of the methods you recommended.
@fzi-hielscher I found my typo in the implementation, I confused
Just also wanted to add a final comment about a more general LTL conversion. It's not that it's impossible to do, it's just that finding a direct lowering like the one I'm proposing here isn't something that can be done in the general case. A solution to this would be to generate a Büchi automaton from the LTL equivalent of the SVA property (if any), and then massage and lower this into a deterministic form that can be implemented in hardware. There are two reasons why I'm not taking this approach:
All of this rambling is to justify why I took this hyper-special-cased approach, and I hope that in the future, I (or someone else) will have the time (and resources) to tackle the more general approach, but that just isn't the case right now.
I'll just throw in my two cents' worth. Tools like SPOT and LamaConv already implement the automaton transformations you need, but without a generalized automaton dialect in CIRCT you cannot take full advantage of those tools' capabilities.
Thanks for your input!

Thanks again for your input!
I've done a little experimenting to simplify the matching. I have not felt adventurous enough to try PDLL though; as impressive as it is conceptually, it seems to get very little use. DRR would be nice to use, but it won't work, at least not for the non-overlapping case. We need to match on the

I had more luck with the

```cpp
struct I1ValueMatcher {
  Value *what;
  I1ValueMatcher(Value *what) : what(what) {}
  bool match(Value op) const {
    if (!op.getType().isSignlessInteger(1))
      return false;
    *what = op;
    return true;
  }
};

static inline I1ValueMatcher m_Bool(Value *const val) { return I1ValueMatcher(val); }
```

Secondly,

```cpp
template <typename OpType, typename... OperandMatchers>
struct BindingRecursivePatternMatcher
    : mlir::detail::RecursivePatternMatcher<OpType, OperandMatchers...> {
  using BaseMatcher = mlir::detail::RecursivePatternMatcher<OpType, OperandMatchers...>;

  BindingRecursivePatternMatcher(OpType *_op, OperandMatchers... matchers)
      : BaseMatcher(matchers...), opBind(_op) {}

  bool match(Operation *op) {
    if (BaseMatcher::match(op)) {
      *opBind = llvm::cast<OpType>(op);
      return true;
    }
    return false;
  }

  OpType *opBind;
};

template <typename OpType, typename... Matchers>
static inline auto m_OpWithBind(OpType *op, Matchers... matchers) {
  return BindingRecursivePatternMatcher<OpType, Matchers...>(op, matchers...);
}
```

This may be evil, but I think it's okay. It should then allow us to boil the actual matching on the `AssertOp`'s argument down to (for the non-overlapping case):
```cpp
...
Value ltlClock, disableCond, antecedent, consequent;
ltl::ClockOp clockOp;
ltl::DelayOp delayOp;
bool matched = matchPattern(
    op.getProperty(),
    m_OpWithBind<ltl::ClockOp>(
        &clockOp,
        m_Op<ltl::DisableOp>(
            m_Op<ltl::ImplicationOp>(
                m_Op<ltl::ConcatOp>(
                    m_Bool(&antecedent),
                    m_OpWithBind<ltl::DelayOp>(&delayOp, m_One())),
                m_Bool(&consequent)),
            m_Bool(&disableCond)),
        m_Bool(&ltlClock)));
if (!matched || delayOp.getLength() != 0)
  return failure();
...
```

It should also give easy access to all the values and attributes required for the rewrite, except for the reset...
You're right. Thinking about it again, it should not be part of the pattern. My first point was actually incorrect. Since
Thanks a lot for exploring better ways to write this backwards walk, I think I can definitely use that to make things a whole lot more readable!

As for the use of the reset, I see what you mean; you are right that in the particular case where the antecedent is an input, this implicit tying to the reset doesn't make as much sense. This isn't clearly defined anywhere, so I basically need to make some decisions about implementing the more subtle parts; maybe this would require a bit more discussion first. I could check the defining op of the antecedent and use its reset as the delay register's reset in the case where it's a resettable register, and then just not tie our generated registers to any reset if the antecedent doesn't have one. What do you think?
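A rough sketch of the reset-forwarding idea from the comment above (the helper name is hypothetical, and it assumes `seq::CompRegOp` exposes a `getReset()` accessor for its optional reset operand):

```cpp
#include "circt/Dialect/Seq/SeqOps.h"

// If the antecedent is produced by a resettable seq.compreg, return that
// register's reset so the generated delay register can reuse it; otherwise
// return a null Value so the caller leaves the generated register reset-less.
static mlir::Value findAntecedentReset(mlir::Value antecedent) {
  if (auto regOp = antecedent.getDefiningOp<circt::seq::CompRegOp>())
    return regOp.getReset(); // may itself be null if the register has no reset
  return {};
}
```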
I updated the pass to use the
It has definitely become more readable now. 👍

It is also making it fairly obvious that there is a hierarchy to your patterns: Generic AssertProperty > OI > NOI. So you should not be matching on the most specialized case (NOI) first. Start with the generic case. If that one does not match, there is no point in continuing. If it does match, you don't need to match the outer operations again. That way you can split up the rather monstrous NOI pattern.

Since you are not actually doing any type conversions anymore, I would consider ditching the dialect conversion framework in favor of the simpler greedy rewriter. That should trim down the code some more.

Finally, when you do the cleanup, you can't just erase the operations. You need to check that they have no other uses first.

I think after that, the pass should be in a decent general shape and we can start with the pedantic stuff. 😁
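As a sketch of the cleanup point above (the helper name is made up, not from the PR), the use check before erasing an intermediate LTL op could look something like this:

```cpp
#include "mlir/IR/PatternMatch.h"

// Only erase an op once nothing else uses its results; otherwise leave it
// alone for a later cleanup such as DCE.
static void eraseIfUnused(mlir::PatternRewriter &rewriter, mlir::Operation *op) {
  if (op && op->use_empty())
    rewriter.eraseOp(op);
}
```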
This is being reworked and will be part of a few separate PRs, so I'm going to close this one. |
Hi,
The goal of this PR is to introduce a lowering of the operations in the `ltl` and `verif` dialects to the core dialects, so as to allow the use of the `AssertProperty` construct on boolean conditions in CIRCT's frontends. The reason we want this is that `AssertProperty` generates a disable operation conditioned on the reset, which means that the assertion is only valid if the reset has been raised at some point in the past.

This solves one of the shortcomings of the current `btor2` backend, which was that there was no way to give registers initial values purely for verification. `AssertProperty` solves that by disabling the assertion until the reset has been raised, thus preventing the model checker from simply assigning an arbitrary value to the register at cycle 0.

The logic behind this is as follows: `AssertProperty` generates the following operations:
- `verif.has_been_reset`
- `ltl.disable`
- `ltl.clock`
- `verif.assert`
This can then be translated to:
- `verif.has_been_reset` --> `seq.compreg` + OR with the reset
- `ltl.disable cond if input` --> `~cond -> input` --> `cond || input`
- `ltl.clock posedge %clock` --> `sv.always posedge %clock {...}`
- `verif.assert` --> `sv.assert disabled_cond`
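To make the first two rows of this mapping concrete, here is a hypothetical cycle-level model (names are illustrative, not taken from the pass): the sticky has-been-reset bit is a register OR-ed with the reset, and the disable turns the final assertion condition into `disableCond || property`, with `disableCond = !hasBeenReset`.

```cpp
// Behavioral model only, under the assumptions stated above.
#include <cassert>

struct HasBeenReset {
  bool seen = false;          // the seq.compreg holding the sticky bit
  bool step(bool reset) {     // evaluated once per clock cycle
    bool out = seen || reset; // "compreg + OR with the reset"
    seen = out;               // register update
    return out;
  }
};

// Final condition handed to sv.assert: the property only has to hold once the
// design has been reset at least once.
static bool assertionCondition(bool hasBeenReset, bool property) {
  return !hasBeenReset || property; // i.e. "cond || input" above
}

int main() {
  HasBeenReset hbr;
  // Cycle 0: no reset seen yet, so the assertion is disabled.
  assert(assertionCondition(hbr.step(/*reset=*/false), /*property=*/false));
  // Cycle 1: the reset has been raised, so the property must now hold.
  assert(assertionCondition(hbr.step(/*reset=*/true), /*property=*/true));
  return 0;
}
```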
For more details, I invite you to take a look at the `LTLToCore.cpp` conversion patterns. Note that `ltl.clock` and `verif.assert` are converted in the same pattern, as they are interdependent.

This PR also introduces a tiny bugfix for the btor2 pass, where the reset value of a `compreg` was generated out of order for some reason.

This PR is the first part of a larger lowering from LTL to Core done through the FSM dialect, and is required for the rest of it to work.

Let me know what you think!