Sigma pool closed, Extra payload extended #1477
Conversation
Walkthrough

This update introduces new consensus parameters and features related to Spark Names and the Sigma protocol. It adds the Spark Name functionality and the Sigma pool closure parameters detailed in the review comments below.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User
participant RPC
participant SparkNameManager
participant Blockchain
User->>RPC: getsparknames()
RPC->>Blockchain: Check if Spark is enabled
alt Spark enabled
RPC->>SparkNameManager: Fetch all Spark names
SparkNameManager-->>RPC: List of names & addresses
RPC-->>User: JSON array of Spark names
else Spark not enabled
RPC-->>User: Error (Spark not activated)
end
User->>RPC: getsparknamedata(sparkname)
RPC->>Blockchain: Check if Spark is enabled
alt Spark enabled
RPC->>SparkNameManager: Fetch data for sparkname
SparkNameManager-->>RPC: Address, height, data
RPC-->>User: JSON object with sparkname data
else Spark not enabled
RPC-->>User: Error (Spark not activated)
end
```
Actionable comments posted: 1
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (9)
- src/chainparams.cpp (6 hunks)
- src/consensus/consensus.h (1 hunks)
- src/consensus/params.h (2 hunks)
- src/lelantus.cpp (1 hunks)
- src/miner.cpp (2 hunks)
- src/test/firsthalving_tests.cpp (2 hunks)
- src/test/main_tests.cpp (2 hunks)
- src/validation.cpp (6 hunks)
- src/wallet/lelantusjoinsplitbuilder.cpp (1 hunks)
Additional comments not posted (27)
src/test/main_tests.cpp (4)
24-24: LGTM! The code changes are approved.
26-26: LGTM! The code changes are approved.
27-28: LGTM! The code changes are approved.
29-30: LGTM! The code changes are approved.
src/consensus/consensus.h (1)
39-39: Approve the addition of the `NEW_MAX_TX_EXTRA_PAYLOAD` constant. The addition of the new constant is approved as it does not modify any existing code.
Verify the usage and impact of `NEW_MAX_TX_EXTRA_PAYLOAD` in the codebase. Please ensure that:
- The constant is used correctly in all relevant parts of the codebase.
- The increased payload size does not have any unintended consequences on transaction processing, storage, or network performance.
- Proper validation is in place to enforce the new payload size limit.
Run the following script to verify the constant usage:
Verification successful
Usage of `NEW_MAX_TX_EXTRA_PAYLOAD` confirmed in transaction validation. The `NEW_MAX_TX_EXTRA_PAYLOAD` constant is used in `src/validation.cpp` to enforce payload size limits in transactions. This confirms its role in the validation logic, ensuring that the new limit is applied after a specific block height.
- Location of usage: `src/validation.cpp`, where the constant is used to validate transaction payload sizes.

Please ensure that this logic is thoroughly tested to prevent any unintended consequences on transaction processing. If further verification is needed, consider reviewing test coverage for this validation path.
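To make the mechanism concrete, here is a minimal sketch of such a height-gated limit; the constant values and the function shape are illustrative assumptions, not the actual Firo code:

```cpp
#include <cstddef>

// Illustrative values; the real constants live in src/consensus/consensus.h.
static constexpr std::size_t MAX_TX_EXTRA_PAYLOAD     = 10000;
static constexpr std::size_t NEW_MAX_TX_EXTRA_PAYLOAD = 60000;

// Returns true if the extra payload fits the limit in force at nHeight.
bool CheckExtraPayloadSize(std::size_t payloadSize, int nHeight, int upgradeHeight)
{
    const std::size_t limit = (nHeight < upgradeHeight)
        ? MAX_TX_EXTRA_PAYLOAD      // legacy limit before the upgrade height
        : NEW_MAX_TX_EXTRA_PAYLOAD; // larger limit afterwards
    return payloadSize <= limit;
}
```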
Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the usage and impact of `NEW_MAX_TX_EXTRA_PAYLOAD` in the codebase.

# Test 1: Search for the constant usage. Expect: At least one occurrence in transaction handling code.
rg --type c++ -A 5 $'NEW_MAX_TX_EXTRA_PAYLOAD'

# Test 2: Search for payload size validation. Expect: Validation against the new limit.
rg --type c++ -A 5 $'(tx|transaction).*extra.*payload.*size'
```

Length of output: 163
Script:
```bash
#!/bin/bash
# Description: Verify the usage and impact of `NEW_MAX_TX_EXTRA_PAYLOAD` in the codebase.

# Test 1: Search for the constant usage. Expect: At least one occurrence in transaction handling code.
rg -A 5 'NEW_MAX_TX_EXTRA_PAYLOAD'

# Test 2: Search for payload size validation. Expect: Validation against the new limit.
rg -A 5 '(tx|transaction).*extra.*payload.*size'
```

Length of output: 986
src/test/firsthalving_tests.cpp (3)
Line range hint 115-267: LGTM! The test case provides thorough coverage for verifying the subsidy halving mechanism and developer payouts. The test case logic looks good.
Line range hint 269-309: LGTM! The test case provides additional coverage for verifying the consensus rules related to developer payouts. The test case logic looks good.
Line range hint 115-115: Verify the impact of removing `nSubsidyHalvingStopBlock` on the halving mechanism. Ensure that the removal of this parameter does not introduce any unintended consequences or bugs in the subsidy halving logic.
Run the following script to verify the halving mechanism:
Verification successful
Subsidy Halving Mechanism Verified
The removal of `nSubsidyHalvingStopBlock` does not negatively impact the subsidy halving mechanism. The logic is now governed by `nSubsidyHalvingFirst`, `nSubsidyHalvingSecond`, and `nSubsidyHalvingInterval`, and is thoroughly tested in the codebase.
- Functions like `GetBlockSubsidyWithMTPFlag` and `GetBlockSubsidy` implement the subsidy calculation logic.
- Test cases in `firsthalving_tests.cpp` and other files verify the correct application of the halving logic.

Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the impact of removing `nSubsidyHalvingStopBlock` on the halving mechanism.

# Test: Search for potential issues related to the removal of `nSubsidyHalvingStopBlock`.
# Expect: No concerning usages or potential bugs.
rg --type cpp -i 'nSubsidyHalvingStopBlock' -A 5
```

Length of output: 48
Script:
```bash
#!/bin/bash
# Description: Search for functions or logic related to subsidy calculations and halving intervals.

# Search for subsidy calculation functions or logic in the codebase.
rg --type cpp -i 'subsidy' -A 10

# Search for halving interval logic in the codebase.
rg --type cpp -i 'halving' -A 10
```

Length of output: 80171
src/consensus/params.h (2)
164-172: New consensus parameters for stage 4 coinbase payment distribution. The new parameters `stage4StartBlock`, `stage4DevelopmentFundShare`, `stage4CommunityFundShare`, and `stage4MasternodeShare` collectively define the coinbase payment distribution for stage 4.
Please verify the following:
- The `stage4StartBlock` is set to the intended block height for the start of stage 4.
- The fund share percentages (`stage4DevelopmentFundShare`, `stage4CommunityFundShare`, `stage4MasternodeShare`) add up to 100%.
- The corresponding fund addresses for stage 4 are correctly set in the codebase.
174-175: Introduction of tail emission after stage 4. The new parameter `tailEmissionBlockSubsidy` indicates that there will be a perpetual block subsidy even after stage 4, creating a tail emission of coins. Introducing a tail emission is a significant change to the coin emission schedule. Please verify the following:
- The `tailEmissionBlockSubsidy` amount is carefully chosen, taking into account the long-term supply and economic implications of the tail emission.
- The block reward calculation logic properly handles the activation of the tail emission based on the current block height.
src/wallet/lelantusjoinsplitbuilder.cpp (1)
196-198: LGTM! The added guard clause enhances the robustness of the `Build` method by preventing further operations related to the Sigma pool if it has already been closed at the specified block height (`stage4StartBlock`). This ensures that the method does not proceed under invalid conditions.
src/miner.cpp (1)
838-863: Significant changes to introduce the stage 4 funding period. LGTM! The key changes are:
- Introduction of the stage 4 funding period starting at block height `params.stage4StartBlock`.
- Removal of the `nSubsidyHalvingStopBlock` parameter.
- New parameters `stage4DevelopmentFundShare` and `stage4CommunityFundShare` to define the payout percentages for stage 4.
- Updated logic to calculate the development and community fund payouts based on the current stage (3 or 4).

Please verify the following:
- The payout addresses `params.stage3DevelopmentFundAddress` and `params.stage3CommunityFundAddress` are correct for stage 3.
- The payout percentages `params.stage3DevelopmentFundShare`, `params.stage3CommunityFundShare`, `params.stage4DevelopmentFundShare`, and `params.stage4CommunityFundShare` are as per the consensus.
- The payouts are correctly deducted from the block subsidy calculated using `GetBlockSubsidyWithMTPFlag()`.
src/chainparams.cpp (7)
208-213: Changes look good! The changes introduce the new stage 4 funding parameters and remove the `nSubsidyHalvingStopBlock` parameter as described in the summary. The values assigned to the new parameters also look correct.
535-540: LGTM! The stage 4 funding parameter changes for the test network look good and are consistent with the changes made to the main network parameters.
802-803: Please confirm the intention behind the aggressive halving schedule changes for the dev network. The `nSubsidyHalvingSecond` and `nSubsidyHalvingInterval` values have been significantly reduced, which will cause the block subsidy to halve much faster than before on the dev network. Please confirm this is intentional for testing/experimental purposes and won't have any unintended side effects.
817-822: Stage 4 funding parameter additions look good. The new stage 4 funding parameters have been added consistently to the dev network parameters, similar to the main and test networks. The values assigned also look suitable for the dev network.
1056-1056: The stage 3 start block change for regtest looks fine. Delaying the start of stage 3 to block 2000 on regtest is a reasonable change for testing purposes and shouldn't cause any issues.
1063-1068: The stage 4 parameter values for regtest look acceptable. Although the values differ from the other networks, with a higher development fund share and lower masternode share, these should be okay for regression testing purposes. The tail emission block subsidy matches the other networks.
Line range hint 1275-1275: The updated regtest genesis block parameters look good! The changes to the genesis block parameters for regtest:
- nTime set to `ZC_GENESIS_BLOCK_TIME`
- nNonce updated to a new value
- nBits set to the PoW limit of 0x207fffff
- nVersion changed to 1
- genesisReward set to 0

appear intentional and reasonable for testing purposes. Using the PoW limit for nBits and a zero block reward are common practices for regtest genesis blocks.
src/lelantus.cpp (1)
424-429: LGTM! The new conditional check correctly prevents processing Sigma-to-Lelantus joinsplit transactions after the `stage4StartBlock` height, effectively closing the Sigma pool as intended.
src/validation.cpp (7)
Line range hint 665-670: Allows larger transaction payload sizes after a protocol upgrade height. LGTM. The new check allows `MAX_TX_EXTRA_PAYLOAD` bytes of extra payload data before the `::Params().GetConsensus().stage4StartBlock` height, and `NEW_MAX_TX_EXTRA_PAYLOAD` bytes after that. This provides a clean upgrade path for increasing the supported extra payload size at a later height. The check itself looks good.
1926-1938: Introduces a new block subsidy schedule with a tail emission. Looks good! The updated `GetBlockSubsidyWithMTPFlag()` function defines a new block subsidy schedule:
- The original BTC-like halving schedule is used up to the `nSubsidyHalvingSecond` height.
- A third "stage" cuts the subsidy in half once more.
- Finally, a fourth "stage" introduces a fixed "tail emission" subsidy of `consensusParams.tailEmissionBlockSubsidy`.

This provides a smooth transition to a supply curve with no hard cap, while still preserving the original halving behavior for a long time. The staging logic using block height ranges looks good, and the tail emission avoids some economic issues with a hard cap. A compressed sketch of this staging is shown below.
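The heights, base subsidy, and halving handling here are placeholders, since the real schedule lives in `GetBlockSubsidyWithMTPFlag()`; this only illustrates the stage boundaries:

```cpp
#include <cstdint>

using CAmount = int64_t;
static constexpr CAmount COIN = 100000000;

CAmount GetBlockSubsidySketch(int nHeight, int nSubsidyHalvingSecond,
                              int stage4StartBlock, CAmount tailEmissionBlockSubsidy)
{
    if (nHeight >= stage4StartBlock)
        return tailEmissionBlockSubsidy;   // stage 4+: fixed tail emission, no hard cap
    CAmount subsidy = 25 * COIN;           // placeholder for the BTC-like halving schedule
    if (nHeight >= nSubsidyHalvingSecond)
        subsidy /= 2;                      // stage 3: one more halving
    return subsidy;
}
```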
1957-1961: Simplifies masternode payments after the stage 4 start to use a fixed percentage. LGTM. The `GetMasternodePayment()` function is updated to use a fixed percentage of the block reward for masternode payments, starting at the `stage4StartBlock` height. This removes the time-based conditions in the old logic, making the payments simpler and more predictable. The change looks good:
- `blockValue*params.stage4MasternodeShare/100` is used after the stage 4 start height.
- The old logic is kept for earlier blocks for backwards compatibility.

This is a nice simplification of the masternode payment logic.
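The simplified payment rule can be pictured as follows; the struct is a minimal stand-in for the consensus fields, and the legacy branch is a stub:

```cpp
#include <cstdint>

using CAmount = int64_t;

// Minimal stand-in for the consensus fields used here.
struct MasternodeShareParams {
    int stage4StartBlock;
    int stage4MasternodeShare; // percentage, e.g. 50 for half of the block reward
};

CAmount GetMasternodePaymentSketch(int nHeight, CAmount blockValue,
                                   const MasternodeShareParams& params)
{
    // After stage 4 starts, masternodes receive a fixed share of the reward.
    if (nHeight >= params.stage4StartBlock)
        return blockValue * params.stage4MasternodeShare / 100;
    // Earlier blocks keep the old time-based schedule (elided; placeholder only).
    return blockValue / 2;
}
```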
Line range hint 4840-4864: Implements new consensus rules to enforce development and community fund payouts. Looks good! The new contextual checks verify the coinbase transaction contains outputs paying the expected amounts to the development and community funds. Two payout schemes are enforced:
- After `stage3StartTime`:
  - Both dev and community fund payouts are required.
  - The payout amounts are calculated using the block subsidy and stage-specific share percentages (`stage3DevelopmentFundShare`, `stage4DevelopmentFundShare`, etc.).
  - The stage 3 percentages are used up to the `stage4StartBlock` height, after which the stage 4 percentages apply.
- Between `nSubsidyHalvingFirst` and `stage3StartTime`:
  - Only the dev fund payout is required, using the `stage2DevelopmentFundShare` percentage.

The payout amounts look to be calculated correctly for each stage, and the logic to scan the coinbase outputs for the expected payments is implemented properly. Overall this is a good addition to enforce the dev/community fund payouts!
4866-4875: Enforces "stage 2" development fund payouts between the first halving and the stage 3 start. Looks good. This code block implements the payout checks for the "stage 2" development fund, which is in effect between the `nSubsidyHalvingFirst` and `stage3StartTime` heights. During stage 2, a portion of the block subsidy on each block is expected to be paid to the dev fund. The expected payout amount is calculated as `devPayoutValue = (GetBlockSubsidy() * consensusParams.stage2DevelopmentFundShare) / 100`. The code then verifies the coinbase transaction contains an output paying `devPayoutValue` to the `stage2DevelopmentFundAddress`. If the output is not found, an error is returned. The payout calculation and output check look correct for stage 2. This properly enforces the expected dev fund payments during this period.
4847-4852: Calculates the expected development and community fund payouts for stage 3 and later. Looks good. This code calculates the expected dev and community fund payout amounts for blocks after the `stage3StartTime`. The payout amounts are calculated differently depending on whether the block is in stage 3 or in stage 4 and later:
- For stage 3 (block height < `stage4StartBlock`), the `stage3DevelopmentFundShare` and `stage3CommunityFundShare` percentages are used.
- For stage 4+ (block height >= `stage4StartBlock`), the `stage4DevelopmentFundShare` and `stage4CommunityFundShare` percentages are used.

The code selects the appropriate percentages based on the block height, then calculates the payout amounts as:

```cpp
devPayoutValue = (GetBlockSubsidy() * devShare) / 100;
communityPayoutValue = (GetBlockSubsidy() * communityShare) / 100;
```

The subsidy and percentage values are all pulled from the consensus parameters. This logic looks correct for calculating the expected payouts based on the block's stage. Using different share percentages for each stage allows for flexibility in the payout scheme.
Line range hint 4853-4863: Verifies the coinbase transaction contains the expected dev and community fund payouts. Looks good. This code scans the outputs of the coinbase transaction to verify the block pays the expected amounts to the development and community funds. For each coinbase output, it checks if the output script matches either:
- the dev fund script (`devPayoutScript`) with an output amount equal to the expected dev payout (`devPayoutValue`), or
- the community fund script (`communityPayoutScript`) with an amount equal to the expected community payout (`communityPayoutValue`).

Boolean flags `devFound` and `communityFound` are used to track whether a valid output was found for each fund. After scanning all outputs, the code checks that both `devFound` and `communityFound` are true. If either flag is still false, it means the corresponding payout was missing from the coinbase outputs. In that case, an error is returned with a clear message indicating the block is invalid due to a missing dev or community payout. This logic looks correct for enforcing the presence of the expected fund payouts.
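A self-contained sketch of that scan is below; the simplified types stand in for `CTransaction`/`CScript`, and the flag-based structure mirrors the description above:

```cpp
#include <cstdint>
#include <string>
#include <vector>

using CAmount = int64_t;

struct TxOutSketch {
    std::string scriptPubKey; // stand-in for CScript
    CAmount nValue;
};

// Returns true only if both expected fund payouts appear in the coinbase outputs.
bool CheckFundPayouts(const std::vector<TxOutSketch>& coinbaseOutputs,
                      const std::string& devPayoutScript, CAmount devPayoutValue,
                      const std::string& communityPayoutScript, CAmount communityPayoutValue)
{
    bool devFound = false, communityFound = false;
    for (const auto& out : coinbaseOutputs) {
        if (out.scriptPubKey == devPayoutScript && out.nValue == devPayoutValue)
            devFound = true;
        else if (out.scriptPubKey == communityPayoutScript && out.nValue == communityPayoutValue)
            communityFound = true;
    }
    return devFound && communityFound; // either one missing -> block is invalid
}
```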
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (1)
- src/chainparams.cpp (6 hunks)
Files skipped from review as they are similar to previous changes (1)
- src/chainparams.cpp
Actionable comments posted: 7
🧹 Nitpick comments (34)
src/qt/sendcoinsdialog.cpp (1)
311-322
: Great addition of Spark name resolution functionality.This feature allows users to send funds to human-readable Spark names (e.g., "@username") rather than requiring them to enter full addresses. The implementation includes proper error handling and clear user feedback.
Consider adding real-time validation of Spark names in the SendCoinsEntry class so users get immediate feedback on whether a Spark name exists, rather than only finding out after clicking the send button.
src/libspark/ownership_proof.h (1)
1-31: New ownership proof class implementation looks good, but lacks documentation. The implementation of the `OwnershipProof` class is straightforward and structurally sound, with proper serialization support. However, the code would benefit from documentation explaining the purpose of the class and its member variables, especially since this relates to cryptographic proofs where clarity is important. Consider adding documentation and making member variables private with accessor methods if needed:

```diff
 namespace spark {

 class OwnershipProof{
+    // Class for creating and verifying ownership proofs in the Spark protocol
 public:
     static constexpr std::size_t memoryRequired() {
         return Scalar::memoryRequired() * 3 + GroupElement::memoryRequired();
     }

     ADD_SERIALIZE_METHODS;
     template <typename Stream, typename Operation>
     inline void SerializationOp(Stream& s, Operation ser_action) {
         READWRITE(A);
         READWRITE(t1);
         READWRITE(t2);
         READWRITE(t3);
     }

-public:
+private:
     GroupElement A;
     Scalar t1;
     Scalar t2;
     Scalar t3;
+
+public:
+    // Getters for the private member variables
+    const GroupElement& GetA() const { return A; }
+    const Scalar& GetT1() const { return t1; }
+    const Scalar& GetT2() const { return t2; }
+    const Scalar& GetT3() const { return t3; }
+
+    // Setters if needed
+    void SetA(const GroupElement& value) { A = value; }
+    void SetT1(const Scalar& value) { t1 = value; }
+    void SetT2(const Scalar& value) { t2 = value; }
+    void SetT3(const Scalar& value) { t3 = value; }
 };
```
src/wallet/lelantusjoinsplitbuilder.cpp (1)
195-216: Add a comment explaining the Sigma protocol deprecation. While the implementation is correct, it would be helpful to add a comment explaining the purpose of this conditional check for future maintainers.

```diff
+    // Only allow spending from Sigma coins if we haven't reached the Sigma end block.
+    // After this block, the Sigma protocol is deprecated and these coins can no longer be spent.
     if (chainActive.Height() < Params().GetConsensus().nSigmaEndBlock) {
         try {
             CAmount availableBalance(0);
```
src/qt/addressbookpage.cpp (6)
16-16: Include organization could be improved. The new include for "createsparknamepage.h" is added in between existing includes, which breaks the standard pattern. Consider grouping includes by type (system, project, local) and sorting alphabetically within groups for better maintainability.
215-217: Add a comment explaining why editing is disallowed for Spark names. The code prevents editing for Spark names by returning early, but there's no explanation for why this restriction exists. Consider adding a brief comment to clarify the reasoning.
277-277: Add a comment explaining why deletion is disallowed for Spark names. The code prevents deletion of Spark names, but there's no explanation for why this restriction exists. Consider adding a brief comment to clarify the reasoning.
299-300: Use a constant or enum instead of string comparison. String comparison with `ui->addressType->currentText() == AddressTableModel::SparkName` is fragile and could break if the text changes or is translated. Consider using an enum or constant for type checking.

```diff
-    bool fSparkNames = ui->addressType->currentText() == AddressTableModel::SparkName;
+    bool fSparkNames = ui->addressType->itemData(ui->addressType->currentIndex()).toInt() == SparkName;
```
413-417: Replace the hardcoded index with a more robust check. Using a hardcoded index (`idx == 2`) is brittle and could break if the order of items in the dropdown changes. Replace it with a check based on the data value.

```diff
-    if (idx == 2) {
+    if (ui->addressType->itemData(idx).toInt() == SparkName) {
```
436-441: Replace string contains() checks with more robust type checking. The filter logic uses string contains() checks, which are fragile if the text changes or is translated. Consider using enum values or constants for more robust filtering.

```diff
-    if (dataStr.contains("spark name"))
-        return typeFilter == 2;
-    else if (dataStr.contains("spark"))
-        return typeFilter == 0;
-    else if (dataStr.contains("transparent"))
-        return typeFilter == 1;
+    if (dataStr.contains("spark name"))
+        return typeFilter == SparkName;
+    else if (dataStr.contains("spark"))
+        return typeFilter == Spark;
+    else if (dataStr.contains("transparent"))
+        return typeFilter == Transparent;
```
src/libspark/test/ownership_test.cpp (3)
1-3: Configure static analysis for Boost Test macros. Cppcheck may flag `BOOST_FIXTURE_TEST_SUITE` as an unknown macro. Configure the analyzer or suppress this warning to avoid false positives.
7-40: Excellent coverage of serialization. The test verifies that the proof serializes and deserializes without loss of integrity. Consider testing partial/truncated streams to further validate error handling.

🧰 Tools
🪛 Cppcheck (2.10-2)
[error] 7-7: There is an unknown macro here somewhere. Configuration is required. If BOOST_FIXTURE_TEST_SUITE is a macro then please configure it.
(unknownMacro)
71-109: Robust negative testing for bad proofs. Randomizing each proof component independently is a strong approach to ensure detection of tampering. You might also test altering the message `m` for completeness.
src/qt/walletmodel.h (3)
203-203: Clarify validation responsibilities. `validateSparkNameData(...)` is a convenience wrapper around `CSparkNameManager::ValidateSparkNameData`. Indicate in docs or code comments that any further necessary checks (like user authorization) belong here if relevant.
205-205: Document the usage of special addresses for spark name fees. `initSparkNameTransaction(...)` presumably uses a specific dev fund address or special address. A short doc comment would clarify whether this is a standard approach or a special flow for spark name fees.
207-207: Enforce consistent spark name lookups. `getSparkNameAddress(...)` returns an empty string if the name is not found. Consider logging or surfacing a user-friendly error if the name was expected to exist.
src/qt/createsparknamepage.h (2)
23-23: Clarify the usage of `feeText`. Though it stores the text representation of the fee, consider clarifying how and when it's updated or displayed.
41-45: Add method-level comments for user-facing slots. Methods like `on_generateButton_clicked()` or `on_sparkNameEdit_textChanged()` benefit from doxygen-style comments, enhancing maintainability and clarity.
src/qt/addresstablemodel.cpp (2)
27-27: Unify naming for SparkName constants. Currently, `SparkName` is assigned "Spark names", while the UI displays "spark name". Consider standardizing the label for consistency.
38-43: Remove or document the commented-out enum. If these lines are obsolete, consider removing them to reduce clutter; if they're reserved for future use, add explanatory comments.
src/spark/sparkwallet.h (2)
130-144: Prefer modern pointer initialization. Use `nullptr` rather than `NULL` for default pointer arguments to align with modern C++ best practices.
146-150: Clarify the SparkName fee parameters. Add doxygen comments clarifying how `sparkNamefee` and `txFee` interrelate and are calculated, to improve maintainability.
src/spark/state.cpp (1)
805-822: Improve conflict feedback in spark name checks. When `IsInConflict` returns false, this code simply returns `false`. Consider logging an explicit error or returning a more descriptive error code so developers can easily diagnose potential conflicts.

```diff
+    LogPrintf("CheckSparkTransaction: Spark name conflict detected: %s\n", sparkTxData.name.c_str());
     return false;
```
🧰 Tools
🪛 Cppcheck (2.10-2)
[error] 811-811: Syntax Error
(internalAstError)
src/spark/sparkwallet.cpp (3)
271-290: New `isAddressMine(const spark::Address&)` logic. The logic looks correct, checking the stored addresses and falling back to the diversifier. Consider ensuring the diversifier does not overflow an `int32_t` if large.
1748-1748: Magic numbers in fee/size estimation. Consider extracting the constants (924, 1803, 322, 34) into named static constants or config variables for clarity.

```diff
-    size = 924 + 1803*(spendCoins.size()) + 322*(mintNum+1) + 34*utxoNum + additionalTxSize;
+    static constexpr size_t SPARK_BASE_SIZE = 924;
+    static constexpr size_t SPARK_SPEND_FACTOR = 1803;
+    static constexpr size_t SPARK_MINT_FACTOR = 322;
+    static constexpr size_t SPARK_UTXO_FACTOR = 34;
+    size = SPARK_BASE_SIZE
+         + SPARK_SPEND_FACTOR * spendCoins.size()
+         + SPARK_MINT_FACTOR * (mintNum + 1)
+         + SPARK_UTXO_FACTOR * utxoNum
+         + additionalTxSize;
```
1596-1633: Implementation of `CreateSparkNameTransaction`. This function correctly generates a developer payout, verifies the spark address belongs to the wallet, and appends the spark name transaction data. Confirm the dev payout address is accurate in all environments (testnet, mainnet, etc.). Would you like a reference test scenario to ensure the developer address is correct across various networks?
src/qt/walletmodel.cpp (2)
1326-1340: Consider adding error handling for address generation failures. Currently, `wallet->sparkWallet->generateNewAddress()` and `SetSparkAddressBook()` are called without checking for potential errors (e.g., wallet locked). Adding basic error handling or logging could improve reliability.
1562-1578: Clarify the hard-coded validity block value. Here, `sparkNameData.sparkNameValidityBlocks` is set to `1000` with a "// doesn't matter" comment. Consider clarifying it or making it configurable to avoid confusion about its purpose.
src/test/sparkname_tests.cpp (2)
48-59: Address generation duplicates wallet code. `GenerateSparkAddress()` mirrors logic in the production `WalletModel::generateSparkAddress()`. If feasible, consider reusing or abstracting the common logic to keep test and production code in sync.
61-82: Lack of explicit error reporting on transaction commit failures. When committing the transaction in `CreateSparkNameTx(...)`, if it fails, the code only updates `lastState`. Consider additional checks or logs to make test failures more noticeable.
src/sparkname.h (1)
90-171: The central manager for Spark name functionality is well-defined. `CSparkNameManager` offers conflict checks, name validation, and transaction data parsing. Consider adding provisions for concurrency control if external callers invoke these methods in parallel, since SparkName additions/removals rely on internal maps.
src/sparkname.cpp (4)
12-12: Consider managing the global singleton's lifecycle more explicitly. While initializing `CSparkNameManager::sharedSparkNameManager` with a static allocation may be convenient, it can cause complications in testing or controlled shutdown scenarios. You might want to control the creation and destruction of this singleton more explicitly to avoid potential resource or ordering issues at program shutdown.
125-233: Consider breaking this large function into smaller utility functions. The `CheckSparkNameTx` method covers parsing, validations, fee checks, ownership proof handling, and more. Splitting it into smaller subroutines (e.g., one for payload/ownership checks, one for name validity, one for fee checks) would improve readability, maintainability, and testability.
361-376: Ensure thread safety when mutating spark name structures. `RemoveSparkNamesLosingValidity` directly erases items from `sparkNames` and `sparkNameAddresses`. It's unclear whether these data structures can be accessed concurrently from other threads (such as block processing or mempool validation). If so, consider locking them or otherwise ensuring thread safety.
378-388: Validate potential locale constraints for character checks. The `IsSparkNameValid` method relies on `isalnum(c)`, which can behave differently under certain locale settings. If cross-locale consistency matters (e.g., only supporting ASCII-based character sets), consider using an explicit check for ASCII ranges rather than `isalnum`.
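A locale-independent variant of the check suggested there could look like this (a sketch; the exact accepted character set is an assumption based on the 1-20 alphanumeric/hyphen rule mentioned elsewhere in this review):

```cpp
// Accept only ASCII letters, digits, and '-', regardless of the active locale.
bool IsAsciiSparkNameChar(char c)
{
    return (c >= '0' && c <= '9')
        || (c >= 'a' && c <= 'z')
        || (c >= 'A' && c <= 'Z')
        || c == '-';
}
```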
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (49)
- contrib/bitcoin-qt.pro (1 hunks)
- src/Makefile.am (3 hunks)
- src/Makefile.qt.include (4 hunks)
- src/Makefile.test.include (2 hunks)
- src/chain.h (4 hunks)
- src/chainparams.cpp (9 hunks)
- src/consensus/params.h (1 hunks)
- src/firo_params.h (1 hunks)
- src/lelantus.cpp (1 hunks)
- src/libspark/keys.cpp (2 hunks)
- src/libspark/keys.h (2 hunks)
- src/libspark/ownership_proof.h (1 hunks)
- src/libspark/test/ownership_test.cpp (1 hunks)
- src/libspark/util.h (1 hunks)
- src/primitives/transaction.h (1 hunks)
- src/qt/addressbookpage.cpp (9 hunks)
- src/qt/addressbookpage.h (2 hunks)
- src/qt/addresstablemodel.cpp (9 hunks)
- src/qt/addresstablemodel.h (2 hunks)
- src/qt/bitcoinaddressvalidator.cpp (2 hunks)
- src/qt/createsparknamepage.cpp (1 hunks)
- src/qt/createsparknamepage.h (1 hunks)
- src/qt/forms/createsparkname.ui (1 hunks)
- src/qt/forms/receivecoinsdialog.ui (1 hunks)
- src/qt/receivecoinsdialog.cpp (6 hunks)
- src/qt/receivecoinsdialog.h (1 hunks)
- src/qt/sendcoinsdialog.cpp (1 hunks)
- src/qt/sendcoinsentry.cpp (1 hunks)
- src/qt/walletmodel.cpp (2 hunks)
- src/qt/walletmodel.h (2 hunks)
- src/rpc/blockchain.cpp (3 hunks)
- src/rpc/client.cpp (1 hunks)
- src/secp256k1/src/cpp/Scalar.cpp (6 hunks)
- src/spark/sparkwallet.cpp (7 hunks)
- src/spark/sparkwallet.h (3 hunks)
- src/spark/state.cpp (7 hunks)
- src/spark/state.h (2 hunks)
- src/sparkname.cpp (1 hunks)
- src/sparkname.h (1 hunks)
- src/test/sparkname_tests.cpp (1 hunks)
- src/txdb.cpp (1 hunks)
- src/txmempool.cpp (1 hunks)
- src/txmempool.h (1 hunks)
- src/ui_interface.h (2 hunks)
- src/validation.cpp (6 hunks)
- src/wallet/lelantusjoinsplitbuilder.cpp (1 hunks)
- src/wallet/rpcwallet.cpp (3 hunks)
- src/wallet/wallet.cpp (2 hunks)
- src/wallet/wallet.h (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- src/qt/forms/createsparkname.ui
🧰 Additional context used
🪛 Cppcheck (2.10-2)
src/test/sparkname_tests.cpp
[error] 128-128: There is an unknown macro here somewhere. Configuration is required. If BOOST_FIXTURE_TEST_SUITE is a macro then please configure it.
(unknownMacro)
src/libspark/test/ownership_test.cpp
[error] 7-7: There is an unknown macro here somewhere. Configuration is required. If BOOST_FIXTURE_TEST_SUITE is a macro then please configure it.
(unknownMacro)
src/spark/state.cpp
[error] 811-811: Syntax Error
(internalAstError)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Create Source Distribution
🔇 Additional comments (101)
contrib/bitcoin-qt.pro (1)
22-22: Correctly added UI form for Spark Name creation. The new form `createsparkname.ui` is correctly added to the Qt project forms list, which is necessary for the new Spark Name feature.
src/qt/receivecoinsdialog.h (1)
87-87: Added private slot for Spark Name creation. The `createSparkName()` slot is correctly added and will be connected to the new Spark Name button in the UI. This slot creates and displays the appropriate dialog for Spark Name creation.
src/qt/forms/receivecoinsdialog.ui (1)
208-226: Well-structured UI button for Spark Name creation. The "Create Spark name" button is properly added to the UI with appropriate properties, including size policy, tooltip, and text. This button will enable users to initiate the Spark Name creation process.
One thing to note: ensure that the button is connected to the `createSparkName()` slot in the dialog implementation file, and that its visibility is properly controlled based on the selected address type (it should only be visible when "Spark" is selected).
src/libspark/util.h (1)
33-33: Added ownership transcript label constant. The new constant `LABEL_TRANSCRIPT_OWNERSHIP` follows the established naming pattern and will be used for ownership verification in the Spark protocol.
src/rpc/client.cpp (1)
204-205: New RPC command for Spark name registration added. This change adds a parameter conversion rule for the new `registersparkname` RPC command. The parameter at index 2 will be converted from JSON format to the appropriate internal format when the command is called.
src/txmempool.h (1)
527-528: ❓ Verification inconclusive
New mechanism to prevent duplicate Spark name registrations. The added map structure stores Spark names to prevent duplicates in the mempool. Each name is stored with an associated pair containing a string (likely owner information) and a transaction hash. However, ensure that this structure is properly managed in the `removeUnchecked` method to clean up entries when transactions are removed from the mempool.

🏁 Script executed:

```bash
#!/bin/bash
# Check if removeUnchecked method handles the sparkNames map
rg "removeUnchecked.*sparkNames" --include "*.cpp" -A 5 -B 5
```

Length of output: 504
Attention: Confirm Cleanup of `sparkNames` in `removeUnchecked`. The new `sparkNames` map is being used to track duplicate Spark registrations along with associated owner data and transaction hashes. However, we couldn't conclusively locate cleanup logic in the `removeUnchecked` method based on our first search attempt. Please ensure that the `removeUnchecked` method (likely in `src/txmempool.cpp`) is updated to remove the corresponding entries from `sparkNames` when a transaction is cleared from the mempool. If needed, run a manual check or use a refined search (for example, looking at the context of the `removeUnchecked` function and verifying whether it handles `sparkNames` entries) to confirm that the cleanup is implemented correctly.
src/Makefile.am (3)
247-247: Added sparkname.h header to the build system. This change adds the new `sparkname.h` header file to the build system, making it available for inclusion in other parts of the codebase.
428-428: Added sparkname.cpp implementation to the server sources. This change adds the `sparkname.cpp` implementation file to the server sources, ensuring it is compiled into the server library.
689-689: Added ownership_proof.h to the libspark sources. This change adds the `libspark/ownership_proof.h` header file to the libspark sources, providing ownership proof functionality used by the Spark name feature.
src/lelantus.cpp (1)
424-429: New validation check prevents Sigma-to-Lelantus conversions after the end block. This change adds a critical validation check to prevent Sigma-to-Lelantus conversions after the Sigma pool has closed (when the current height exceeds `nSigmaEndBlock`). This enforces the planned deprecation of the Sigma protocol. Don't forget to update the `nSigmaEndBlock` parameter before merging this PR, as mentioned in the PR description and comments.
src/txdb.cpp (1)
432-434: Added Spark name tracking to the block index. These additions allow the block index to track which Spark names were added or removed in each block, enabling proper handling of Spark name transactions and state management.
src/primitives/transaction.h (1)
40-40: Added new TRANSACTION_ALIAS transaction type. This new transaction type (value 10) supports the Spark names feature, providing a distinct classification for alias-related transactions in the system.
src/txmempool.cpp (1)
src/txmempool.cpp (1)
628-635: ✅ Verification successful
Ensures proper cleanup of spark name references. This new loop correctly removes all spark name entries referencing a transaction that's being removed from the mempool. The iterator handling is done properly to avoid invalidation during deletion. However, I don't see the declaration of the `sparkNames` member variable in the provided code segment. This appears to be a new member that should be added to the `CTxMemPool` class declaration. Let's check if the `sparkNames` member is properly declared in the header file:

🏁 Script executed:
🏁 Script executed:
```bash
#!/bin/bash
# Check for the declaration of sparkNames in the txmempool.h header file
grep -A 2 -B 2 "sparkNames" src/txmempool.h
```

Length of output: 213
Verified: Proper Declaration and Cleanup of `sparkNames`. The `sparkNames` member is correctly declared in `src/txmempool.h`, and the cleanup loop in `src/txmempool.cpp` (lines 628-635) properly handles iterator invalidation during deletion.
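The erase-while-iterating pattern referred to above is sketched here; the map layout (name to {address, txid}) follows the review note, with simplified stand-in types:

```cpp
#include <map>
#include <string>
#include <utility>

using TxHash = std::string; // stand-in for uint256

// Drop every spark-name entry that references the transaction leaving the mempool.
void RemoveSparkNamesForTx(std::map<std::string, std::pair<std::string, TxHash>>& sparkNames,
                           const TxHash& removedTxHash)
{
    for (auto it = sparkNames.begin(); it != sparkNames.end(); ) {
        if (it->second.second == removedTxHash)
            it = sparkNames.erase(it); // erase() returns the next valid iterator
        else
            ++it;
    }
}
```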
95-95
: Added test for ownership functionality.Good addition of the ownership test file, which will ensure the underlying functionality is properly tested.
199-200
: Added SparkName tests.The addition of the SparkName tests is essential for validating the new functionality. The line continuation character is correctly added to maintain proper Makefile syntax.
src/spark/state.h (2)
13-13: Added inclusion of the sparkname header. The header inclusion is necessary for accessing the `CSparkNameTxData` type used in the class.
32-34: ✅ Verification successful
Added member variable for storing Spark name data. This new member variable will store Spark name information in the transaction info class, allowing the system to track and manage spark names during transaction processing. Let's verify the `CSparkNameTxData` structure to understand what data is being stored:

🏁 Script executed:

```bash
#!/bin/bash
# Check the definition of CSparkNameTxData
grep -A 15 "class CSparkNameTxData" src/sparkname.h
```

Length of output: 530
Spark Name Data Member Verified. The new member variable `sparkNames` in `src/spark/state.h` successfully stores Spark name information using the `CSparkNameTxData` structure. The definition of `CSparkNameTxData` confirms that it captures all the necessary fields: the version (`nVersion`, defaulting to `CURRENT_VERSION`), an `inputsHash`, the spark name (`name`, validated as 1-20 alphanumeric/hyphen characters), the destination address (`sparkAddress`), and the ownership proof (`addressOwnershipProof`). This implementation properly facilitates tracking and managing Spark names during transaction processing.
src/wallet/wallet.h (1)
1105-1109: Implementation of Spark Name transaction creation. This new method extends the wallet's functionality to support the creation of Spark Name transactions. The method signature follows the established pattern of other transaction creation methods in the `CWallet` class.
src/ui_interface.h (2)
src/ui_interface.h (2)
19-19: Forward declaration of `CSparkNameBlockIndexData`. This forward declaration is needed for the new Spark Name notification signals.
125-128: Added signals for Spark Name notifications. These signals allow the UI to be notified when Spark Names are added or removed, enabling the interface to update accordingly. The pattern follows existing notification signals in the `CClientUIInterface` class.
src/chain.h (4)
src/chain.h (4)
25-25: Added include for sparkname.h. Including the sparkname.h header provides access to the `CSparkNameBlockIndexData` type used by the new member variables.
272-275: Added Spark Name tracking to the block index. These maps track Spark Names that were created, extended, or removed in each block. The design uses a clear mapping from name to its data, which includes address, expiration height, and additional info.
319-320: Updated SetNull to clear the Spark Name maps. Properly clears the new Spark Name maps when resetting the block index, ensuring consistent initialization.
596-599: Added serialization for Spark Name data. The serialization logic correctly handles the new Spark Name data fields, ensuring they're only serialized for blocks after the Spark Names feature activation height (`nSparkNamesStartBlock`). This approach maintains backward compatibility.
src/qt/addressbookpage.h (2)
src/qt/addressbookpage.h (2)
48-49: Added SparkName to AddressTypeEnum. This enum addition allows the address book to distinguish and handle Spark Names as a distinct address type.
65-65: Added platformStyle member variable. The platformStyle pointer will help maintain consistent styling in the address book page, following the pattern used in other UI components.
src/consensus/params.h (2)
src/consensus/params.h (2)
267-268: Properly added the Spark Names consensus parameters. These new parameters follow the established pattern for feature activation in the codebase. The array size of 21 for fees suggests a comprehensive fee structure for Spark Names. Remember to update the `nSparkNamesStartBlock` value with the appropriate hard fork block number before merging, as mentioned in the PR comments by psolstice and reubenyap.
272-272: New parameter for ending the Sigma feature. This parameter allows setting a specific block height at which the Sigma feature will end, as part of transitioning to Spark for privacy functionality.
src/secp256k1/src/cpp/Scalar.cpp (1)
src/secp256k1/src/cpp/Scalar.cpp (1)
217-217: Improved exception handling with standard C++ exceptions. Replacing string exceptions with `std::runtime_error` follows modern C++ best practices, making the code more maintainable and consistent with standard exception handling patterns. The error messages remain the same while the exception mechanism is improved. Also applies to: 259-259, 295-295, 317-317, 328-328, 337-337.
src/Makefile.qt.include (1)
src/Makefile.qt.include (1)
123-123
: Properly integrated Spark Names UI components into the build system.The changes correctly add all necessary files for the new Spark Names feature to the appropriate build variables, following the established pattern for Qt UI components integration.
Also applies to: 185-186, 269-270, 460-461
src/qt/receivecoinsdialog.cpp (5)
17-17: Clean include addition for Spark Name functionality. The new createsparknamepage.h include is appropriately placed with other UI component includes.
56-60: Good UI visibility control based on address type. When "Spark" is selected, the createSparkNameButton is made visible, while it is hidden for other address types. This follows good UI practice by showing contextually relevant controls.
90-90: Proper signal/slot connection for the new button. The connection between the button click and the handler method follows Qt best practices.
141-141: Consistent state reset in the clear() method. The createSparkNameButton visibility is properly reset when clearing the dialog, maintaining UI consistency.
468-473: Well-implemented Spark Name creation method. The createSparkName method follows the established pattern for creating dialogs in this codebase:
- Creates a new dialog with appropriate parameters
- Sets deletion on close for proper memory management
- Sets the model for data access
- Shows the dialog to the user
src/qt/addresstablemodel.h (3)
60-60: Well-defined constant for Spark Names. The addition of the static string constant `SparkName` follows the existing pattern for address types in this class.
94-94: Good addition of a method for Spark Name processing. The `ProcessPendingSparkNameChanges()` method extends the model's functionality to handle Spark Names.
96-96: Improved method declaration with a const qualifier. The addition of the `const` qualifier to `getWalletModel()` correctly indicates that this getter method doesn't modify the object's state, which is good C++ practice.
src/rpc/blockchain.cpp (4)
28-28: Appropriate include for Spark Name functionality. The sparkname.h include is correctly added for using the Spark Name related classes and functions.
180-216: Well-implemented getsparknames RPC method. The implementation follows best practices with:
- Proper help text and usage documentation
- Thread safety with LOCK(cs_main)
- Feature activation check (spark::IsSparkAllowed())
- A structured JSON response format

Don't forget to update the hard fork (HF) block number before merging, as mentioned in the PR objectives.
218-262: Well-implemented getsparknamedata RPC method. The method follows best practices with:
- Proper parameter validation
- Comprehensive help text
- Thread safety with LOCK(cs_main)
- Feature activation check
- Structured return data with the address, block height, and additional info
746-747: Correctly registered the new RPC commands. The new RPC methods are properly registered in the command table, making them available through the RPC interface.
src/qt/bitcoinaddressvalidator.cpp (1)
src/qt/bitcoinaddressvalidator.cpp (1)
70-74: Good extension of address validation for Spark Name notation. The validation logic is extended to support the Spark Name format by allowing:
- the '@' character at the start of the address
- the '.', '-', and '_' characters after '@'

This follows standard username/handle formats used in many systems.
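A simplified stand-in for the extended character rule might read as follows (the Qt validator itself works on `QString` input; this sketch only captures the accept/reject logic described above):

```cpp
#include <cctype>
#include <cstddef>

// pos is the character's position in the input; names starting with '@' are spark names.
bool IsValidAddressInputChar(char ch, std::size_t pos, bool inputIsSparkName)
{
    if (pos == 0 && ch == '@')
        return true; // spark names begin with '@'
    if (inputIsSparkName)
        return std::isalnum(static_cast<unsigned char>(ch)) != 0
            || ch == '.' || ch == '-' || ch == '_';
    // Plain addresses keep the original alphanumeric rule (simplified here).
    return std::isalnum(static_cast<unsigned char>(ch)) != 0;
}
```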
🧰 Tools
🪛 Cppcheck (2.10-2)
[performance] 74-74: Function parameter 'hash_bind_inner' should be passed by const reference.
(passedByValue)
src/wallet/lelantusjoinsplitbuilder.cpp (1)
195-219: ❓ Verification inconclusive
Implementation properly enforces the Sigma protocol end at the specified block height. The conditional check ensures that the balance calculation logic for `sigmaCoins` only runs if the current blockchain height is less than `nSigmaEndBlock`, which aligns with the PR's objective to close the Sigma pool. Please remember to update the HF block number for `nSigmaEndBlock` as mentioned in the PR comments before merging.

🏁 Script executed:

```bash
#!/bin/bash
# Verify if the HF block number is defined and what its value is
grep -rn "nSigmaEndBlock" --include="*.cpp" --include="*.h" .
```

Length of output: 961
Ensure the HF block number for `nSigmaEndBlock` is updated across all configurations. The logic in `src/wallet/lelantusjoinsplitbuilder.cpp` correctly enforces ending the Sigma protocol prior to the HF, as the condition checks the block height against `nSigmaEndBlock`. However, make sure that all references to the HF block, as seen in `chainparams.cpp`, are updated in line with the PR comments (e.g., replacing or updating values for mainnet and testnet) before merging.
- Verify that `consensus.nSigmaEndBlock` is set to the intended HF value in all relevant locations (e.g., see its usage in `src/chainparams.cpp` and `src/validation.cpp`).
- Double-check that any changes to the HF block number in `nSigmaEndBlock` are propagated consistently in the codebase.
src/wallet/wallet.cpp (2)
8872-8886: Well-implemented method for creating Spark name transactions. This new method provides a clean wrapper around `sparkWallet->CreateSparkNameTransaction()` with proper wallet availability and lock status checks, ensuring the operation can only be performed when the wallet is unlocked.
8255-8257: Good addition of a null check to prevent potential crashes. The addition of the null check for `sparkWallet` before calling its method improves code robustness by preventing null pointer dereferences that could occur if the Spark wallet is not initialized.
src/qt/addressbookpage.cpp (3)
125-126: LGTM: Spark names option added to SendingTab. The code adds a "Spark names" option to the address type dropdown when in SendingTab and Spark is allowed, which is a logical extension for the new functionality.
247-253: LGTM: Custom dialog for Spark name creation. Good implementation of custom handling for Spark name creation with proper memory management through Qt::WA_DeleteOnClose.
413-415: ✅ Verification successful
Verify that the ProcessPendingSparkNameChanges() implementation exists. The code calls `model->ProcessPendingSparkNameChanges()` when switching to Spark names, but there's no context about this method. Ensure it's properly implemented in the AddressTableModel class.

🏁 Script executed:

```bash
#!/bin/bash
# Check if ProcessPendingSparkNameChanges() is implemented in AddressTableModel
grep -r "ProcessPendingSparkNameChanges" --include="*.cpp" --include="*.h" .
```

Length of output: 314

Confirmed Implementation of `ProcessPendingSparkNameChanges()`. The method is correctly declared in src/qt/addresstablemodel.h and implemented in src/qt/addresstablemodel.cpp. No further changes are required.
src/wallet/rpcwallet.cpp (3)
src/wallet/rpcwallet.cpp (3)
3812-3824: Feature addition for spark name resolution in the spendspark function. This code adds support for resolving spark names when addresses start with '@', enhancing usability by allowing users to send to registered spark names instead of only full spark addresses.
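The resolution step can be pictured like this; the toy registry stands in for the spark name manager lookup, and the names and error message are illustrative:

```cpp
#include <map>
#include <optional>
#include <stdexcept>
#include <string>

// Toy registry standing in for the spark name manager.
static const std::map<std::string, std::string> kRegisteredNames = {
    {"alice", "sparkaddress-of-alice"} // hypothetical entry
};

std::optional<std::string> LookupSparkName(const std::string& name)
{
    auto it = kRegisteredNames.find(name);
    if (it == kRegisteredNames.end()) return std::nullopt;
    return it->second;
}

// If the destination starts with '@', substitute the registered spark address.
std::string ResolveSparkDestination(const std::string& dest)
{
    if (!dest.empty() && dest[0] == '@') {
        if (auto addr = LookupSparkName(dest.substr(1)))
            return *addr;
        throw std::runtime_error("unknown spark name: " + dest);
    }
    return dest; // already a full spark address
}
```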
3911-3989: New RPC method to register spark names. This adds a new `registersparkname` function that allows users to register a spark name associated with a spark address for a specified duration in years. The implementation:
- Properly validates the input parameters (name length, years of validity)
- Ensures Spark is active on the network
- Uses consensus parameters to calculate the appropriate fees
- Creates and commits the transaction with proper error handling
5861-5861: Registration of the new RPC command. The new `registersparkname` function is properly registered in the RPC commands table.
src/libspark/keys.cpp (4)
3-3: Include file is valid and necessary. No issues identified with adding the `"transcript.h"` header.
247-259: Implementation of `challenge` looks consistent and clear. The transcript-based approach for deriving a scalar challenge is straightforward and appears correct. Keep in mind any future domain-separation requirements if additional usage contexts arise.
292-310: Ownership proof verification logic is correct and well-structured. The comparison of computed group elements (left vs. right) is straightforward, and exceptions are thrown for invalid states.
262-290: ✅ Verification successful
Ensure cryptographically secure randomization. The proof construction process is logically correct, but it relies on `Scalar::randomize` for randomness. Consider verifying that `Scalar::randomize` is backed by a cryptographically secure PRNG to avoid potential vulnerabilities. Below is a sample script to locate and inspect `Scalar::randomize` calls in the repository:

🏁 Script executed:

```bash
#!/bin/bash
# We will search references to Scalar::randomize to confirm it uses a cryptographically secure source.
rg "Scalar::randomize" -A 5
```

Length of output: 416

Secure PRNG Verified. The investigation confirms that `Scalar::randomize` is implemented in `src/secp256k1/src/cpp/Scalar.cpp` using OpenSSL's `RAND_bytes`, ensuring cryptographically secure randomization. No further action is needed regarding this aspect.
42-69: Completeness test is appropriate. It verifies that a freshly constructed proof indeed passes verification, confirming end-to-end correctness.
src/libspark/keys.h (4)
Verifies that a freshly constructed proof indeed passes verification. This confirms end-to-end correctness.src/libspark/keys.h (4)
7-8: New includes appear necessary. No issues spotted. The header additions align with the new ownership proof functionality.
87-92: Method signatures provide the necessary parameters. `challenge` and `prove_own` align well with the definitions in the `.cpp` file and use appropriate parameter types.
93-95: Ownership verification signature is consistent. The function parameters match the implementation, and returning a boolean makes the API usage clear.
96-98: Comparison operator is a convenient addition. Relying on `encode(0)` is a clean approach. Ensure that `encode(0)` is sufficiently stable and unambiguous for comparing addresses.
src/validation.cpp (6)
54-54: Header inclusion for Spark name functionality. The inclusion of `sparkname.h` is necessary to support the new Spark name feature being implemented.
666-666: Enhanced payload size validation with a larger limit after the Sigma end block. The condition has been modified to allow a larger transaction payload size (`NEW_MAX_TX_EXTRA_PAYLOAD`) after the Sigma pool is closed, while maintaining the original size limit (`MAX_TX_EXTRA_PAYLOAD`) during the Sigma phase.
932-932: Added Spark name transaction data structure. This variable is used to store and track Spark name information during transaction validation.
970-973: Implemented the Sigma pool closure logic. This check prevents Sigma-to-Lelantus conversions after reaching the block height where the Sigma pool is officially closed, in line with the PR objective of closing the Sigma pool.
1022-1032: Added Spark name transaction validation. This code properly validates Spark name transactions by:
- Obtaining a singleton instance of the Spark name manager
- Verifying the transaction against the manager's validation rules
- Checking for conflicts with existing Spark names in the mempool

The implementation is thorough and handles error cases appropriately.
1642-1643: Added Spark name to mempool tracking. When a valid Spark name transaction is accepted, this code adds the name to the mempool's tracking map, storing both the name (converted to uppercase) and the associated Spark address and transaction hash. This ensures the uniqueness of names in the pending transaction pool.
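Since uniqueness is enforced case-insensitively, the mempool key is the upper-cased name; a one-function sketch of that normalization:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// "Alice" and "ALICE" normalize to the same mempool key.
std::string SparkNameKey(std::string name)
{
    std::transform(name.begin(), name.end(), name.begin(),
                   [](unsigned char c) { return static_cast<char>(std::toupper(c)); });
    return name;
}
```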
src/chainparams.cpp (8)
173-179: Ensure safe indexing and clarify usage. This array drives the spark name fee by name length, but it can trigger out-of-bounds access if the spark name length exceeds 20. Also, a `-1` fee for index 0 may cause confusion. Please verify all call sites to ensure that indexing always remains within bounds, and document how index 0 is intended to be used.
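Defensive indexing of the kind requested could look like this; the array size and the `-1` sentinel follow the review note, while the helper itself is hypothetical:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <string>

using CAmount = int64_t;

// feeByLength[len] is the fee for a name of length len (1..20); index 0 holds -1.
CAmount SparkNameFeeFor(const std::string& name,
                        const std::array<CAmount, 21>& feeByLength)
{
    const std::size_t len = name.size();
    if (len == 0 || len >= feeByLength.size())
        return -1; // out of range: caller must reject the name
    return feeByLength[len];
}
```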
419-419: Confirm Sigma end-block correctness. Please verify that `ZC_SIGMA_END_BLOCK` aligns with the final expected height for Sigma closure, particularly given the ongoing discussion of hard fork block updates.
493-495: Revisit the `INT_MAX` placeholder. Setting `nSparkNamesStartBlock` to `INT_MAX` effectively disables Spark Names on mainnet. This might be intentional for future activation, but be sure to update it once the final hard fork (HF) block number is determined, per the PR objectives.
737-737: Double-check the testnet Sigma end block. `ZC_SIGMA_TESTNET_END_BLOCK` should be the correct final block height for Sigma on testnet. Ensure it is fully aligned with your timeline and test phases.
798-798: Coordinate the spark name start block with the final testnet plan.
- Line 798 sets an exchange address start block. Check that it remains consistent with `nSparkNamesStartBlock = 174000` introduced on lines 800-801.
- Verify that block 174000 is the intended go-live for spark names on testnet and update it if it is merely a placeholder.

Also applies to: 800-801
1000-1000: Validate the devnet Sigma closure height. `consensus.nSigmaEndBlock = 3600` ends Sigma on devnet at a relatively low block number. Confirm whether this is correct or just for testing.
1047-1049: Check the devnet spark name parameters. `nSparkNamesStartBlock = 3500` may overlap quickly with `nSigmaEndBlock = 3600`. Confirm you want spark names to begin so close to the Sigma closure. If it's a placeholder, ensure the HF number is revised before merging.
1292-1294: Regtest spark name deployment. `nSparkNamesStartBlock = 2000` in regtest is fine for local testing, but if there's a reason to test earlier or later blocks, remember to adjust it.
src/qt/createsparknamepage.cpp (3)
17-27: Validate the name length and years range in the constructor. The constructor sets `numberOfYearsEdit` to [1..10], which is sensible, but confirm that you won't need testing for durations beyond 10 years. Also, ensure that the spark name length restrictions match the fee array indexing to avoid out-of-bounds usage.
73-82: Confirm the fee handling for edge lengths. `updateFee()` sets the label to "?" if the `sparkName` length is zero or too large, avoiding out-of-range indexing. This is good. Just ensure the fee array usage is consistently guarded in all relevant code paths (e.g., in `CreateSparkNameTransaction`).
83-149: Account for potential concurrency in wallet locks. Inside `CreateSparkNameTransaction`, you acquire `LOCK(cs_main)` and `LOCK(pwalletMain->cs_wallet)`. This is correct for concurrency protection, but confirm that no other higher-level locks or potential deadlocks exist. Otherwise, the flow and error handling look well structured for transaction creation.
src/qt/walletmodel.h (2)
160-162
: Check spark address generation reliability. generateSparkAddress() must handle rare errors (e.g., a locked wallet). If an edge case arises, ensure it fails gracefully or returns an empty string so the UI can handle it appropriately.
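One way to satisfy that contract, sketched under the assumption of a hypothetical underlying wallet call (getNewSparkAddress is illustrative, not the real API):

QString WalletModel::generateSparkAddress()
{
    try {
        // Assumed wallet-side helper; may throw if the wallet is locked.
        return QString::fromStdString(wallet->getNewSparkAddress());
    } catch (const std::exception &) {
        return QString(); // fail soft: callers treat an empty string as "no address"
    }
}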
209-213
: Ensure consistent coin control usage. prepareSparkNameTransaction(...) accepts a coinControl pointer but is often passed nullptr. Keep verifying that coin selection is correct, or that calling code is aware the transaction might pick from any coins by default.
src/qt/addresstablemodel.cpp (2)
105-113
: Good concurrency practice. Locking cs_pendingSparkNameChanges while appending to pendingSparkNameChanges ensures thread safety. This approach helps prevent data races.
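For reference, the pattern being praised is simply a scoped lock around the shared container, along these lines:

{
    LOCK(cs_pendingSparkNameChanges);
    pendingSparkNameChanges.push_back(change); // mutated only while the mutex is held
}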
230-231
: Avoid misclassification when checking label.startsWith("@"). Relying on "@" as the only indicator of a SparkName could lead to accidental misclassification if a custom label happens to start with '@'. Consider a more robust SparkName detection mechanism.
src/spark/sparkwallet.h (1)
14-14
: Verify header inclusion dependencies. Including sparkname.h here might introduce layering risks. Confirm there's no cyclic dependency, or consider forward declarations if feasible.
src/spark/state.cpp (6)
2-2
: No concerns with the new include. The addition of #include "sparkname.h" is straightforward and should not introduce any conflicts or overhead.
36-36
: Consistent synchronization with the spark name manager. Calling CSparkNameManager::GetInstance()->AddBlock(blockIndex) keeps the spark name manager in sync with spark state during index building.
246-247
: Initialization of fBackupRewrittenSparkNames. This boolean flag tracks spark name changes within the block. The initialization appears correct and properly scoped.
343-346
: Removal and addition of spark names at block connection. Using RemoveSparkNamesLosingValidity() followed by AddBlock() is consistent with the life cycle of spark names, ensuring stale names are cleaned up.
390-392
: Ensuring name manager consistency on disconnect. Invoking RemoveBlock(pindexDelete) on sparkNameManager during tip disconnection maintains accurate spark name state when the chain is rolled back.
316-328
: ✅ Verification successful
Verify spark name validity blocks to avoid out-of-range usage.
Storing future validity as pindexNew->nHeight + sparkName.second.sparkNameValidityBlocks is fine, but consider validating that large values of sparkNameValidityBlocks won't cause an overflow or an overly large end height. Please check if there's an upper limit or validation logic for sparkNameValidityBlocks elsewhere. For example:
🏁 Script executed:
#!/bin/bash
# Checking for upper bound validations on sparkNameValidityBlocks
rg -A 8 "sparkNameValidityBlocks"

Length of output: 6407
Spark Name Validity Validation Confirmed
The codebase already enforces an upper bound on sparkNameValidityBlocks (in src/sparkname.cpp, the value is capped by checking whether it exceeds nBlockPerYear * 10, effectively limiting validity to 10 years). As a result, adding this value to pindexNew->nHeight in src/spark/state.cpp is safe from overflow issues. No further overflow protection is required in this context.
src/spark/sparkwallet.cpp (4)
11-11
: Include of sparkname.h is straightforward. No issues with adding #include "sparkname.h".
268-269
: Implementation utilizes the new helper method. Delegating to isAddressMine(address) is consistent and helps maintain code clarity.
1315-1315
: Call to SelectSparkCoins updated to include an extra parameter. No functional issues spotted; this keeps fee estimation aligned with the additional minted or spent values.
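Conceptually, the extra parameter folds the spark-name payload size into the fee math, roughly as in this sketch (EstimateSparkFee and its call site are illustrative assumptions, not the actual wallet code):

// The spark name rides in vExtraPayload, so the serialized transaction is
// larger than a plain spend; include that extra size when computing the fee.
CAmount EstimateSparkFee(std::size_t baseTxSize, std::size_t additionalTxSize, const CFeeRate &feeRate)
{
    return feeRate.GetFee(baseTxSize + additionalTxSize);
}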
1728-1729
: Extended SelectSparkCoins signature with an extra size parameter. Providing additionalTxSize is a beneficial approach to refining fee calculations.
src/qt/walletmodel.cpp (3)
1579-1589
: Validate dev fund address correctness for Spark Name transactions. This method relies on consensusParams.stage3DevelopmentFundAddress for Spark name fees. Ensure this address is valid for all networks (main, test, regtest) and handle cases where the address might be absent or incorrect.
1590-1601
: Straightforward address lookup looks good. The getSparkNameAddress method correctly locks the chain and returns the Spark address, or an empty string if not found. No immediate issues detected.
1602-1656
: Transaction preparation logic appears consistent. The method parallels other transaction flows and properly handles insufficient funds, runtime errors, and fee checks. Its structure and exception handling look fine and match other wallet transaction patterns.
src/test/sparkname_tests.cpp (3)
16-28
: Proper resetting of Spark state in constructor and destructor. The constructor/destructor pattern ensures the global Spark state is reset after tests. This approach minimizes side effects between test runs.
40-46
: Effective initialization with minting and block generation. The Initialize() method sets up the test wallet by generating coins and blocks. This step is crucial for ensuring the tests run in a realistic environment with sufficient funds.
128-255
: Extensive coverage of SparkName flows. Tests validate scenarios like mempool conflicts, block invalidation, fee calculations, ownership proofs, etc. Overall coverage looks robust, enhancing confidence in SparkName functionality.
🧰 Tools
🪛 Cppcheck (2.10-2)
[error] 128-128: There is an unknown macro here somewhere. Configuration is required. If BOOST_FIXTURE_TEST_SUITE is a macro then please configure it.
(unknownMacro)
src/sparkname.h (2)
21-57
: Comprehensive serialization for Spark alias transaction data. The CSparkNameTxData class properly defines fields (e.g., name, address, validity blocks) with serialization. Ensure future changes to the data structure remain backward compatible if needed.
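For orientation, the serialized layout described above would look roughly like this sketch (field names are taken from this review's discussion; exact order, types, and the serialization macros used are assumptions):

class CSparkNameTxData
{
public:
    std::string name;                                 // the alias itself
    std::string sparkAddress;                         // address the name resolves to
    uint32_t sparkNameValidityBlocks{0};              // lifetime, capped at ~10 years
    std::string additionalInfo;                       // free-form extra data
    std::vector<unsigned char> addressOwnershipProof; // proves the sender owns the address
    uint32_t hashFailsafe{0};                         // bumped until the tx hash maps to a valid scalar

    ADD_SERIALIZE_METHODS;
    template <typename Stream, typename Operation>
    inline void SerializationOp(Stream &s, Operation ser_action) {
        READWRITE(name);
        READWRITE(sparkAddress);
        READWRITE(sparkNameValidityBlocks);
        READWRITE(additionalInfo);
        READWRITE(addressOwnershipProof);
        READWRITE(hashFailsafe);
    }
};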
64-89
: Block-index data structure looks straightforward. CSparkNameBlockIndexData holds SparkName info for indexing. The constructor's use of brace initialization is clear. Serialization is standard. No immediate concerns.
src/qt/sendcoinsentry.cpp
Outdated
if (ui->payTo->text().startsWith("@") && ui->payTo->text().size() <= CSparkNameManager::maximumSparkNameLength+1) {
    ui->payTo->setValid(true);
}
else if (!(model->validateAddress(ui->payTo->text()) || model->validateSparkAddress(ui->payTo->text()) || isPcodeEntry))
🛠️ Refactor suggestion
Added support for Spark name addresses.
This modification enhances address validation to recognize Spark name addresses that start with "@" and follow the maximum length constraint. This integrates with the new Spark Names feature mentioned in the PR objectives.
Consider adding a dedicated validation function for Spark names instead of embedding this logic directly in the validate method. This would improve code maintainability and make it easier to update validation rules in the future.
-if (ui->payTo->text().startsWith("@") && ui->payTo->text().size() <= CSparkNameManager::maximumSparkNameLength+1) {
- ui->payTo->setValid(true);
-}
-else if (!(model->validateAddress(ui->payTo->text()) || model->validateSparkAddress(ui->payTo->text()) || isPcodeEntry))
+if (model->validateSparkNameAddress(ui->payTo->text()) || model->validateAddress(ui->payTo->text()) ||
+ model->validateSparkAddress(ui->payTo->text()) || isPcodeEntry) {
+ ui->payTo->setValid(true);
+}
+else
Add a new validation method to WalletModel:
bool WalletModel::validateSparkNameAddress(const QString &address) const
{
return address.startsWith("@") && address.size() <= CSparkNameManager::maximumSparkNameLength+1;
}
src/qt/bitcoinaddressvalidator.cpp
Outdated
// check for spark name
if (address[0] == '@' && address.size() <= CSparkNameManager::maximumSparkNameLength + 1)
    return true;
💡 Verification agent
🧩 Analysis chain
Verify CSparkNameManager inclusion for Spark Name validation.
The code correctly identifies Spark Names by checking for the '@' prefix and verifying the name is within the maximum allowed length. However, there appears to be a dependency on CSparkNameManager::maximumSparkNameLength without an explicit include for this class.
Ensure that the CSparkNameManager class is properly included or forward-declared in this file or in one of the included headers. Without this, there could be compilation errors.
🏁 Script executed:
#!/bin/bash
# Check for CSparkNameManager inclusion in relevant files
echo "Checking for CSparkNameManager in bitcoinaddressvalidator.h/cpp..."
grep -r "class CSparkNameManager" --include="*.h" --include="*.cpp" src/
grep -r "#include.*sparkname" --include="*.h" --include="*.cpp" src/qt/
Length of output: 581
Attention: Missing explicit inclusion for CSparkNameManager may lead to compilation issues.
The Spark Name validation correctly checks that the address starts with '@' and that its length is within the allowed limit by referencing CSparkNameManager::maximumSparkNameLength. However, our repository search confirms that while CSparkNameManager is defined in src/sparkname.h, there is no explicit include for this header in src/qt/bitcoinaddressvalidator.cpp. Please ensure that either:
- src/qt/bitcoinaddressvalidator.cpp includes sparkname.h directly, or
- An already-included header reliably provides the declaration for CSparkNameManager.
This will prevent potential compilation errors due to the missing dependency.
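A minimal sketch of the fix (the relative include path is an assumption and depends on the build's include directories):

// src/qt/bitcoinaddressvalidator.cpp
// Make the dependency explicit instead of relying on a transitive include.
#include "../sparkname.h"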
src/qt/createsparknamepage.cpp
Outdated
void CreateSparkNamePage::accept()
{
    QString sparkName = ui->sparkNameEdit->text();
    QString sparkAddress = ui->sparkAddressEdit->text();
    int numberOfYears = ui->numberOfYearsEdit->value();
    QString additionalInfo = ui->additionalInfoEdit->toPlainText();
    QString strError;

    if (!model->validateSparkAddress(sparkAddress))
        QMessageBox::critical(this, tr("Error"), tr("Invalid spark address"));
    else if (!model->validateSparkNameData(sparkName, sparkAddress, additionalInfo, strError))
        QMessageBox::critical(this, tr("Error"), tr("Error details: ") + strError);
    else {
        if (CreateSparkNameTransaction(sparkName.toStdString(), sparkAddress.toStdString(), numberOfYears, additionalInfo.toStdString()))
            QDialog::accept();
    }
}
🛠️ Refactor suggestion
Handle validation failures more robustly.
When validation fails, consider returning immediately after showing the error message instead of continuing through the if/else chain. Early returns make the control flow clearer and ensure the user sees only one relevant error message at a time.
- if (!model->validateSparkAddress(sparkAddress))
- QMessageBox::critical(this, tr("Error"), tr("Invalid spark address"));
- else if (!model->validateSparkNameData(...)) {
- ...
- } else {
- ...
- }
+ if (!model->validateSparkAddress(sparkAddress)) {
+ QMessageBox::critical(this, tr("Error"), tr("Invalid spark address"));
+ return;
+ }
+ if (!model->validateSparkNameData(...)) {
+ ...
+ return;
+ }
+ // Otherwise, proceed...
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
void CreateSparkNamePage::accept()
{
    QString sparkName = ui->sparkNameEdit->text();
    QString sparkAddress = ui->sparkAddressEdit->text();
    int numberOfYears = ui->numberOfYearsEdit->value();
    QString additionalInfo = ui->additionalInfoEdit->toPlainText();
    QString strError;

    if (!model->validateSparkAddress(sparkAddress)) {
        QMessageBox::critical(this, tr("Error"), tr("Invalid spark address"));
        return;
    }
    if (!model->validateSparkNameData(sparkName, sparkAddress, additionalInfo, strError)) {
        QMessageBox::critical(this, tr("Error"), tr("Error details: ") + strError);
        return;
    }
    // Otherwise, proceed with creating the spark name transaction
    if (CreateSparkNameTransaction(sparkName.toStdString(), sparkAddress.toStdString(), numberOfYears, additionalInfo.toStdString()))
        QDialog::accept();
}
src/qt/createsparknamepage.h
Outdated
Ui::CreateSparkNamePage *ui;
WalletModel *model;

bool CreateSparkNameTransaction(const std::string &name, const std::string &address, int numberOfYears, const std::string &additionalInfo);
🛠️ Refactor suggestion
Separate transaction logic from the UI layer.
Creating transactions within this dialog can complicate maintainability and testing. Consider placing CreateSparkNameTransaction() in the wallet/model layer and exposing a simpler interface to the UI.
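A sketch of that separation, assuming a hypothetical WalletModel::createSparkName method (the name, signature, and error-reporting style are illustrative, not the actual API):

// walletmodel.h (hypothetical addition): all transaction logic lives here.
bool createSparkName(const std::string &name, const std::string &address,
                     int numberOfYears, const std::string &additionalInfo,
                     std::string &strError);

// createsparknamepage.cpp: the dialog only forwards user input and shows errors.
void CreateSparkNamePage::accept()
{
    std::string strError;
    if (model->createSparkName(ui->sparkNameEdit->text().toStdString(),
                               ui->sparkAddressEdit->text().toStdString(),
                               ui->numberOfYearsEdit->value(),
                               ui->additionalInfoEdit->toPlainText().toStdString(),
                               strError))
        QDialog::accept();
    else
        QMessageBox::critical(this, tr("Error"), QString::fromStdString(strError));
}

This keeps the dialog free of wallet-locking details and makes the transaction path testable without Qt.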
src/sparkname.cpp
Outdated
if (nHeight < consensusParams.nSparkNamesStartBlock)
    return state.DoS(100, error("CheckSparkNameTx: spark names are not allowed before block %d", consensusParams.nSparkStartBlock));
Fix the mismatch in the block check error message.
The code checks nSparkNamesStartBlock but logs nSparkStartBlock. This mismatch can lead to confusion and make debugging difficult.
A possible fix:
- return state.DoS(100, error("CheckSparkNameTx: spark names are not allowed before block %d", consensusParams.nSparkStartBlock));
+ return state.DoS(100, error("CheckSparkNameTx: spark names are not allowed before block %d", consensusParams.nSparkNamesStartBlock));
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (nHeight < consensusParams.nSparkNamesStartBlock)
    return state.DoS(100, error("CheckSparkNameTx: spark names are not allowed before block %d", consensusParams.nSparkNamesStartBlock));
src/sparkname.cpp
Outdated
for (uint32_t n=0; ; n++) {
    sparkNameData.addressOwnershipProof.clear();
    sparkNameData.hashFailsafe = n;

    CMutableTransaction txCopy(txSparkSpend);
    CDataStream serializedSparkNameData(SER_NETWORK, PROTOCOL_VERSION);
    serializedSparkNameData << sparkNameData;
    txCopy.vExtraPayload.insert(txCopy.vExtraPayload.end(), serializedSparkNameData.begin(), serializedSparkNameData.end());

    CHashWriter ss(SER_GETHASH, PROTOCOL_VERSION);
    ss << txCopy;

    spark::Scalar m;
    try {
        m.SetHex(ss.GetHash().ToString());
    }
    catch (const std::exception &) {
        continue; // increase hashFailsafe and try again
    }

    spark::Address sparkAddress(spark::Params::get_default());
    spark::OwnershipProof ownershipProof;

    sparkAddress.decode(sparkNameData.sparkAddress);
    sparkAddress.prove_own(m, spendKey, incomingViewKey, ownershipProof);

    CDataStream ownershipProofStream(SER_NETWORK, PROTOCOL_VERSION);
    ownershipProofStream << ownershipProof;

    sparkNameData.addressOwnershipProof.assign(ownershipProofStream.begin(), ownershipProofStream.end());

    CDataStream sparkNameDataStream(SER_NETWORK, PROTOCOL_VERSION);
    sparkNameDataStream << sparkNameData;

    txSparkSpend.vExtraPayload.insert(txSparkSpend.vExtraPayload.end(), sparkNameDataStream.begin(), sparkNameDataStream.end());

    break;
}
}
🛠️ Refactor suggestion
Add a safeguard against an infinite loop in hash failover logic.
The for (uint32_t n=0; ; n++) loop can, in theory, continue indefinitely if the resulting hash keeps failing range checks. Though highly unlikely, it is safer to add an upper bound or fallback, especially if there's a possibility of invalid or adversarial input.
for (uint32_t n=0; n < 100000; n++) {
...
}
// Optionally handle the scenario if we never break out after attempts
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
for (uint32_t n=0; n < 100000; n++) {
    sparkNameData.addressOwnershipProof.clear();
    sparkNameData.hashFailsafe = n;

    CMutableTransaction txCopy(txSparkSpend);
    CDataStream serializedSparkNameData(SER_NETWORK, PROTOCOL_VERSION);
    serializedSparkNameData << sparkNameData;
    txCopy.vExtraPayload.insert(txCopy.vExtraPayload.end(), serializedSparkNameData.begin(), serializedSparkNameData.end());

    CHashWriter ss(SER_GETHASH, PROTOCOL_VERSION);
    ss << txCopy;

    spark::Scalar m;
    try {
        m.SetHex(ss.GetHash().ToString());
    }
    catch (const std::exception &) {
        continue; // increase hashFailsafe and try again
    }

    spark::Address sparkAddress(spark::Params::get_default());
    spark::OwnershipProof ownershipProof;

    sparkAddress.decode(sparkNameData.sparkAddress);
    sparkAddress.prove_own(m, spendKey, incomingViewKey, ownershipProof);

    CDataStream ownershipProofStream(SER_NETWORK, PROTOCOL_VERSION);
    ownershipProofStream << ownershipProof;

    sparkNameData.addressOwnershipProof.assign(ownershipProofStream.begin(), ownershipProofStream.end());

    CDataStream sparkNameDataStream(SER_NETWORK, PROTOCOL_VERSION);
    sparkNameDataStream << sparkNameData;

    txSparkSpend.vExtraPayload.insert(txSparkSpend.vExtraPayload.end(), sparkNameDataStream.begin(), sparkNameDataStream.end());

    break;
}
// Optionally handle the scenario if we never break out after attempts
}
Rebase includes:
* UI improvements (#1475)
* Lelantus UI stripped
* More improvements
* Fixed UI hang issue during batch verification
* QT hanging improvements
* QT hanging fix
* Review commits applied
* coderabbit comments applied
* more coderabbitai comments resolved
---------
Co-authored-by: firstcryptoman <firstcryptoman@gmail.com>
* Spark names (#1532)
* Initial spark name architecture
* Spark address ownership proofs implemented.
* Missing files added
* Check the ownership proof for spark names: initial implementation
* Fixes to the core part of spark names
* Added additional field (core)
* Consensus parameters for spark names
* Fixed mempool bug
* Fixes in spark name conflict resolution
* RPCs for spark names
* Additional API for spark names tx creation
* Changed way of checking spark name tx
* Wallet API for spark name transaction creation
* API changes for spark name tx creation
* Added registersparkname RPC call
* Spark activation check for RPC
* Make spark names case-insensitive
* Spark name RPC fixes
* Faster lookup for spark name by spark address
* Fixes for spark name/address lookup
* Improvements for duplicated address detection
* Fixes for spark name state
* Block index entries for spark names
* Make dot (.) a legit symbol in spark name
* Spark name block number for testnet
* Fixed restoration of old spark name data if block is disconnected
* API for quick check of spark name transaction validity before the creation
* added isAddressMine function
* Check if the address belongs to the wallet before creating spark name transaction
* Fixed fee calculation for spark name
* Fix for spark names RPC
* Added ability to spend to spark names in "sparkspend" RPC
* UI fixes
* Additional validations
* Fix for crash with spend to unregistered name
* Fixed fee value when registering spark name for more than one year
* Spark name UI improvements
* UI modifications for sending to spark name
* Address book fixes related to spark names
* Fixed period of validity when creating spark name through GUI
* Extended spark name error info for RPC
* Fixed crash on non-HD wallet
* Request wallet password for spark name creation
* Fixed fee calculation for the spark name tx
* Fixed detection of previously used spark address for a spark name
* Unit test for spark names
* Additional unit tests
* Fixes #1533
* getsparknamedata RPC now returns JSON object
* Added "My own spark names" to the dropdown list in address book
* Added an option of displaying only own spark names for RPC. Closes #1535
* Set HF block for spark names
* Fixed a check for spark name block start
* Added tests for correctly respecting HF block number
* Check if we're over HF before spark name transaction creation
* new rpc for spark name (#1552)
* Fixed spark name tests
* Changed HF date
* Change of HF block number
---------
Co-authored-by: levonpetrosyan93 <petrosyan.levon93@gmail.com>
Co-authored-by: levoncrypto <levoncrypto1994@gmail.com>
Co-authored-by: levoncrypto <95240473+levoncrypto@users.noreply.github.com>
Co-authored-by: levonpetrosyan93 <45027856+levonpetrosyan93@users.noreply.github.com>
* Export View Keys (#1543)
* Add an RPC command to export the Spark view key.
* Show Spark View Key in Qt.
* Sigma pool closed, Extra payload extended (#1477)
* Change of emission rules
* Fixes for testnet
* Cleaning up code and tests
* Workaround for current devnet bugs
* Workaround for testnet
* Devnet parameter tweak
* Sigma pool closed
* Extra payload size limit increased
* Changed HF block for testnet
* Initial spark name architecture
* Spark address ownership proofs implemented.
* Missing files added
* Check the ownership proof for spark names: initial implementation
* Fixes to the core part of spark names
* Added additional field (core)
* Consensus parameters for spark names
* Fixed mempool bug
* Fixes in spark name conflict resolution
* RPCs for spark names
* Additional API for spark names tx creation
* Changed way of checking spark name tx
* Wallet API for spark name transaction creation
* API changes for spark name tx creation
* Added registersparkname RPC call
* Spark activation check for RPC
* Make spark names case-insensitive
* Spark name RPC fixes
* Faster lookup for spark name by spark address
* Fixes for spark name/address lookup
* Improvements for duplicated address detection
* Fixes for spark name state
* Block index entries for spark names
* Make dot (.) a legit symbol in spark name
* Spark name block number for testnet
* Fixed restoration of old spark name data if block is disconnected
* API for quick check of spark name transaction validity before the creation
* added isAddressMine function
* Check if the address belongs to the wallet before creating spark name transaction
* Fixed fee calculation for spark name
* Fix for spark names RPC
* Added ability to spend to spark names in "sparkspend" RPC
* UI fixes
* Additional validations
* Fix for crash with spend to unregistered name
* Fixed fee value when registering spark name for more than one year
* Spark name UI improvements
* UI modifications for sending to spark name
* Address book fixes related to spark names
* Fixed period of validity when creating spark name through GUI
* Extended spark name error info for RPC
* Fixed crash on non-HD wallet
* Request wallet password for spark name creation
* Fixed fee calculation for the spark name tx
* Fixed detection of previously used spark address for a spark name
* Unit test for spark names
* Additional unit tests
* Fixes #1533
* Testnet HF block set
* Mainnet HF block set
---------
Co-authored-by: Peter Shugalev <peter@shugalev.com>
Co-authored-by: levoncrypto <levoncrypto1994@gmail.com>
* Build fix (#1553)
* Build fix
* coderabbitai comment resolved
* Duplicated rpc removed
* Bump version to v0.14.14.1 Spark Names (#1550)
* secp256k1: CMake build system added
* cmake: add cmake folder
* bench: Add initial cmake support
* Add initial main CMakeLists.txt
* doc: add initial cmake support
* univalue: add initial cmake support
* zmq: add initial cmake support
* crypto: add initial cmake support
* wallet: add initial cmake support
* src: initial add of src/CMakeLists.txt
* toolchain.cmake.in: Adding toolchain.cmake.in support
* crypto: add support for CMake function check.
* bitcoin-cli: add CMake compilation.
* firo-tx: add CMake compilation. Improve miscellaneous CMake files.
* firod: add CMake compilation support.
* Improved misc CMake compilation support.
* Add bitcoin-config.h generation support.
* gmp: fix gmp link on OS X
* build: generate Linux installable package.
* CMake: add tests (test_bitcoin) compilation support.
* Qt: CMake GUI compilation support (Ubuntu)
* src/qt: Add macOS support for GUI with CMake compilation.
* depends: fix gmp compilation issue with mingw
* build: Add MingW support for CMake build system
* src: add backtrace compilation support macOS
* src: add backtrace compilation support Linux and MinGW-w64
* CMake: apply CodeRabbitAI suggestions.
* CMake: Add CI tasks for CMake build.
* Remove static from functions to fix debug linking. In release builds, these functions are inlined, avoiding linkage issues. Debug builds (without inlining) caused undefined references in dependent libraries due to static visibility. Removing static resolves this while maintaining intended behavior in optimized builds.
* Add removed testcases and make BUILD_GUI=ON default option
* Added documentation to readme.md and fixed a lelantus test issue
* Fixed documentation, set ENABLE_WALLET=ON by default. Remove outdated old sigma testcase
* Rebase to Wed Apr 23 11:39:34 AM BST 2025
---------
Co-authored-by: levonpetrosyan93 <45027856+levonpetrosyan93@users.noreply.github.com>
Co-authored-by: firstcryptoman <firstcryptoman@gmail.com>
Co-authored-by: psolstice <peter@shugalev.com>
Co-authored-by: levonpetrosyan93 <petrosyan.levon93@gmail.com>
Co-authored-by: levoncrypto <levoncrypto1994@gmail.com>
Co-authored-by: levoncrypto <95240473+levoncrypto@users.noreply.github.com>
Co-authored-by: cassandras-lies <203535133+cassandras-lies@users.noreply.github.com>
Co-authored-by: justanwar <42809091+justanwar@users.noreply.github.com>
Tested on regtest!
We need to set the new HF block number before merging.