Add unique location slugs and enforce constraints #61
Conversation
BREAKING CHANGE: This update introduces new dependencies required for upcoming feature implementations. The `slug` package enables URL-friendly string generation, while `nanoid` provides unique identifiers. Type definitions ensure proper TypeScript integration.
Ensures unique location names per user. Part of migration 0003_chilly_professor_monster. BREAKING CHANGE: Existing location data with duplicate names per user will fail migration. Requires data cleanup before applying this migration.
Generate unique slugs using nanoid and slugify. Check for existing locations with the same name and generate a unique slug if necessary. Add error handling for duplicate slugs.
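The slug strategy described above can be sketched as a standalone function. This is an illustrative sketch, not the PR's code: the real handler uses the `slug` and `nanoid` packages and checks the database, while here a simplified slugifier and an injected `exists` callback stand in for both.

```typescript
// Simplified stand-in for the `slug` package: lowercase, dashes, ASCII only.
function slugify(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Simplified stand-in for nanoid(): a short random suffix.
function randomId(): string {
  return Math.random().toString(36).slice(2, 8);
}

// Generate a slug that `exists` reports as unused, appending a random
// suffix on collision (mirrors the loop in server/api/locations.post.ts).
async function uniqueSlug(
  name: string,
  exists: (slug: string) => Promise<boolean>,
): Promise<string> {
  let candidate = slugify(name);
  while (await exists(candidate)) {
    candidate = `${slugify(name)}-${randomId()}`;
  }
  return candidate;
}

// Usage with an in-memory "database" of taken slugs:
const taken = new Set(["coffee-shop"]);
const check = async (s: string) => taken.has(s);
uniqueSlug("Coffee Shop!", check).then(s => console.log(s)); // e.g. coffee-shop-x7k2q1
```

Injecting the existence check keeps the collision loop unit-testable without a database.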
Walkthrough

This update adds a composite unique index on the combination of location name and user ID.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant DashboardPage
    participant API
    participant DB
    User->>DashboardPage: Submit new location form
    DashboardPage->>API: POST /locations (with location data)
    API->>DB: Check if location (name, userId) exists
    alt Exists
        API-->>DashboardPage: 409 Conflict (duplicate name)
    else Not exists
        API->>API: Generate slug from name
        loop Until slug is unique
            API->>DB: Check if slug exists
            alt Slug exists
                API->>API: Append random suffix to slug
            end
        end
        API->>DB: Insert new location (with unique slug)
        alt Insert success
            API-->>DashboardPage: 200 OK (location created)
        else Unique constraint error
            API-->>DashboardPage: 409 Conflict (duplicate slug)
        end
    end
```
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it is a critical failure.

🔧 ESLint failed with the same error on pages/dashboard/add.vue, server/api/locations.post.ts, and lib/db/schema/location.ts:

```
ESLint: 9.31.0
Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/.nuxt/eslint.config.mjs' imported from /eslint.config.mjs
```
Actionable comments posted: 3
🧹 Nitpick comments (2)
stores/auth.ts (2)
28-29: Remove superfluous semicolon after function declaration

The trailing semicolon after a `function` declaration is unnecessary and may violate ESLint rules such as `no-extra-semi`.

```diff
- };
+ }
```
16-27: Add basic error handling to prevent unhandled promise rejections

`authClient.signIn.social` can fail (e.g., network issues, CSRF mismatch). An unhandled rejection will bubble up to the global handler and potentially break the SPA.

```diff
 async function signIn() {
   const { csrf } = useCsrf();
   const headers = new Headers();
   headers.append("csrf-token", csrf);
-  await authClient.signIn.social({
-    provider: "github",
-    callbackURL: "/dashboard",
-    errorCallbackURL: "/error",
-    fetchOptions: {
-      headers,
-    },
-  });
+  try {
+    await authClient.signIn.social({
+      provider: "github",
+      callbackURL: "/dashboard",
+      errorCallbackURL: "/error",
+      fetchOptions: {
+        headers,
+      },
+    });
+  }
+  catch (err) {
+    // TODO: surface a user-friendly error (toast/snackbar)
+    console.error("Sign-in failed", err);
+  }
 }
```

Please confirm whether your global error handler already captures these failures; if so, feel free to disregard.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)

- pnpm-lock.yaml (excluded by `!**/pnpm-lock.yaml`)

📒 Files selected for processing (8)

- lib/db/migrations/0003_chilly_professor_monster.sql (1 hunks)
- lib/db/migrations/meta/0003_snapshot.json (1 hunks)
- lib/db/migrations/meta/_journal.json (1 hunks)
- lib/db/schema/location.ts (3 hunks)
- package.json (2 hunks)
- pages/dashboard/add.vue (3 hunks)
- server/api/locations.post.ts (3 hunks)
- stores/auth.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
server/api/locations.post.ts (1)
- lib/db/schema/location.ts (3)
  - `InsertLocation` (20-31)
  - `InsertLocation` (33-33)
  - `location` (8-18)
🔇 Additional comments (15)
stores/auth.ts (1)

15-15: No-op formatting change acknowledged

The extra blank line is purely cosmetic and does not impact behaviour. Nothing further to do.

package.json (1)

31-31: All new dependencies are up to date and have no known vulnerabilities.

- nanoid@^5.1.5 → latest is 5.1.5
- slug@^11.0.0 → latest is 11.0.0
- @types/slug@^5.0.9 → latest is 5.0.9

No security advisories found at moderate severity or above. No further action required.
lib/db/migrations/meta/_journal.json (1)

26-32: LGTM! Migration journal entry follows proper conventions.

The new migration entry is properly structured and follows the established pattern for tracking database schema changes.
lib/db/migrations/0003_chilly_professor_monster.sql (1)

1-1: Add data cleanup step before creating unique index

Before applying this migration, verify and clean up any existing duplicate `(name, user_id)` pairs in the `location` table, or the `CREATE UNIQUE INDEX` will fail:

- In lib/db/migrations/0003_chilly_professor_monster.sql (line 1), prepend a cleanup block to identify and remove duplicates.
- Example SQL you can run (or embed in the migration):

```sql
-- 1. Find any duplicates
SELECT name, user_id, COUNT(*) AS cnt
FROM location
GROUP BY name, user_id
HAVING cnt > 1;

-- 2. Delete duplicates, keeping the lowest-id row
WITH ranked AS (
  SELECT id,
         ROW_NUMBER() OVER (PARTITION BY name, user_id ORDER BY id) AS rn
  FROM location
)
DELETE FROM location
WHERE id IN (SELECT id FROM ranked WHERE rn > 1);
```

- After cleanup, create the unique index as intended:

```sql
CREATE UNIQUE INDEX location_name_userId_unique ON location (name, user_id);
```

Please run the above cleanup against your current data or embed it in the migration and confirm that no duplicates remain before merging.
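The keep-the-lowest-id semantics of the cleanup SQL above can be sanity-checked without a database. A sketch (the `Row` shape is a hypothetical simplification of the location table):

```typescript
// Minimal stand-in for a location row; only the columns the cleanup cares about.
interface Row { id: number; name: string; userId: string; }

// Mirrors ROW_NUMBER() OVER (PARTITION BY name, user_id ORDER BY id):
// within each (name, userId) group, keep the row with the lowest id.
function dedupe(rows: Row[]): Row[] {
  const keep = new Map<string, Row>();
  for (const row of rows) {
    const key = `${row.name}\u0000${row.userId}`; // composite partition key
    const current = keep.get(key);
    if (!current || row.id < current.id) keep.set(key, row);
  }
  return [...keep.values()].sort((a, b) => a.id - b.id);
}

const rows: Row[] = [
  { id: 3, name: "Home", userId: "u1" },
  { id: 1, name: "Home", userId: "u1" }, // duplicate of id 3; lower id wins
  { id: 2, name: "Home", userId: "u2" }, // same name, different user: kept
];

console.log(dedupe(rows).map(r => r.id)); // surviving ids, lowest per group
```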
pages/dashboard/add.vue (3)

9-9: LGTM! Improved CSRF fetch usage.

The change from `useCsrfFetch` to `$csrfFetch` from `useNuxtApp()` follows better Nuxt 3 patterns and the API call structure remains correct. Also applies to: 21-24.

36-38: Good improvement with finally block.

Moving the loading state reset to a `finally` block ensures it always executes regardless of success or error, preventing the UI from getting stuck in a loading state.

7-7: Schema import verified: `InsertLocation` matches the backend export.

The export in lib/db/schema/location.ts defines `InsertLocation`, and the import in pages/dashboard/add.vue correctly references it. No further changes needed; approving the code.

lib/db/schema/location.ts (3)
1-3: LGTM! Proper imports for schema constraints.

The addition of a type-only import for `z` and the `unique` function import are appropriate for implementing the composite unique constraint.

18-18: Excellent implementation of composite unique constraint.

The `unique().on(t.name, t.userId)` constraint properly enforces uniqueness at the ORM level, complementing the database migration. This ensures location names are unique per user.
20-31: Export rename complete and all references updated

I've verified that `InsertLocationSchema` no longer appears in the repository and all imports now use the new `InsertLocation` export, for example:

- pages/dashboard/add.vue
- server/api/locations.post.ts

No further changes are needed.
server/api/locations.post.ts (4)

1-10: LGTM! Well-organized imports.

The imports are properly structured and all the necessary dependencies for slug generation and database operations are included.

11-18: Authentication check is properly implemented.

The handler correctly validates user authentication before proceeding.

19-40: Excellent input validation with detailed error responses.

The validation logic properly uses the schema and provides clear error messages for debugging.

41-54: Good defensive programming with duplicate name check.

The uniqueness check at the application level complements the database constraint and provides a better user experience with a clear error message.
lib/db/migrations/meta/0003_snapshot.json (1)

396-411: Database constraints properly implement the PR requirements.

The location table correctly defines:

- Unique constraint on `slug` (lines 397-403)
- Composite unique constraint on `name` and `user_id` (lines 404-411)

These constraints ensure data integrity at the database level.
```json
"foreignKeys": {
  "location_user_id_user_id_fk": {
    "name": "location_user_id_user_id_fk",
    "tableFrom": "location",
    "tableTo": "user",
    "columnsFrom": [
      "user_id"
    ],
    "columnsTo": [
      "id"
    ],
    "onDelete": "no action",
    "onUpdate": "no action"
  }
},
```
💡 Verification agent

🧩 Analysis chain

Consider using cascade delete for user-location relationship.

The foreign key constraint uses `"onDelete": "no action"`, which means locations will be orphaned if a user is deleted. This differs from other user-related tables (account, session), which use cascade delete.

Consider whether locations should be deleted when a user is deleted, or if orphaned locations are intentional for data retention purposes.

🏁 Script executed:

```bash
#!/bin/bash
# Check how other tables handle user deletion
echo "=== Checking foreign key constraints for user references ==="
rg -A 5 '"user_id".*fk"' lib/db/migrations/meta/
```

Length of output: 181
🏁 Script executed:

```bash
#!/bin/bash
# Verify how account and session tables handle user deletion in the current snapshot
echo "=== Account foreign key constraints ==="
rg -C5 '"tableFrom": "account"' lib/db/migrations/meta/0003_snapshot.json
echo "=== Session foreign key constraints ==="
rg -C5 '"tableFrom": "session"' lib/db/migrations/meta/0003_snapshot.json
```

Length of output: 900
🏁 Script executed:

```bash
#!/bin/bash
# Retrieve full foreign key definitions for account and session tables
echo "=== account_user_id_user_id_fk definition ==="
rg -C10 '"account_user_id_user_id_fk"' lib/db/migrations/meta/0003_snapshot.json
echo "=== session_user_id_user_id_fk definition ==="
rg -C10 '"session_user_id_user_id_fk"' lib/db/migrations/meta/0003_snapshot.json
```

Length of output: 1496
Align location FK with cascade delete for user removals

The `location_user_id_user_id_fk` currently specifies `"onDelete": "no action"`, which will leave orphaned location records when a user is deleted. Both `account_user_id_user_id_fk` and `session_user_id_user_id_fk` use `"onDelete": "cascade"` for the same `user_id → id` relationship.

Please update (or justify) the behavior:

- File: lib/db/migrations/meta/0003_snapshot.json (lines 413-427)

```diff
 "foreignKeys": {
   "location_user_id_user_id_fk": {
     …
-    "onDelete": "no action",
+    "onDelete": "cascade",
     "onUpdate": "no action"
   }
 },
```

If retaining orphaned locations is intentional (for auditing or data retention), add a comment or documentation to explain this exception.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```json
"foreignKeys": {
  "location_user_id_user_id_fk": {
    "name": "location_user_id_user_id_fk",
    "tableFrom": "location",
    "tableTo": "user",
    "columnsFrom": [
      "user_id"
    ],
    "columnsTo": [
      "id"
    ],
    "onDelete": "cascade",
    "onUpdate": "no action"
  }
},
```
```typescript
// Generate a slug from the names
let slug = slugify(result.data.name);
// Ensure the slug is unique
// TODO: This could be optimized to avoid multiple queries
// but for now, this is simple and works.
let existing = !!(await db.query.location.findFirst({
  where: eq(location.slug, slug),
}));

while (existing) {
  const id = nanoid();
  const idSlug = `${slug}-${id}`;
  existing = !!(await db.query.location.findFirst({
    where: eq(location.slug, idSlug),
  }));
  if (!existing) {
    slug = idSlug;
  }
}
```
🛠️ Refactor suggestion

Consider optimizing slug generation to reduce database queries.

The current implementation works but has potential performance issues:

- Multiple sequential queries in the while loop could be slow
- The `existing` variable is reassigned, which reduces readability

Consider this refactored approach for better performance and clarity:
```diff
- // Generate a slug from the names
- let slug = slugify(result.data.name);
- // Ensure the slug is unique
- // TODO: This could be optimized to avoid multiple queries
- // but for now, this is simple and works.
- let existing = !!(await db.query.location.findFirst({
-   where: eq(location.slug, slug),
- }));
-
- while (existing) {
-   const id = nanoid();
-   const idSlug = `${slug}-${id}`;
-   existing = !!(await db.query.location.findFirst({
-     where: eq(location.slug, idSlug),
-   }));
-   if (!existing) {
-     slug = idSlug;
-   }
- }
+ // Generate a slug from the name
+ const baseSlug = slugify(result.data.name);
+ let slug = baseSlug;
+
+ // Check if the base slug exists
+ const existingSlug = await db.query.location.findFirst({
+   where: eq(location.slug, slug),
+ });
+
+ if (existingSlug) {
+   // Generate a unique slug with suffix
+   let attempts = 0;
+   const maxAttempts = 10; // Prevent infinite loops
+
+   do {
+     slug = `${baseSlug}-${nanoid()}`;
+     const collision = await db.query.location.findFirst({
+       where: eq(location.slug, slug),
+     });
+     if (!collision) break;
+     attempts++;
+   } while (attempts < maxAttempts);
+
+   if (attempts >= maxAttempts) {
+     throw new Error("Unable to generate unique slug after multiple attempts");
+   }
+ }
```
Would you like me to create an issue to track this optimization, or would you prefer a different approach such as using a database sequence or handling slug generation at the database level?
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```typescript
// Generate a slug from the name
const baseSlug = slugify(result.data.name);
let slug = baseSlug;

// Check if the base slug exists
const existingSlug = await db.query.location.findFirst({
  where: eq(location.slug, slug),
});

if (existingSlug) {
  // Generate a unique slug with suffix
  let attempts = 0;
  const maxAttempts = 10; // Prevent infinite loops

  do {
    slug = `${baseSlug}-${nanoid()}`;
    const collision = await db.query.location.findFirst({
      where: eq(location.slug, slug),
    });
    if (!collision) break;
    attempts++;
  } while (attempts < maxAttempts);

  if (attempts >= maxAttempts) {
    throw new Error("Unable to generate unique slug after multiple attempts");
  }
}
```
```typescript
try {
  const [created] = await db.insert(location).values({
    ...result.data,
    userId: event.context.user.id,
    slug,
  }).returning();
  return created;
}
catch (e) {
  const error = e as DrizzleError;
  if (error.message.includes("Failed query")) {
    return sendError(event, createError({
      statusCode: 409,
      statusMessage: "Slug must be unique (the location name is used to generate the slug).",
    }));
  }
  throw error;
}
```
🛠️ Refactor suggestion
Improve error handling specificity and user messaging.
The current error handling has some issues:
- String-based error detection is fragile
- The error message mentions slug uniqueness but users don't directly control slugs
Consider these improvements:
```diff
 try {
   const [created] = await db.insert(location).values({
     ...result.data,
     userId: event.context.user.id,
     slug,
   }).returning();
   return created;
 }
 catch (e) {
   const error = e as DrizzleError;
-  if (error.message.includes("Failed query")) {
+  // Check for unique constraint violations more specifically
+  if (error.message.includes("UNIQUE constraint failed") ||
+      error.message.includes("location_slug_unique")) {
     return sendError(event, createError({
       statusCode: 409,
-      statusMessage: "Slug must be unique (the location name is used to generate the slug).",
+      statusMessage: "Unable to create location. A location with a similar name may already exist. Please try a different name.",
     }));
   }
   throw error;
 }
```
This provides clearer error messages and more robust error detection.
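The message matching suggested here can also be pulled into a small helper so it can be tested in isolation. A sketch: the matched strings follow the suggestion above, but exact error text is driver-specific, so prefer a structured error code where the driver exposes one.

```typescript
// Classifies a low-level database error message as a unique-constraint
// violation, so the route handler can map it to a 409 Conflict response.
// NOTE: matching on message text is driver-specific; use an error code
// (e.g. SQLite's SQLITE_CONSTRAINT_UNIQUE) when one is available.
function isUniqueViolation(message: string): boolean {
  return (
    message.includes("UNIQUE constraint failed") ||
    message.includes("location_slug_unique")
  );
}

console.log(isUniqueViolation("UNIQUE constraint failed: location.slug")); // true
console.log(isUniqueViolation("connection reset by peer")); // false
```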
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```typescript
try {
  const [created] = await db.insert(location).values({
    ...result.data,
    userId: event.context.user.id,
    slug,
  }).returning();
  return created;
}
catch (e) {
  const error = e as DrizzleError;
  // Check for unique constraint violations more specifically
  if (error.message.includes("UNIQUE constraint failed") ||
      error.message.includes("location_slug_unique")) {
    return sendError(event, createError({
      statusCode: 409,
      statusMessage: "Unable to create location. A location with a similar name may already exist. Please try a different name.",
    }));
  }
  throw error;
}
```
Introduce unique constraints for location names per user and implement unique slug generation using `nanoid` and `slug`. This update requires data cleanup for existing duplicates and enhances error handling for conflicts.

Fixes #24
Summary by CodeRabbit

- New Features
- Chores
- Style