Closed
Labels
bug (Something isn't working), sparksql (Related to the SparkSQL (aka Spark3) dialect)
Description
Search before asking
- I searched the issues and found no similar issues.
What Happened
I ran the lint command with the sparksql dialect and got an error stating that there is an "unparsable section" for the "partitioned by ( . . . )" clause.
Expected Behaviour
partitioned by is valid (Spark documentation), so I would expect the partitioned by clause not to cause a parsing exception.
Observed Behaviour
An error stating that the partitioned by section is unparsable, even though it should be supported.
# sqlfluff lint --dialect sparksql ./test.sql
== [test.sql] FAIL
L: 5 | P: 7 | CP01 | Keywords must be consistently lower case.
| [capitalisation.keywords]
L: 6 | P: 1 | LT02 | Expected indent of 4 spaces.
| [layout.indent]
L: 7 | P: 1 | PRS | Line 7, Position 1: Found unparsable section: 'partition
| by (activity_date_partition);'
L: 9 | P: 1 | LT12 | Files must end with a single trailing newline.
| [layout.end_of_file]
WARNING: Parsing errors found and dialect is set to 'sparksql'. Have you configured your dialect correctly?
All Finished 📜 🎉!
How to reproduce
Using a test.sql that looks like the following:
create table if not exists my_table_space.my_test_table (
test_value string,
activity_date_partition date
)
using DELTA
location 's3://some-bucket/test-data/'
partition by (activity_date_partition);
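For comparison, the Spark documentation spells this clause PARTITIONED BY (not PARTITION BY) and, in the documented grammar, places it after the USING clause and before LOCATION. A sketch of the documented form, reusing the table and bucket names from the reproduction above (an illustration of the documented syntax, not a confirmed fix for the parser):

```sql
-- Sketch per the Spark CREATE TABLE ... USING grammar;
-- note PARTITIONED BY rather than PARTITION BY, and its position before LOCATION
create table if not exists my_table_space.my_test_table (
    test_value string,
    activity_date_partition date
)
using delta
partitioned by (activity_date_partition)
location 's3://some-bucket/test-data/';
```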
Dialect
sparksql
Version
sqlfluff, version 3.0.3
Python 3.12.2
Configuration
n/a
Are you willing to work on and submit a PR to address the issue?
- Yes I am willing to submit a PR!
Code of Conduct
- I agree to follow this project's Code of Conduct