Spark can write Parquet files with INT96 timestamps in Synapse Spark 3.5, but the configs must be applied to the same Spark session that performs the write.
Recommended settings (set before df.write.parquet(...)):
- spark.sql.parquet.outputTimestampType = INT96
- spark.sql.parquet.writeLegacyFormat = true
These ensure Spark uses INT96 as the physical timestamp type. The int96RebaseMode* settings only control calendar rebasing (CORRECTED vs LEGACY); they do not decide INT96 vs INT64.
Apply the configs via Synapse Studio → Manage → Apache Spark configurations, attach them to the Spark pool/notebook, and restart the session if needed. You can verify they’re applied with:
spark.conf.get("spark.sql.parquet.outputTimestampType")
Important notes:
- To get INT96, write plain Parquet (df.write.parquet(...)); Delta tables have their own timestamp/precision constraints.
- Verify the output using parquet-tools schema <file>.parquet; you should see int96 for the timestamp columns.
If INT64 still appears, double‑check that the column being written is actually TimestampType (not already a LongType).
Hope this helps. Please let us know if you have any questions or concerns.