If you have never analyzed your datasets to determine whether they would benefit from partitioning, by all means do so; you now have the skill set to ask those basic questions. The recommended file size depends on the pool type: between 100 MB and 10 GB for a SQL pool, and between 256 MB and 100 GB for a Spark pool. Both the number of files and their size affect performance, so finding the best ratio for a given context requires testing and tuning. Over time, the amount of data in one partition might grow much larger than in the others, which means queries targeting that partition would run more slowly than the rest. Perhaps most brainjammer scenarios uploaded over the past few months were of a single type. If so, that partition would be larger than the others, and you might want to find a new way to partition, perhaps on session datetime.
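As a minimal sketch of what partitioning on session datetime could look like, the following dedicated SQL pool DDL uses a hypothetical [dbo].[READING] table; the table name, column names, and quarterly boundary values are illustrative assumptions, not the exact brainjammer schema.

-- Hypothetical readings table partitioned by session datetime.
-- Boundary values are examples; choose them so each partition keeps
-- enough rows for good columnstore compression.
CREATE TABLE [dbo].[READING]
(
    [READING_ID]       INT           NOT NULL,
    [SESSION_ID]       INT           NOT NULL,
    [SCENARIO]         NVARCHAR(50)  NOT NULL,
    [SESSION_DATETIME] DATETIME2     NOT NULL,
    [READING_VALUE]    DECIMAL(8,3)  NOT NULL
)
WITH
(
    DISTRIBUTION = HASH([READING_ID]),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION
    (
        [SESSION_DATETIME] RANGE RIGHT FOR VALUES
        ('2022-01-01', '2022-04-01', '2022-07-01', '2022-10-01')
    )
);

RANGE RIGHT with quarterly boundaries is only one option; if a single scenario type dominates, a datetime-based scheme like this spreads the rows more evenly than partitioning on scenario.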
When the number of rows in a partition exceeds 1,000,000, compression improves and therefore performance increases. You need to keep an eye on that number and make sure the row count remains optimal on all partitions. You can monitor shuffling on a SQL pool by generating an execution plan, which shows the amount of data movement required for a given query. Queries that suffer from shuffling are typically those containing JOINs on data that is not present on the node chosen to execute the query.
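As a rough sketch of both checks, continuing with the hypothetical [dbo].[READING] table from the previous example (the [dbo].[SESSION] table referenced below is likewise an assumption), the first query sums row counts per partition using the sys.dm_pdw_nodes_db_partition_stats DMV, and the EXPLAIN statement returns the distributed query plan so you can look for data movement steps.

-- Row count per partition for the hypothetical READING table; watch for
-- partitions that fall well below or far above the others.
SELECT  t.name AS table_name,
        nps.partition_number,
        SUM(nps.row_count) AS row_count
FROM    sys.tables t
JOIN    sys.pdw_table_mappings tm
          ON t.object_id = tm.object_id
JOIN    sys.pdw_nodes_tables nt
          ON tm.physical_name = nt.name
JOIN    sys.dm_pdw_nodes_db_partition_stats nps
          ON nt.object_id = nps.object_id
         AND nt.pdw_node_id = nps.pdw_node_id
         AND nt.distribution_id = nps.distribution_id
WHERE   t.name = 'READING'
  AND   nps.index_id <= 1   -- heap or clustered index only, avoids double counting
GROUP BY t.name, nps.partition_number
ORDER BY nps.partition_number;

-- EXPLAIN returns the distributed plan as XML without running the query;
-- SHUFFLE_MOVE steps indicate rows being moved between distributions
-- because the JOIN columns are not the distribution key.
EXPLAIN
SELECT  r.[SCENARIO], COUNT(*) AS reading_count
FROM    [dbo].[READING] r
JOIN    [dbo].[SESSION] s ON r.[SESSION_ID] = s.[SESSION_ID]
GROUP BY r.[SCENARIO];

If the plan is dominated by shuffle operations, revisiting the distribution key or the partitioning scheme is usually the next tuning step.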