Disclaimer: I’ve recently become a Snowflake employee, but this post reflects my own personal opinions and in no way represents Snowflake. I certainly do not speak for the company.
I am just a few weeks into learning about Snowflake, and I’m drinking the company kool-aid through a firehose. I thought I’d share a few early impressions coming from a traditional RDBMS background. I think early opinions are valuable, as it is harder to see the challenges and standout features when you’ve been working with a technology for a while.
Side note here: I’m also trying to find the right line to walk when talking about the product of a company I actually work for. Please share your thoughts in the comments if you think I’m missing the mark.
Workloads
While I’ve worked with some very large data warehouses spanning many servers, and spent a bit of time with Db2 features like BLU and DPF (or EEE back in the day), I have spent more of my career working with OLTP and hybrid databases. This means there are a few things I need to get used to. Starting from the assumption that most queries will take a second or more is a bit of a shift in mindset. It isn’t proving difficult so far, but I’m only a few weeks in.
Cloud-Native
I wrote a blog entry a while ago that outlined exactly how one of my former employers defined the term cloud native. I think I’ll need to update that definition for myself as I get further into the technical details of Snowflake. The difference I see so far, architecturally, is that Snowflake was, almost literally, born in the cloud and uses cloud services in nearly every aspect of its implementation.
One of the things that is really interesting to me is the efficient use of object storage. Most traditional RDBMSes don’t allow object storage for their data. They might allow it for backup images or for data that is being exported or imported, but not for the table space files for data actually stored within the database. There are a few reasons that they don’t, beyond the fact that they made most of their choices before cloud object storage was an option.
The first reason most RDBMSes don’t use object storage for data is that files in cloud object storage are generally immutable – meaning they cannot be changed in place once they are written. This seems antithetical to an RDBMS methodology where inserts and updates are part of the core functioning. There are a couple of ways Snowflake gets around this. The files in cloud object storage are actually relatively small – the target size is about 16 MB (compressed), though it can vary somewhat. Snowflake calls these micropartitions. I like to think of them as giant pages. If you think about how databases manage storage on pages, it is really somewhat like working with immutable files. In IBM’s Db2, the minimum unit of storage that can be read or written is the page (or the extent, depending on the context), and when a change is made to a page, it is essentially read as a whole and written as a whole – though the write often replaces the page in place. Interesting things can happen with pages during this process, like page splits or the way MySQL can merge pages. I’m sure this rough analogy will break down as I learn more, but it helps me conceptualize things for now.

When an update comes in, Snowflake simply replaces the micropartition containing the row with a different micropartition. While this could be problematic depending on your update patterns, it allows some interesting approaches to high availability and time travel queries.
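To make the micropartition idea a bit more concrete, here’s a rough sketch in Snowflake SQL. The SALES table and its columns are made up for illustration; the point is that a plain UPDATE is all the user writes, while the file-level replacement happens underneath, and that micropartition-level metadata is something you can actually inspect.

```sql
-- Hypothetical SALES table. Snowflake never modifies a micropartition in place:
-- this UPDATE causes the micropartitions containing the matching rows to be
-- rewritten as new files, with the old versions retained for time travel.
UPDATE sales
   SET ship_status = 'SHIPPED'
 WHERE order_id = 12345;

-- Micropartition metadata is visible: this returns JSON that includes the
-- total micropartition count and how well the table clusters on the column.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(order_id)');
```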
The second thing about using cloud object storage for database data that may seem problematic is speed. Generally, the speed at which we can read data into memory is pretty critical to how quickly an RDBMS can satisfy queries. I’m fairly sure that getting a 10 ms query response time with Snowflake would be a pipe dream. But at the same time, remember that we’re talking about analytics workloads, where we expect to scan and process a very large set of data to calculate or find an answer. It is interesting to me how much the approach has in common with something like Spark, in that we’re often talking about a number of servers processing data from a large number of micropartitions. I suspect that, much like IBM Db2’s BLU, the workloads that see the greatest performance improvement are those that return only a subset of columns and those using aggregation functions.
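As a hedged illustration of the kind of query I mean, again using a hypothetical SALES table: only two columns need to be read from storage, and per-micropartition min/max metadata lets Snowflake skip files whose date range falls entirely outside the filter before any scanning happens.

```sql
-- Columnar storage means only region and amount are read from storage, and
-- micropartition metadata (min/max per column) allows pruning on sale_date.
SELECT region,
       SUM(amount) AS total_sales
  FROM sales
 WHERE sale_date >= '2023-01-01'
 GROUP BY region;
```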
Recovery, High Availability, and Disaster Recovery
This novel (to me!) approach to storage has interesting implications in the areas of recovery and time travel querying. The other RDBMSes I’ve worked with enable time travel querying (or even snapshot isolation) by either tracking changes in a separate table or by combing through transaction logs. But think of this in terms of object storage. Object storage generally offers options for retaining old copies of files for versioning. So now, if I want to see what the results of a query would have been yesterday, all I have to do is query the micropartitions that made up the table yesterday instead of the ones that make it up today. This relies on some pretty intense metadata to understand which micropartitions make up a table, which S3 files represent former micropartitions, and how to combine everything together to provide ACID compliance. The DBA in me wonders what the cost of that looks like, but in the vast majority of cases, Snowflake doesn’t even charge customers for work done at this metadata layer.
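For the curious, this is roughly what that looks like in Snowflake SQL: time travel is exposed through AT and BEFORE clauses. The ORDERS table is a placeholder, and the query ID would come from the query history.

```sql
-- The table as it looked 24 hours ago (OFFSET is in seconds, relative to now)
SELECT * FROM orders AT(OFFSET => -60*60*24);

-- The table as of a specific timestamp
SELECT * FROM orders AT(TIMESTAMP => '2024-01-15 08:00:00 -0500'::TIMESTAMP_TZ);

-- The table as it was immediately before a specific statement ran
SELECT * FROM orders BEFORE(STATEMENT => '<query_id>');
```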
Expand this idea to think about recovering the database. Let’s say I keep old copies of these micropartition files, even the obsolete ones, around for 7 days. I now have the ability to query or restore to any point in the last 7 days. And this ability is not limited to the database as a whole; it can be done on a table-by-table basis. The metadata kept lets me specify a time interval, a specific time, or even a specific statement – likely an update, insert, or delete – and run a query as of just before or after it. Just the ability to query a table as of a certain point in time might save me from what would have required a restore with rollforward in a more traditional database environment.
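A minimal sketch of the recovery side, again with a hypothetical ORDERS table and a 7-day retention window:

```sql
-- Keep time travel history for this table for 7 days
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 7;

-- A dropped table can be brought back as long as it is within the retention window
UNDROP TABLE orders;

-- Or "restore" to a point in time by cloning the table as of 24 hours ago;
-- the clone is zero-copy and simply references the micropartitions that
-- existed at that moment, with no restore-and-rollforward cycle.
CREATE TABLE orders_yesterday CLONE orders AT(OFFSET => -60*60*24);
```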
With all that in place, and AWS’s guarantee of ELEVEN nines (99.999999999%) of durability for S3 objects, why would I even need a backup? The DBA in me just cringes at the idea of not having one. But with that kind of durability, maybe a DR copy of my data is enough, and I don’t even need to think about backups. I’ve worked with clients who, I discovered, had hundreds of copies of their data because of PTSD from some past recovery issue, and I have mostly fallen on the side of keeping extra copies around for the best chance of recovery. The cloud providers keep their object storage data in three availability zones to achieve high availability and high durability.
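For the “DR copy instead of backups” idea, one possible sketch is Snowflake’s database replication; the database and account identifiers below are placeholders, and this is just one of the available approaches.

```sql
-- On the primary account: allow this database to be replicated to a DR account
ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS myorg.dr_account;

-- On the DR account: create a secondary database and refresh it periodically
CREATE DATABASE sales_db AS REPLICA OF myorg.primary_account.sales_db;
ALTER DATABASE sales_db REFRESH;
```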
I’m trying to keep this at a relatively high level, but I find myself drawing diagrams in my head to talk about storage and some of the differences from more traditional RDBMS methodologies. I’ll definitely be writing an article going into more detail on this.
Ease of Use
I’m not going to say that Snowflake has a magic wand that can solve issues with pesky things like the speed of light, but they do take measures to push complexity below where customers have to be aware of it. Problems like poor data models and inappropriate clustering choices for tables still happen, but many of the tweaks to be made with Snowflake are at these logical levels rather than intense parameter tuning, hunting for ideal indexing, and many of the other tasks we perform to get the best out of a database management system. To me, this does not mean you no longer need DBAs, but that their skill set shifts to a different set of tasks focused on optimizing workloads and costs – a skill set that is still very intertwined with traditional DBA skills.
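As one concrete example of what tuning at the logical level tends to mean (object names below are hypothetical): clustering is declared on the table itself, and capacity is adjusted by resizing a virtual warehouse rather than through parameter or buffer pool tuning.

```sql
-- Declare a clustering key so related rows land in the same micropartitions;
-- Snowflake's automatic clustering maintains this in the background.
ALTER TABLE orders CLUSTER BY (order_date, region);

-- Much of the capacity tuning is simply picking a bigger (or smaller) warehouse
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'LARGE';
```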
Summary
I’m clearly channeling some of the marketing in all of this, but I’m also getting an intense behind-the-scenes look at things, and I am really liking what I’m seeing. I can’t wait to write a more detailed article on the storage methodology and how it is similar to and differs from the other RDBMSes I am familiar with!
Really enjoyed going through the details you have provided. Thanks for the great insight.
Will wait for your storage methodology article.
Regards Deepak
Hi Ember,
thanks for the insights. I’m looking forward to reading more. 😉
Cheers
Roland
Now that’s interesting. My company is investigating a close technical partnership with Snowflake, so you might be able to imagine how curious I am to learn such intricacies; after all, they will enable me to support customers and colleagues better.
Thanks a million, dear Ember!
FYI – Db2 Warehouse on Cloud (WHoC) now (as of July 2023) allows column-organized tables to be stored in object storage: https://www.ibm.com/blog/announcement/introducing-the-next-generation-of-db2-warehouse-built-for-always-on-mission-critical-workloads/
In just a few weeks, it sounds like you’re getting a pretty good grasp of the product. You will be a go-to expert in no time.
PS. On a whim, I bought a few shares of SNOW stock a while ago. It hasn’t done that well. But now that you’re on board, I have much more confidence.