How We Reduced Our Snowflake Ingestion Costs from $53,000 a Year to Just $3,000 Using a Simple Method

Our journey began three years ago when we started integrating Snowflake into our operations.

We were managing a business-critical application that required near real-time data freshness. Initially, we used Snowpipe: events streamed from Kafka to S3 and were then loaded into Snowflake in near real time. However, after just a few days, we discovered that our ingestion process alone was costing us $4,000 a month, an unsustainable figure.

The Past

Previously, our setup was built around the ELK stack, configured for high availability with three nodes per component: Logstash, Elasticsearch, and Kafka with ZooKeeper, plus multiple Kafka proxies serving various regions.

That setup cost $45,000 in infrastructure alone, not to mention the ongoing maintenance required to keep it operational.

Compared to that complexity, Snowflake offered a simpler alternative, though its cost quickly became a concern.

At that point, we were generating 18 GB of data daily, a relatively modest volume by most standards.

Yet, the prospect of spending $4,000 a month on ingestion alone was daunting, especially considering potential data growth over time.

Embrace Honesty: Choosing the Right Tool and Overcoming Over-Engineering

We knew we needed a different approach. After much deliberation, we identified a clear path forward: moving from Kafka to Kinesis.

This move slashed our streaming tool costs by 90%. 

Kafka, while powerful, typically requires at least three brokers to ensure redundancy, which is overkill for many use cases. Kinesis, in contrast, lets you scale gradually, starting with a single shard and adding more as needed.
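For illustration, here is a minimal sketch of what starting small on Kinesis can look like with boto3. The stream name, region, and event shape are placeholders, not our production configuration.

```python
# Illustrative sketch: a single-shard Kinesis stream written to with boto3.
# Stream name, region, and event payload are placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# One provisioned shard is enough to start; more shards can be added later
# by resharding as throughput grows.
kinesis.create_stream(StreamName="app-events", ShardCount=1)
kinesis.get_waiter("stream_exists").wait(StreamName="app-events")

kinesis.put_record(
    StreamName="app-events",
    Data=json.dumps({"event": "page_view", "user_id": 42}).encode("utf-8"),
    PartitionKey="42",
)
```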

The next step was to optimize event batching. We analyzed our workload to determine the optimal file size for batching events together, keeping our required refresh rate in mind.

We found that batching events into a single file every 450 seconds, which produced files of roughly 100 MB on average, was the most efficient option for us.

The reason batching matters is that Snowpipe charges 0.06 credits per 1,000 files loaded, so the fewer files we produce, the fewer credits we consume.

Another factor is the overhead of processing many small files; the recommended file size is between 100 and 250 MB (compressed).
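As a back-of-the-envelope check on those numbers (using only the figures quoted above), here is a small Python snippet. The one-file-per-second baseline is purely illustrative of why per-file charges add up, and it ignores Snowpipe's separate compute charges.

```python
# Back-of-the-envelope check of the batching numbers quoted in this post:
# 18 GB/day, one file per 450 seconds, 0.06 credits per 1,000 files.
DAILY_DATA_GB = 18
BATCH_WINDOW_SEC = 450
CREDITS_PER_1000_FILES = 0.06  # Snowpipe's per-file overhead charge

seconds_per_day = 24 * 60 * 60

batched_files = seconds_per_day / BATCH_WINDOW_SEC           # 192 files/day
avg_file_mb = DAILY_DATA_GB * 1024 / batched_files           # ~96 MB/file
batched_credits = batched_files / 1000 * CREDITS_PER_1000_FILES

unbatched_files = seconds_per_day                             # 1 file/second
unbatched_credits = unbatched_files / 1000 * CREDITS_PER_1000_FILES

print(f"Batched: {batched_files:.0f} files/day (~{avg_file_mb:.0f} MB each), "
      f"{batched_credits:.3f} credits/day in per-file charges")
print(f"One file per second: {unbatched_files} files/day, "
      f"{unbatched_credits:.2f} credits/day in per-file charges")
```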

This process was automated through a Lambda function, which then triggered Snowpipe to load the aggregated file into a staging table in Snowflake.
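Below is a minimal sketch of that aggregation step, assuming a Lambda that receives a batch of Kinesis records and writes them to S3 as a single newline-delimited JSON file. The bucket, key prefix, and the exact trigger and batching configuration are placeholders; Snowpipe, configured with auto-ingest on that bucket, then loads the file into the staging table.

```python
# Illustrative Lambda handler: concatenates a batch of Kinesis records into
# one NDJSON object in S3. Bucket and prefix are hypothetical placeholders.
import base64
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "my-snowpipe-stage-bucket"  # hypothetical bucket name
PREFIX = "events/raw"


def handler(event, context):
    # Kinesis delivers record payloads base64-encoded.
    lines = [
        base64.b64decode(record["kinesis"]["data"]).decode("utf-8").rstrip("\n")
        for record in event["Records"]
    ]
    body = "\n".join(lines) + "\n"

    # One object per invocation; Snowpipe auto-ingest picks it up from S3.
    key = f"{PREFIX}/{int(time.time())}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return {"written": len(lines), "key": key}
```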

The final step involved flattening the merged JSON events into SQL tables, a process executed by a scheduled task in Snowflake every 10 minutes.
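As a hedged sketch of what such a scheduled task can look like (table names, columns, and connection details are illustrative, and incremental-load logic such as a stream on the staging table is omitted), created here via the Snowflake Python connector:

```python
# Illustrative only: creates a task that runs every 10 minutes and flattens
# raw VARIANT rows from a staging table into typed columns. Names and
# credentials are placeholders; real usage would add incremental logic
# and deduplication.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",  # placeholders
    warehouse="INGEST_WH", database="ANALYTICS", schema="RAW",
)

FLATTEN_TASK = """
CREATE OR REPLACE TASK flatten_events_task
  WAREHOUSE = INGEST_WH
  SCHEDULE  = '10 MINUTE'
AS
  INSERT INTO analytics.events (event_time, event_name, user_id)
  SELECT
    raw:timestamp::TIMESTAMP_NTZ,
    raw:event::STRING,
    raw:user_id::NUMBER
  FROM raw.events_staging
"""

with conn.cursor() as cur:
    cur.execute(FLATTEN_TASK)
    # Tasks are created suspended; they must be resumed to start running.
    cur.execute("ALTER TASK flatten_events_task RESUME")
```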

Conclusion

By strategically batching events and setting our refresh rate at 10 minutes, we dramatically reduced our Snowflake ingestion costs from $53,000 annually to approximately $3,000.

This experience taught us the importance of thorough investigation and the right design to manage Snowflake costs effectively. 

There are numerous strategies and optimizations to consider for managing Snowflake pricing efficiently. 

While this post focused on Snowpipe optimization, in future posts I plan to explore other methods and strategies for reducing Snowflake costs further.

Feel free to contact me at ido@yukidata.com for tips and questions regarding optimizing your Snowflake environment.
