What’s New in Snowflake (2025): Features You Shouldn’t Miss

By Ido

We’re only a few months into 2025, and Snowflake is already shipping some of the most impactful platform updates we’ve seen in years. From real-time orchestration to large-scale compute elasticity, these new capabilities are changing how we think about data architecture, performance optimization, and governance.

#4 shocked me – genuinely.

I’ve personally worked with each of the features below – and I don’t say this lightly: they’re already reshaping how I build on Snowflake. If you’re deep in the ecosystem, this isn’t just “nice-to-know” – it’s must-know.

Let’s break it down.

1. VARIANT Data Type Size Increased from 16 MB to 128 MB

Snowflake’s VARIANT data type has long been the go-to for handling semi-structured data like JSON, Avro, and XML. But until now, its 16 MB size limit was a real constraint – especially when working with deeply nested data or payloads from external APIs.

What changed:

The maximum size for a single VARIANT value has now been increased to 128 MB.

Why this matters:

This is an 8x boost in flexibility. You can now ingest large JSON blobs directly, without needing to break them up or preprocess them upstream.

For teams working with modern APIs, event payloads, or cloud-native logs, this means:

  • Simpler pipelines
  • Less transformation overhead
  • Fewer failed loads and less upstream splitting of oversized records
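
For example (a minimal sketch, with illustrative table, stage, and field names), you can now land oversized API responses in a raw table without pre-splitting them:

-- Raw landing table; the payload column holds the full semi-structured document.
CREATE OR REPLACE TABLE raw_events (
  event_id STRING,
  payload  VARIANT
);

-- Copy large JSON files straight from a stage into VARIANT.
-- STRIP_OUTER_ARRAY turns a top-level JSON array into one row per element.
COPY INTO raw_events (event_id, payload)
FROM (
  SELECT $1:id::STRING, $1
  FROM @my_stage/events/
)
FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);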

It’s a foundational change that quietly removes a major bottleneck.

2. Multi-Cluster Warehouse Limit Increased to 300

Until recently, Snowflake capped multi-cluster virtual warehouses at 10 clusters. For many users, that was enough. But for larger orgs with truly concurrent workloads, that ceiling could cause real friction – especially during peak traffic windows.

What changed:

You can now scale a warehouse to 300 concurrent clusters.

Why this matters:

Snowflake is now playing in a different league of horizontal scale. Whether you’re running:

  • Concurrent BI dashboards
  • ELT pipelines
  • ML training jobs
  • Customer-facing analytics apps

you now have the headroom to run them all at once, without thrashing or throttling.

Pro tip:

Set this via:

ALTER WAREHOUSE my_warehouse SET MAX_CLUSTER_COUNT = 300;

Just remember that while this increases power, it also increases responsibility – especially from a cost management perspective (more on that later).
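
If you do raise the ceiling, it's worth pairing it with the other warehouse knobs so the extra headroom doesn't quietly turn into extra spend. A hedged sketch, with an illustrative warehouse name:

ALTER WAREHOUSE my_warehouse SET
  MIN_CLUSTER_COUNT = 1          -- let the warehouse collapse back down when demand drops
  MAX_CLUSTER_COUNT = 300        -- the new upper bound
  SCALING_POLICY = 'ECONOMY'     -- prefer brief queuing over eagerly adding clusters
  AUTO_SUSPEND = 60;             -- suspend after 60 seconds of inactivity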

3. Tasks Can Now Run Every 10 Seconds

Snowflake Tasks have evolved steadily over the past few years – from static DAGs to more dynamic orchestration primitives. But one of the most-requested features has always been shorter intervals.

What changed:

Tasks can now run every 10 seconds, down from the previous 1-minute minimum.

Why this matters:

This opens the door to streaming-like behavior – without needing to deploy Kafka, Spark, or external orchestrators.

Use cases:

  • Microbatch data movement
  • Real-time fraud detection
  • Ingest pipelines from event queues
  • External API polling + ingestion
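
As a rough sketch of the microbatch pattern (the task, warehouse, stream, and table names are hypothetical, and the exact interval string is worth confirming against the current CREATE TASK docs):

CREATE OR REPLACE TASK microbatch_orders
  WAREHOUSE = ingest_wh
  SCHEDULE = '10 SECONDS'                        -- sub-minute scheduling
  WHEN SYSTEM$STREAM_HAS_DATA('orders_stream')   -- skip the run entirely when there is nothing new
AS
  INSERT INTO orders_curated (order_id, amount, loaded_at)
  SELECT order_id, amount, CURRENT_TIMESTAMP()
  FROM orders_stream
  WHERE METADATA$ACTION = 'INSERT';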

For teams that need just enough real-time without the overhead of true stream processing, this is a game-changer.

4. Asynchronous Execution in Snowflake Scripting

This one genuinely shocked me.

Snowflake Scripting now supports async job execution, which means you can run stored procedures or SQL blocks in parallel inside a single session.

What changed:

Prefix a statement with the ASYNC keyword to fire it off as a concurrent child job. Then use AWAIT (or AWAIT ALL) to synchronize before downstream logic runs.

Why this matters:

This eliminates the need for an external orchestrator (Airflow, Prefect, dbt Cloud, etc.) just to parallelize steps.

Example:

BEGIN
  -- Kick off both loads as asynchronous child jobs that run concurrently.
  ASYNC (CALL load_sales_data());
  ASYNC (CALL load_inventory_data());

  -- Block until every child job launched in this block has finished.
  AWAIT ALL;

  CALL update_summary_reports();
END;

Where to use it:

  • Parallel loads from staging tables
  • Async API calls using external functions
  • Concurrent transformation steps across domains

This is arguably the most developer-empowering feature Snowflake has launched in years.

5. Auto Classification of Sensitive Data

Data governance is no longer a “later” problem. Whether you’re subject to GDPR, HIPAA, or internal privacy reviews, classifying and securing sensitive data is non-negotiable.

What changed:

Snowflake now provides automatic sensitive data classification using built-in machine learning models.

How it works:

You call a single system stored procedure:

CALL SYSTEM$CLASSIFY('MY_DB.MY_SCHEMA.MY_TABLE');

Snowflake returns suggested classifications for columns it suspects contain PII, like:

  • Emails
  • Names
  • Credit cards
  • Phone numbers
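
To have Snowflake apply its system tags automatically and then review what landed where, something like the following should work; treat the auto_tag option and the TAG_REFERENCES_ALL_COLUMNS lookup as a sketch to verify against the docs, and the object names as placeholders:

-- Classify and automatically apply the suggested system tags (semantic and privacy categories).
CALL SYSTEM$CLASSIFY('MY_DB.MY_SCHEMA.MY_TABLE', {'auto_tag': true});

-- Review which tags ended up on which columns.
SELECT *
FROM TABLE(MY_DB.INFORMATION_SCHEMA.TAG_REFERENCES_ALL_COLUMNS('MY_DB.MY_SCHEMA.MY_TABLE', 'table'));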

Why this matters:

This is a massive time saver for compliance teams and a big step forward in democratizing data security.

Bonus:

These classifications can be used to automate:

  • Data masking policies
  • Role-based access controls
  • Auditing rules

Governance just became easier – and smarter.

Final Thoughts: The Year Snowflake Got Really Smart

Across the board, 2025 is the year Snowflake matured from a data warehouse into a full data platform:

  • Async scripting = modern orchestration
  • 10s Tasks = pseudo-streaming
  • VARIANT & multi-cluster scaling = massive scale
  • Auto PII detection = embedded governance

And all of this still runs within Snowflake’s core model: scalable compute, centralized storage, and SQL-first interfaces.

But there’s one missing piece we have to talk about.

Don’t Forget the Elephant in the Room: Cost Efficiency

If Snowflake is giving you more power, it also gives you more ways to accidentally overspend. And that’s exactly where Yuki comes in.

We built Yuki because we saw a recurring pattern:

Even advanced Snowflake users waste 30–70% of their compute credits. Why?

  • Warehouses sit idle before auto-suspending.
  • Queries run on over-provisioned warehouses.
  • Teams manually assign workloads without visibility into real usage.
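
If you want to sanity-check patterns like these in your own account first, the metadata is already there. A rough sketch, assuming access to the SNOWFLAKE.ACCOUNT_USAGE share:

-- Which warehouses burned the most credits over the last 7 days?
SELECT warehouse_name,
       SUM(credits_used) AS credits_last_7d
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits_last_7d DESC;

Cross-reference the heavy hitters with their actual query volume, and the idle time and over-provisioning tend to show up quickly.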

Yuki changes that.

What we do:

  • Analyze your workload metadata to detect inefficiencies
  • Automatically match query types to optimal warehouse sizes
  • Spin up/down clusters dynamically based on live workload patterns

Real-world result:

“We cut our Snowflake costs by ~60% and got world-class-grade load balancing. Yuki lets us point all our queries at XS compute and it auto-scales everything behind the scenes.”

 –  Alex Ahlstorm, Snowflake Lead @ Angel

It takes <1 hour to deploy.

No agents. No data leaves your environment. Yuki installs directly into your Snowflake account and starts optimizing immediately.

So here’s the big picture:

Snowflake is giving you more control.

Yuki helps you make smarter decisions with that control.

If you’re adopting the 2025 features, but not watching your spend in real time, you’re playing with half a toolkit.

TL;DR

Snowflake is moving fast. The 2025 updates are bold, powerful, and genuinely useful. But with that power comes complexity – and cost.

If you’re excited about what Snowflake is unlocking, but want to stay efficient, scalable, and secure as you grow – Yuki’s here to help you get the most out of every credit.

Ready to see what you’re leaving on the table? Let’s run a free efficiency scan and show you what your workload could look like.

Ido
From DevOps and FinOps to Data Architecture and BI leadership, my focus has always been the same: operational efficiency. I started in a well-funded government unit, shifted to a lean startup, and now with Yuki, I’m taking efficiency to the next level. As a founder, I believe in living in two time zones at once: acting fast today while building for tomorrow.
