Gemini Batch API Stuck on Pending

We have multiple Gemini Batch jobs that have been stuck in BATCH_STATE_PENDING for over 4 days.

Per the documentation, batch jobs should automatically complete or cancel within 72 hours if unprocessed — so this appears to be a service-level issue.
This is the same pattern we saw last week, when Gemini Batch was down for several days before the outage was retroactively marked on the status page.

Currently, the Gemini status page (Google AI Studio) shows “All systems operational,” but the queue is clearly not processing new batches.

Example response (truncated for privacy):
{
  "model": "models/gemini-2.5-flash",
  "createTime": "2025-10-09T14:48:30Z",
  "batchStats": {
    "requestCount": "60",
    "pendingRequestCount": "60"
  },
  "state": "BATCH_STATE_PENDING"
}

Region: us-central1
Jobs affected: 12
Earliest submission: October 9, 2025
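For reference, the staleness claim above can be checked mechanically. Here is a minimal sketch (assuming the JSON shape shown in the example response; the 72-hour threshold is taken from the documentation cited earlier, and `is_stale` is a hypothetical helper, not part of any SDK):

```python
import json
from datetime import datetime, timedelta, timezone

# Documented window after which an unprocessed batch should complete or cancel.
STALE_AFTER = timedelta(hours=72)

def is_stale(job: dict, now: datetime) -> bool:
    """Return True if a batch job is still pending past the 72-hour window."""
    created = datetime.fromisoformat(job["createTime"].replace("Z", "+00:00"))
    return job["state"] == "BATCH_STATE_PENDING" and now - created > STALE_AFTER

# The example response from this report:
job = json.loads("""{
  "model": "models/gemini-2.5-flash",
  "createTime": "2025-10-09T14:48:30Z",
  "batchStats": {"requestCount": "60", "pendingRequestCount": "60"},
  "state": "BATCH_STATE_PENDING"
}""")

# Roughly 4 days after the earliest submission date above.
now = datetime(2025, 10, 13, 15, 0, tzinfo=timezone.utc)
print(is_stale(job, now))  # → True
```

Running a check like this over all 12 affected jobs is how we confirmed every one of them had exceeded the documented window.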

We’re paying for Gemini usage but can’t consume the service because batches are never processed — this directly impacts billing and workload throughput.

Please escalate internally or confirm if an incident is already being tracked.

Update: after 5 days, the internal Gemini Batch queue appears to have started moving again. The stuck batches were finally marked expired, and newly submitted batches are running.

It seems the Gemini team doesn’t have visibility into when the batching system is clogged and unable to run. This is the second time in two weeks the system has been down for 5+ days. The first outage lasted at least 4 days before the status page was retroactively updated to show Gemini Batch API outages. This latest incident wasn’t acknowledged at all.

If this is going to be a consistent problem, we’ll be migrating our tech stack to another AI batching provider.