The name Kitabisa means “we can” in Bahasa Indonesia, the official language of Indonesia, and captures their aspirational ethos as Indonesia’s most popular fundraising platform. Since 2013, Kitabisa has been collecting donations in times of crisis and natural disasters to help millions in need. Pursuing their mission of “channeling kindness at scale,” Kitabisa deploys AI algorithms to foster Southeast Asia’s philanthropic spirit with simplicity and transparency.
Unlike e-commerce platforms, which can predict spikes in demand around events like Black Friday, Kitabisa’s mission of raising funds when disasters like earthquakes strike is by definition unpredictable. This is why the ability to scale up and down seamlessly is critical to their social enterprise.
In 2020, Indonesia’s COVID-19 outbreak coincided with Ramadan. Even in normal times, this is a peak period, as the holy month inspires charitable giving. But during the pandemic, the crush of donations pushed the system past its breaking point. The platform went down for a few minutes just as Indonesia’s giving spirit was at its height, frustrating users.
A new cloud beginning
That’s when Kitabisa realized they needed to embark on a new cloud journey, moving from their monolithic system to one based on microservices. This would enable them to scale up for surges in demand, but also scale down when a wave of giving subsides. They also needed a more flexible database that would let them ingest and process the vast amounts of data that flood into their system in times of crisis.
These requirements led Kitabisa to re-architect their entire platform on Google Cloud. Guided by a proactive Google Cloud team, they migrated to Google Kubernetes Engine (GKE) for their containerized computing infrastructure, and from Amazon RDS to Cloud SQL for MySQL and PostgreSQL for their managed database services.
The result surpassed their expectations. During the following year’s Ramadan season, they gained a 50% boost in computing resources to easily handle escalating crowdfunding demand on the system. This was thanks to both the seamless scaling of GKE and recommendations from the Google Cloud Partnership team on deploying and optimizing Cloud SQL instances with ProxySQL.
A progressive journey to kindness at scale
While Kitabisa’s mission has never wavered, their journey to optimized performance took them through several stages before they ultimately landed on the current architecture on Google Cloud.
Origins on a monolithic provider
Kitabisa was initially hosted on DigitalOcean, which only allowed them to run monolithic applications based on virtual machines (VMs) and a stateful managed database. This meant manually adding one VM at a time, which led to challenges in scaling up VMs and core memory when a disaster triggered a spike in donations.
Conversely, when a fundraising cycle was complete, they could not automatically scale down from the high specs of manually provisioned VMs, which strained both manpower and budget.
Transition to containers
To improve scalability, Kitabisa migrated from DigitalOcean to Amazon Web Services (AWS), where they hoped deploying load balancers would provide enough automated scaling to meet their network needs. However, they still found manual configurations too costly and labor-intensive.
Kitabisa then attempted to improve automation by switching to a microservices-based architecture. But on Amazon Elastic Container Service (Amazon ECS) they hit a new pain point: every application they launched had to be compatible with CloudFormation for deployment, which reduced the flexibility of their solution building due to vendor lock-in.
They decided it was “never too late” to migrate to Kubernetes, a more agile container orchestration solution. Given that they were already using AWS, it seemed natural to move their microservices to Amazon Elastic Kubernetes Service (Amazon EKS). But they soon found that provisioning Kubernetes clusters with EKS was still a manual process that required extensive configuration work for every deployment.
Unlocking automated scalability
At the height of the COVID-19 crisis, faced with mounting demands on their system, Kitabisa decided it was time to give Google Kubernetes Engine (GKE) a try. Since Kubernetes originated at Google, it seemed likely that GKE would provide the most flexible microservices deployment, alongside better access to new features.
Through a direct comparison with AWS, they discovered that everything from provisioning Kubernetes clusters to deploying new applications became fully automated, with the latest upgrades and minimal manual setup. By switching to GKE, they can now absorb any unexpected surge in donations, and add new services without expanding the size of their engineering team. The transformative value of GKE became apparent when severe flooding hit Sumatra in November 2021, affecting 25,000 people. The system easily handled the 30% spike in donations.
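The kind of automated scaling described above can be sketched with a Kubernetes HorizontalPodAutoscaler manifest. This is a minimal illustration only: the `donations-api` deployment name and replica counts are hypothetical, not Kitabisa’s actual configuration.

```yaml
# Hedged sketch: autoscale a hypothetical "donations-api" deployment on GKE.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: donations-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: donations-api
  minReplicas: 2      # baseline outside fundraising peaks
  maxReplicas: 50     # headroom for disaster-driven donation surges
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Paired with GKE’s cluster autoscaler, a manifest like this lets pod counts (and the nodes beneath them) grow during a surge and shrink again when giving subsides.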
Moving to Cloud SQL and ProxySQL
Kitabisa was also held back by its monolithic database system, which was prone to crashing under heavy demand. Kitabisa started to solve the problem by moving from a stateful DigitalOcean database to a stateless Redis one, which freed them from relying on a single server, giving them better agility and scale.
But the strategy left a major pain point because it still required them to self-manage databases. In addition, they were experiencing high database egress costs due to the need to execute data transfers from a non-Google Cloud database into BigQuery.
In December 2021, Kitabisa migrated from Amazon RDS to Cloud SQL for MySQL, and immediately saved 10% in monthly egress costs. But one of the greatest benefits came when the Google Cloud team recommended ProxySQL, an open source proxy for MySQL, to improve the scalability and stability of their data pipelines.
Cloud SQL’s compatibility allowed them to use connection pooling tools such as ProxySQL to better load-balance the application. Historically, a direct connection to a monolithic database was a single point of failure that could bring the platform down. With Cloud SQL plus ProxySQL, they place a layer in front of their database instances that serves as a load balancer, distributing connections across multiple instances: a primary and a read replica. Now, whenever a read query arrives, it is redirected to the read replica instead of the primary instance.
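Read/write splitting of this kind is configured through ProxySQL’s SQL-based admin interface. The sketch below shows the general pattern; the hostnames are placeholders, and Kitabisa’s actual rules are not public.

```sql
-- Hedged sketch of ProxySQL read/write splitting (run on the admin interface).
-- Hostgroup 0 = primary (writes), hostgroup 1 = read replica (reads).
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (0, 'primary.db.internal', 3306),    -- placeholder hostnames
       (1, 'replica.db.internal', 3306);

-- Keep locking reads on the primary, send other SELECTs to the replica.
INSERT INTO mysql_query_rules
  (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 0, 1),
       (2, 1, '^SELECT',             1, 1);

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
```

Because applications connect to ProxySQL rather than to a single database host, adding or removing Cloud SQL replicas requires no application changes.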
This configuration has transformed the stability of their database environment because they can have multiple database instances running at the same time, with the load distributed across all instances. Since switching to Cloud SQL as the managed database, and using ProxySQL, they have experienced zero downtime on the fundraising platform even when a major crisis hits.
They are also saving costs. Rather than having a separate database for each Kubernetes cluster, they’ve merged multiple database instances into one. They now group databases by business unit instead of per service, yielding database cost reductions of 30%.
Streamlining with Terraform deployment
There’s another key way in which Google Cloud managed services have allowed them to optimize their environment: using Terraform as an infrastructure-as-code tool to create new applications and roll out upgrades to the platform.
They have also automated the deployment of Terraform code to Google Cloud using Cloud Build, with no human intervention. That means the development team can focus on creative tasks, while Cloud Build deploys a continuous stream of new features to Kitabisa.
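As a rough illustration of this infrastructure-as-code approach, a Cloud SQL instance can be declared in Terraform with the Google provider’s `google_sql_database_instance` resource. The names, region, and machine tier below are illustrative assumptions, not Kitabisa’s actual configuration.

```hcl
# Hedged sketch: a Cloud SQL for MySQL instance declared as code.
# Resource name, tier, and region are placeholders.
resource "google_sql_database_instance" "fundraising" {
  name             = "fundraising-mysql"
  database_version = "MYSQL_8_0"
  region           = "asia-southeast2" # Jakarta

  settings {
    tier = "db-n1-standard-2"
  }
}
```

Committing a file like this and letting Cloud Build run `terraform apply` turns provisioning into a reviewable, repeatable pipeline step rather than a manual console task.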
The combination of seamless scalability, resilient data pipelines, and creative freedom is enabling them to drive the future of the platform, expanding Kitabisa’s mission to inspire people to create a kinder world in other Asian regions.
They believe that having Google Cloud as their infrastructure backbone will be a critical part of future development, which will include adding exciting new insurtech features. Now firmly established on Google Cloud, they can go further in shaping the future of fundraising to overcome turbulent times.