Google Cloud’s flexible solutions and services help you migrate your data and apps to the cloud while modernizing and innovating at your own pace.

When planning your migration to Google Cloud, you start by defining the environments that are involved in the migration. Your starting point can be an on-premises environment, a private hosting environment, or another public cloud environment.

An on-premises environment is an environment where you have full ownership and responsibility. You retain full control over every aspect of the environment, such as cooling, physical security, and hardware maintenance.

In a private hosting environment such as a colocation facility, you outsource part of the physical infrastructure and its management to an external party. This infrastructure is typically shared between customers. In a private hosting environment, you don’t have to manage the physical security and safety services. Some hosting environments let you manage part of the physical hardware, such as servers, racks, and network devices, while others manage that hardware for you. Typically, power and network cabling are provided as a service so you don’t have to manage them. You maintain full control over hypervisors that virtualize physical resources, the virtualized infrastructure that you provision, and workloads that you run on that infrastructure.

A public cloud environment has the advantage that you don’t have to manage the whole resource stack by yourself. You can focus on the aspect of the stack that is most valuable to you. As in a private hosting environment, you don’t have to manage the underlying physical infrastructure. Additionally, you don’t have to manage the resource virtualization hypervisor. You can build a virtualized infrastructure and deploy your workloads in this new infrastructure. You can also buy fully managed services, where you care only about your workloads, handing off the operational burden of managing runtime environments.


After you define your starting and target environments, you define the workload types and the related operational processes that are in scope for the migration. This document considers two types of workloads and operations: legacy and cloud-native.

Legacy workloads and operations are developed without any consideration for cloud environments. These workloads and operations can be difficult to modify and expensive to run and maintain because they usually don’t support any type of scalability.

Cloud-native workloads and operations are natively scalable, portable, available, and secure. The workloads and operations can help increase developer productivity and agility, because developers can focus on the actual workloads, rather than spending effort to manage development and runtime environments, or dealing with manual and cumbersome deployment processes. Google Cloud also has a shared responsibility model for security. Google Cloud is responsible for the physical security and the security of the infrastructure, while you’re responsible for the security of the workloads you deploy to the infrastructure.

Considering these environment and workload types, your starting situation is one of the following:

  • On-premises or private hosting environment with legacy workloads and operations.
  • On-premises or private hosting environment with cloud-native workloads and operations.
  • Public cloud or private hosting environment with legacy workloads and operations.
  • Public cloud or private hosting environment with cloud-native workloads and operations.

The migration process depends on your starting point.

Migrating a workload from a legacy on-premises environment or private hosting environment to a cloud-native environment, such as a public cloud, can be challenging and risky. Successful migrations change the workload to migrate as little as possible during the migration operations. Moving legacy on-premises apps to the cloud often requires multiple migration steps.

There are three major types of migrations:

  • Lift and shift
  • Improve and move
  • Rip and replace

In the following sections, each type of migration is defined with examples of when to use each type.

In a lift and shift migration, you move workloads from a source environment to a target environment with minor or no modifications or refactoring. The modifications you apply to the workloads to migrate are only the minimum changes you need to make in order for the workloads to operate in the target environment.

A lift and shift migration is ideal when a workload can operate as-is in the target environment, or when there is little or no business need for change. This migration is the type that requires the least amount of time because the amount of refactoring is kept to a minimum.

There might be technical issues that force a lift and shift migration. If you cannot refactor a workload to migrate and cannot decommission the workload, you must use a lift and shift migration. For example, it can be difficult or impossible to modify the source code of the workload, or the build process isn’t straightforward, so producing new artifacts after refactoring might not be possible.

Lift and shift migrations are the easiest to perform because your team can continue to use the same set of tools and skills that they were using before. These migrations also support off-the-shelf software. Because you migrate existing workloads with minimal refactoring, lift and shift migrations tend to be the quickest, compared to improve and move or rip and replace migrations.

On the other hand, the results of a lift and shift migration are non-cloud-native workloads running in the target environment. These workloads don’t take full advantage of cloud platform features, such as horizontal scalability, fine-grained pricing, and highly managed services.

In an improve and move migration, you modernize the workload while migrating it. In this type of migration, you modify the workloads to take advantage of cloud-native capabilities, and not just to make them work in the new environment. You can improve each workload for performance, features, cost, or user experience.

If the current architecture or infrastructure of an app isn’t supported in the target environment as it is, a certain amount of refactoring is necessary to overcome these limits.

Another reason to choose the improve and move approach is when a major update to the workload is necessary in addition to the updates you need to make to migrate.

Improve and move migrations let your app leverage features of a cloud platform, such as scalability and high availability. You can also architect the improvement to increase the portability of the app.

On the other hand, improve and move migrations take longer than lift and shift migrations, because the app must be refactored before it can migrate. You need to evaluate the extra time and effort as part of the life cycle of the app.

An improve and move migration also requires that you learn new skills.

In a rip and replace migration, you decommission an existing app and completely redesign and rewrite it as a cloud-native app.

If the current app isn’t meeting your goals—for example, you don’t want to maintain it, it’s too costly to migrate using one of the previously mentioned approaches, or it’s not supported on Google Cloud—you can do a rip and replace migration.

Rip and replace migrations let your app take full advantage of Google Cloud features, such as horizontal scalability, highly managed services, and high availability. Because you’re rewriting the app from scratch, you also remove the technical debt of the existing, legacy version.

However, rip and replace migrations can take longer than lift and shift or improve and move migrations. Moreover, this type of migration isn’t suitable for off-the-shelf apps because it requires rewriting the app. You need to evaluate the extra time and effort to redesign and rewrite the app as part of its lifecycle.

A rip and replace migration also requires new skills. You need to use new toolchains to provision and configure the new environment and to deploy the app in that environment.

Before starting your migration, you should evaluate the maturity of your organization in adopting cloud technologies. The Google Cloud Adoption Framework serves both as a map for determining where your business information technology capabilities are now, and as a guide to where you want to be.

You can use this framework to assess your organization’s readiness for Google Cloud and what you need to do to fill in the gaps and develop new competencies, as illustrated in the following diagram.

Architecture of Google Cloud Adoption Framework with four themes and three phases.

The framework assesses four themes:

  • Learn. The quality and scale of your learning programs.
  • Lead. The extent to which your IT departments are supported by a mandate from leadership to migrate to Google Cloud.
  • Scale. The extent to which you use cloud-native services, and how much operational automation you currently have in place.
  • Secure. The capability to protect your current environment from unauthorized and inappropriate access.

For each theme, you should be in one of the following three phases, according to the framework:

  • Tactical. There are no coherent plans covering all the individual workloads you have in place. You’re mostly interested in a quick return on investments and little disruption to your IT organization.
  • Strategic. There is a plan in place to develop individual workloads with an eye to future scaling needs. You’re interested in the mid-term goal to streamline operations to be more efficient than they are today.
  • Transformational. Cloud operations work smoothly, and you use data that you gather from those operations to improve your IT business. You’re interested in the long-term goal of making the IT department one of the engines of innovation in your organization.

When you evaluate the four themes in terms of the three phases, you get the Cloud Maturity Scale. In each theme, you can see what happens when you move from adopting new technologies when needed, to working with them more strategically across the organization—which naturally means deeper, more comprehensive, and more consistent training for your teams.

It’s important to remember that a migration is a journey. You are at point A with your existing infrastructure and environments, and you want to reach point B. To get from A to B, you can choose any of the options previously described.

The following diagram illustrates the path of this journey.

Migration path with four phases.

There are four phases of your migration:

  • Assess. In this phase, you perform a thorough assessment and discovery of your existing environment in order to understand your app and environment inventory, identify app dependencies and requirements, perform total cost of ownership calculations, and establish app performance benchmarks.
  • Plan. In this phase, you create the basic cloud infrastructure for your workloads to live in and plan how you will move apps. This planning includes identity management, organization and project structure, networking, sorting your apps, and developing a prioritized migration strategy.
  • Deploy. In this phase, you design, implement and execute a deployment process to move workloads to Google Cloud. You might also have to refine your cloud infrastructure to deal with new needs.
  • Optimize. In this phase, you begin to take full advantage of cloud-native technologies and capabilities to expand your business’s potential in areas such as performance, scalability, disaster recovery, cost, and training, as well as opening the door to machine learning and artificial intelligence integrations for your app.

In the assessment phase, you gather information about the workloads you want to migrate and their current runtime environment.

A key to a successful migration is understanding what apps exist in your current environment (what databases, message brokers, data warehouses, and network appliances exist) and the dependencies of each app. You need to list all of your machines, hardware specifications, operating systems, and licenses, and which apps and services run on each machine.

After you take your inventory, you can build your catalog matrix to help you organize your apps into categories based on their complexity and risk in moving to Google Cloud.

The following table is an example catalog matrix.

Mission critical, doesn’t have dependencies or dependents:

  • Stateless microservices (medium)

Mission critical, has dependencies or dependents:

  • ERP (hard)
  • OLTP databases (hard)
  • Ecommerce app (hard)
  • Data warehouses (hard)
  • Firewall appliance (can’t)

Non-mission critical, doesn’t have dependencies or dependents:

  • Marketing website (easy)
  • Backup and archive (easy)
  • Development and test environments (easy)
  • Batch processing (easy)

Non-mission critical, has dependencies or dependents:

  • Backoffice (hard)
  • Data analysis (hard)

This catalog matrix example contains two dimensions of assessment criteria. Your apps might require more dimensions or additional considerations. Create your matrix to include all of the unique requirements of your environment.
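If you track your inventory programmatically, a small helper can bucket each app into the matrix. This is only a sketch of the two example dimensions above; the app names, flags, and labels are illustrative assumptions, not part of any Google Cloud API.

```python
def catalog_bucket(mission_critical: bool, has_dependencies: bool) -> str:
    """Place an app into one of the four example cells of the catalog matrix."""
    criticality = "mission critical" if mission_critical else "non-mission critical"
    deps = ("has dependencies or dependents" if has_dependencies
            else "no dependencies or dependents")
    return f"{criticality} / {deps}"

# Example inventory: app name -> (mission_critical, has_dependencies).
inventory = {
    "stateless-microservices": (True, False),
    "erp": (True, True),
    "marketing-website": (False, False),
    "data-analysis": (False, True),
}

buckets = {app: catalog_bucket(*flags) for app, flags in inventory.items()}
```

Extra assessment dimensions (compliance, data volume, licensing) would become extra parameters to the bucketing function.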

As part of the assess phase, your organization needs to start learning about Google Cloud. You need to train and certify your software and network engineers on how the cloud works and what Google Cloud products they can leverage as well as what kind of frameworks, APIs, and libraries they can use to deploy workloads on Google Cloud.

Another important part of the assessment phase is choosing a proof of concept (PoC) and implementing it, or experimenting with Google Cloud products to validate use cases or any areas of uncertainty.

Consider the following use cases:

  • Verifying that a zone can spin up 50,000 virtual CPU cores.
  • Implementing firewall rules for a complex workload.
  • Comparing the performance of your on-premises databases to Cloud SQL, Cloud Spanner, Firestore, or Cloud Bigtable.
  • Experimenting with the availability of regional GKE clusters.
  • Testing the internal and external network latency of your apps on Google Cloud.
  • Evaluating the speed and reliability of a Cloud Build deployment pipeline for containers on GKE.
  • Comparing Dataflow to Spark on Dataproc.
  • Transferring data to BigQuery and running business-critical queries to test correctness.
  • Evaluating Cloud Logging to replace other logging mechanisms.

For each experiment, you measure your business impact, such as one of the following:

  • If you observe a 95% reduction of the launch time to spin up 50,000 virtual CPU cores on Google Cloud compared to your current environment, this reduces your time to market by a certain factor. This reduction also impacts the setup time of your disaster recovery environment by decreasing the downtime of your critical lines of business.
  • If you can have a globally available and always-on disaster recovery plan, you can increase the reliability of your app.
  • If you use cloud-scaling technology, you can lower your total cost of services by scaling down when your resource needs are low, and scaling up on-demand.

Building a total cost of ownership model lets you compare your costs on Google Cloud with the costs you have today. There are tools that can help you, such as the Google Cloud price calculator, and you can also leverage some of our partner offerings. Don’t forget the operational costs of running on-premises or in your own data center: power, cooling, maintenance, and other support services all impact the total cost of ownership.
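As an illustration of the arithmetic only, the comparison might be sketched as follows. All figures are made up; real estimates should come from the price calculator and your own accounting.

```python
def total_cost_of_ownership(hardware: float, power_cooling: float,
                            maintenance: float, staffing: float,
                            years: int) -> float:
    """Upfront hardware cost plus recurring yearly operational costs."""
    return hardware + (power_cooling + maintenance + staffing) * years

# Hypothetical on-premises figures over a three-year horizon.
on_prem = total_cost_of_ownership(hardware=500_000, power_cooling=40_000,
                                  maintenance=30_000, staffing=120_000, years=3)

# In the cloud, hardware, power, and cooling fold into the service fees,
# which you would estimate with the Google Cloud price calculator.
cloud_service_fees = 150_000 * 3
cloud = cloud_service_fees + total_cost_of_ownership(
    hardware=0, power_cooling=0, maintenance=10_000, staffing=90_000, years=3)

savings = on_prem - cloud
```

The point of the model is less the final number than making every cost line explicit, so that nothing (such as cooling or support staff) is silently dropped from one side of the comparison.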

In order to prepare for your migration, you identify apps with features that make them likely first-movers. You can pick just one, or include many apps in your first-mover list. These first-movers let your teams run and test apps in the cloud environment, where they can focus on the migration instead of on the complexity of the apps. Starting with a less complex app lowers your initial risk because later you can apply your team’s new knowledge to harder-to-migrate apps.

Identifying a first-mover can be complex, but good candidates usually satisfy many of the following workload criteria:

  • Not business critical, so the main line of business isn’t impacted by the migration, because your teams don’t yet have significant experience with cloud technologies.
  • Not an edge case because it’s easy to apply the same pattern to other workloads that you want to migrate.
  • Can be used to build a knowledge base.
  • Supported by a team that is highly motivated and eager to run on Google Cloud.
  • Moved by a central team that moves other workloads. Moving the first workload leads to more experience in that team, which can prove useful in future workload migrations.
  • A dependency-light workload, such as a stateless one, because it can move without impacting other workloads, or with minimal configuration changes.
  • Requires minimal app changes or refactoring.
  • Doesn’t need large quantities of data moved.
  • Doesn’t have strict compliance requirements.
  • Doesn’t require third-party proprietary licenses because some providers don’t license their products for the cloud or might require a change in license type.
  • Not impacted by downtime caused by a cutover window. For example, you can export data from your current database and then import it to a database instance on Google Cloud during a planned maintenance window. Synchronizing two database instances to achieve a zero downtime migration is more complicated.
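A rough way to rank candidates is to count how many of these criteria each workload meets. The criterion keys, workloads, and flags below are invented for illustration; in practice you would weight the criteria rather than count them equally.

```python
# Boolean criteria distilled from the first-mover checklist above (assumed names).
FIRST_MOVER_CRITERIA = [
    "not_business_critical",
    "not_an_edge_case",
    "motivated_team",
    "few_dependencies",
    "minimal_refactoring",
    "small_data_volume",
    "no_strict_compliance",
    "no_proprietary_licenses",
    "tolerates_cutover_downtime",
]

def first_mover_score(workload: dict) -> int:
    """Count how many first-mover criteria a workload satisfies."""
    return sum(1 for c in FIRST_MOVER_CRITERIA if workload.get(c, False))

candidates = {
    "marketing-website": dict.fromkeys(FIRST_MOVER_CRITERIA, True),
    "erp": {"motivated_team": True},
}

best = max(candidates, key=lambda name: first_mover_score(candidates[name]))
```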

In the plan phase, you provision and configure the cloud infrastructure and services that will support your workloads on Google Cloud. Building a foundation of critical configurations and services is an evolving process. When you establish your rules, governance, and settings, make sure you allow room for changes later. Avoid making decisions that lock you in to a way of doing things. If you need to change things later on, you want to have options to support those changes.

To plan for your migration, you need to do the following:

  • Establish user and service identities.
  • Design your resource organization.
  • Define groups and roles for resource access.
  • Design your network topology and establish connectivity.

In Google Cloud, you have several identity types to choose from:

  • Google Accounts. An account that usually belongs to an individual user who interacts with Google Cloud.
  • Service accounts. An account that usually belongs to an app or a service, rather than to a user.
  • Google groups. A named collection of Google accounts.
  • Google Workspace domains. A virtual group of all the Google accounts that have been created in an organization’s Google Workspace account.
  • Cloud Identity domains. These domains are like Google Workspace domains, but they don’t have access to Google Workspace applications.

For more information, read about each identity type.

For example, you can federate Google Cloud with Active Directory to establish consistent authentication and authorization mechanisms in a hybrid environment.

After establishing the identities you need for your app, you grant them permissions on resources, such as projects, folders, or buckets, that your app uses. You can do this by assigning roles to each identity. A role is a collection of permissions. A permission is a collection of operations that are allowed on a resource.
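Under the hood, an IAM policy is a document that binds members to roles. The following library-free sketch manipulates such a policy as a plain dictionary; the role and member strings follow the real IAM format, but the specific values are examples.

```python
def add_binding(policy: dict, role: str, member: str) -> dict:
    """Add a member to a role binding, creating the binding if it doesn't exist."""
    for binding in policy.setdefault("bindings", []):
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

policy = {"bindings": [{"role": "roles/viewer",
                        "members": ["user:alice@example.com"]}]}

# Grant a role to a hypothetical service account, then extend an existing binding.
add_binding(policy, "roles/storage.objectViewer",
            "serviceAccount:app@my-project.iam.gserviceaccount.com")
add_binding(policy, "roles/viewer", "user:bob@example.com")
```

In a real migration you would read and write this document through the IAM API or `gcloud` rather than editing it by hand, but the read-modify-write shape stays the same.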

To avoid repeating the same configuration steps, you can organize your resources in different types of structures. These structures are organized in a hierarchy:

  • Organizations are the root of a resource hierarchy and represent a real organization, such as a company. An organization can contain folders and projects. An organization admin can grant permissions on all the resources contained in that organization.
  • Folders are an additional layer of isolation between projects and can be seen as sub-organizations in the organization. A folder can contain other folders and projects. An admin can use the folder to delegate admin rights.
  • Projects are the base-level organization entities and must be used to access other Google Cloud resources. Every resource instance you deploy and use is contained in a project.

Because resources inherit permissions from the parent node, you can avoid repeating the same configuration steps for resources with the same parent. You can find more details about the Identity and Access Management (IAM) inheritance mechanism in the policy inheritance section of the Resource Manager documentation.
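The inheritance mechanism can be illustrated with a toy model in which a resource’s effective roles are the union of the grants on the resource and on all of its ancestors. The resource names and grants below are invented for the example.

```python
# Toy hierarchy: child -> parent. The organization is the root (no parent).
PARENT = {
    "folders/eng": "organizations/example",
    "projects/app-prod": "folders/eng",
}

# Roles granted directly on each resource.
GRANTS = {
    "organizations/example": {"roles/viewer"},
    "folders/eng": {"roles/logging.viewer"},
    "projects/app-prod": {"roles/storage.objectAdmin"},
}

def effective_roles(resource: str) -> set:
    """Union of roles granted on the resource and all of its ancestors."""
    roles = set()
    while resource:
        roles |= GRANTS.get(resource, set())
        resource = PARENT.get(resource)
    return roles
```

A role granted at the organization level therefore appears on every project below it, which is exactly why you grant broad roles high in the tree and narrow ones low.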

Organizations, folders, and projects are resources and support a set of operations like all other Google Cloud resources. You can interact with these resources like you would any other Google Cloud resource. For example, you can automate the creation of your hierarchy by using the Resource Manager API. You can organize the resource hierarchy according to your needs. The root node of each hierarchy is always an organization. The following sections describe types of hierarchies that you can implement in your organization. Each hierarchy type is characterized by its implementation complexity and its flexibility.

In an environment-oriented hierarchy, you have one organization that contains one folder per environment.

The following diagram shows an example of an environment-oriented hierarchy.

Architecture of an environment-oriented hierarchy.

The multiple environments are development, quality assurance, and production. In each environment, there are multiple instances deployed of the same two apps, My app 1 and My app 2.

This hierarchy is simple to implement because it has only three levels, but it can pose challenges if you have to deploy services that are shared by multiple environments.

In a function-oriented hierarchy, you have one organization that contains one folder per business function, such as information technology and management. Each business function folder can contain multiple environment folders.

The following diagram shows an example of a function-oriented hierarchy.

Architecture of a function-oriented hierarchy.

In this hierarchy, the multiple business functions are apps, management, and information technology. You can deploy multiple instances of My app, plus shared services, such as Jira and website.

This option is more flexible compared to environment-oriented hierarchies because it gives you the same environment separation, plus it allows you to deploy shared services. On the other hand, a function-oriented hierarchy is more complex to manage than an environment-oriented one, and it doesn’t separate access by business unit, such as retail or financial.

In a granular access-oriented hierarchy, you have one organization that contains one folder per business unit, such as retail or financial. Each business unit folder can contain one folder per business function. Each business function folder can contain one folder per environment.

The following diagram shows an example of a granular access-oriented hierarchy.

Architecture of an access-oriented hierarchy.

In this hierarchy, there are multiple business units, multiple business functions, and environments. You can deploy multiple instances of the My app 1 and My app 2 apps and a shared service, Net host.

This hierarchy is the most flexible and extensible option. On the other hand, you need to spend a greater effort to manage the structure, roles, and permissions. The network topology can also be significantly more complex because the number of projects is higher compared to the other options.

You need to set up the groups and roles to grant the necessary access to resources. In Google Cloud, you can delegate admin access to resources in your organization. At minimum, you need the following roles:

  • An organization admin, who defines IAM policies and the hierarchy of the organization and its resources.
  • A network admin, who creates and configures networks, subnetworks, and network devices, such as Cloud Router, Cloud VPN, and Cloud Load Balancing. An additional responsibility is to maintain firewall rules in collaboration with the security admin.
  • A security admin, who establishes policies and constraints for the organization and its resources, configures new IAM roles for projects, and maintains visibility on logs and resources.
  • A billing admin, who configures billing accounts and monitors resource usage and spending across the whole organization.

The last step of the plan phase is to set up the network topology and connectivity from your existing environment to Google Cloud.

After creating your projects and establishing identities, you should create at least one Virtual Private Cloud (VPC) network. VPCs let you have a private global addressing space, spanning multiple regions. Inter-regional communication doesn’t use the public internet. You can create VPCs to segregate parts of your apps, or have a shared VPC spanning multiple projects. After setting up VPCs, you should also configure network flow logging and firewall rules logging by using Cloud Logging. For more information about VPCs and how to set them up, see Best practices and reference architectures for VPC design.
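When you plan custom-mode VPC subnets, one practical chore is carving non-overlapping regional ranges out of a single RFC 1918 block. The sketch below uses Python’s standard ipaddress module; the region names and the 10.128.0.0/9 block mirror common Google Cloud defaults but are only example choices.

```python
import ipaddress

def plan_subnets(cidr: str, new_prefix: int, regions: list) -> dict:
    """Assign each region the next non-overlapping subnet of the given size."""
    subnets = ipaddress.ip_network(cidr).subnets(new_prefix=new_prefix)
    return {region: str(next(subnets)) for region in regions}

# One /20 (4,096 addresses) per region, carved out of a /9 block.
plan = plan_subnets("10.128.0.0/9", 20,
                    ["us-central1", "europe-west1", "asia-east1"])
```

Doing this arithmetic up front avoids overlaps with on-premises ranges, which matters as soon as you connect the VPC back to your existing network with VPN or Interconnect.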

Google Cloud offers many hybrid connectivity options to connect your existing environment to your Google Cloud projects:

  • Public internet
  • Cloud VPN
  • Peering
  • Cloud Interconnect

Connecting through the public internet is a simple and inexpensive connection option because it’s backed by a resilient infrastructure that uses Google’s existing edge network. On the other hand, this infrastructure isn’t private or dedicated. The security of this option depends on the apps that exchange data on each connection. For this reason, we don’t recommend using this type of connection to send unencrypted traffic.

Cloud VPN extends your existing network to Google Cloud by using an IPSec tunnel. Traffic is encrypted and travels between the two networks over the public internet. While Cloud VPN requires additional configuration and can impact the throughput of your connection, it is often the best choice if you don’t encrypt traffic at the app level and if you need to access private Google Cloud resources.

Peering lets you establish a connection to Google’s network over a private channel. There are two peering types:

  • Direct peering lets you establish a direct peering connection between your network and Google’s edge network. If you don’t need to access private resources on Google Cloud and if you meet Google’s peering requirements, this is a good option. It doesn’t have any Service Level Agreement (SLA), but it lets you reduce your egress fees compared to connecting over the public internet or Cloud VPN.
  • Carrier peering lets you connect to Google’s network by using enterprise-grade network services managed by a service provider. Although Google doesn’t offer any SLA on this connectivity option, it might be covered by a service provider’s SLA. When evaluating the pricing of this option, you should consider both Google Cloud egress fees and service provider fees.

Cloud Interconnect extends your existing network to Google’s network through a highly available connection. It doesn’t provide any encrypted channel by default, so if you want to use this option, we recommend that you encrypt sensitive traffic at the app level. You can choose between two Cloud Interconnect options:

  • Dedicated Interconnect gives you high bandwidth private connections with a minimum of 10 Gbps, but requires routing equipment in a colocation facility. In other words, you have to meet Google at one of the points of presence (PoPs). Google provides an end-to-end SLA for Dedicated Interconnect connections, and you’re charged based on the dedicated bandwidth and the number of attachments.
  • Partner Interconnect lets you use dedicated high-bandwidth private connections managed by a service provider, without requiring you to configure routing equipment in a Google colocation facility. Google provides an SLA for the connection between Google and the service provider. The service provider might offer an SLA for the connection between you and them. Partner Interconnect is charged based on the connection capacity and the amount of egress traffic through an interconnect. Additionally, you might be charged by the service provider for their service.
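To summarize the trade-offs above, a simplified decision helper might look like the following. The mapping deliberately ignores pricing, peering requirements, and other nuances, so treat it as a starting point for discussion, not a rule.

```python
def connectivity_option(needs_private_resources: bool, needs_sla: bool,
                        bandwidth_gbps: float) -> str:
    """Rough mapping from requirements to the hybrid connectivity options above."""
    if needs_sla and bandwidth_gbps >= 10:
        # Dedicated Interconnect starts at 10 Gbps with an end-to-end SLA.
        return "Dedicated Interconnect"
    if needs_sla:
        return "Partner Interconnect"
    if needs_private_resources:
        # Cloud VPN reaches private resources over an encrypted IPSec tunnel.
        return "Cloud VPN"
    return "Public internet or peering"
```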

After building a foundation for your Google Cloud environment, you can begin to deploy your workloads. You can implement a deployment process and refine it during the migration. You might need to revisit the foundation of your environment as you progress with the migration. New needs can arise as you become more proficient with the new cloud environment, platforms, services, and tools.

When designing the deployment process for your workloads, you should take into account how much automation and flexibility you need. There are multiple deployment process types for you to choose from, ranging from a fully manual process to a streamlined, fully automated one.

Fully manual provisioning, configuration, and deployment let you quickly experiment with the platform and the tools, but the process is error prone, often undocumented, and not repeatable. For these reasons, we recommend that you avoid a fully manual deployment unless you have no other option. For example, you can manually create resources, such as a Compute Engine instance, in the Cloud Console, and manually run the commands to deploy your workload.

A configuration management (CM) tool lets you configure an environment in an automated, repeatable, and controlled way. You can use a CM tool to configure the environment and to deploy your workloads. While this is a better process compared to a fully manual deployment, it typically lacks the features to implement an elaborate deployment, like a deployment with no downtime or a blue-green deployment. Some CM tools let you implement your own deployment logic and can be used to mimic those missing features. However, using a CM tool as a deployment tool can add complexity to your deployment process, and can be more difficult to manage and maintain than a dedicated deployment toolchain. Designing, building, and maintaining a customized deployment solution can be a large additional burden for your operations team.

If you have already invested in containerization, you can go a step further and use a service such as Google Kubernetes Engine (GKE) to orchestrate your workloads. By using Kubernetes to orchestrate your containers, you don’t have to worry about the underlying infrastructure and the deployment logic.

By implementing an automated artifact production and deployment process, such as a continuous integration and continuous delivery (CI/CD) pipeline, you can automate the creation and deployment of artifacts. You can fully automate this process, and you can even insert manual approval steps, if needed.

Just as you can automate the deployment process by implementing a CI/CD pipeline, you can adopt a similar process for your infrastructure. By defining your infrastructure as code, you can automatically provision all the necessary resources to run your workloads. With this type of process, you make your infrastructure more observable and repeatable. You could also apply a test-driven development approach to your infrastructure. On the other hand, you need to invest time and effort to implement an infrastructure as code process, so take this into account when planning your migration.

Tools like Terraform and Deployment Manager can help you implement Infrastructure as Code on Google Cloud.
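Because Terraform also accepts JSON-formatted configuration files, one lightweight way to codify infrastructure is to generate that JSON programmatically. The sketch below declares a custom-mode VPC network; the resource type and attribute are real Terraform names, while the network name and generation approach are illustrative choices.

```python
import json

def vpc_network(name: str) -> dict:
    """Terraform-JSON fragment declaring a custom-mode VPC network."""
    return {
        "resource": {
            "google_compute_network": {
                name: {
                    "name": name,
                    "auto_create_subnetworks": False,
                }
            }
        }
    }

# Writing `rendered` to a file such as main.tf.json makes it consumable
# by the usual terraform init/plan/apply workflow.
rendered = json.dumps(vpc_network("migration-vpc"), indent=2)
```

Whether you generate configuration or write HCL by hand, keeping it in version control is what makes the infrastructure observable, reviewable, and repeatable.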

After deploying your workloads, you can start optimizing your target environment. In this optimization phase, the following cross-area activities can help you optimize this environment:

  • Build and train your team.
  • Monitor everything.
  • Automate everything.
  • Codify everything.
  • Use managed services instead of self-managed ones.
  • Optimize for performance and scalability.
  • Reduce costs.

When you plan your migration, you can train your development and operations teams to take full advantage of the new cloud environment. With effective training, those teams not only become more efficient, but can also choose the best cloud-native tools and services for the job. Training opportunities also help retain technical talent and empower engineers to leverage all of the advantages of Google Cloud.

During this phase, you can also review the business processes that govern those teams. If you find any inefficiency or unnecessary burden in those processes, you can refine and improve them through training.

Monitoring is key to ensuring that everything in your environment works as expected, and to improving your environments, practices, and processes.

Before you expose your environment to production traffic, we recommend that you design and implement a monitoring system where you define metrics that are important to assess the correct operation of the environment and its components, including your workloads. For example, if you are deploying a containerized infrastructure, you can implement a white-box monitoring system with Prometheus. Or, you can monitor your IoT Core devices with Cloud Logging and Cloud Functions.
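As a concrete starting point, a minimal Prometheus scrape configuration (the job name and target address are placeholders) collects white-box metrics from a workload that exposes a `/metrics` endpoint:

```shell
# Minimal, illustrative Prometheus configuration: scrape the workload's
# /metrics endpoint every 15 seconds.
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s

scrape_configs:
- job_name: 'my-app'
  static_configs:
  - targets: ['my-app.default.svc.cluster.local:8080']
EOF
```

From these raw metrics you can then define the higher-level indicators (latency, error rate, saturation) that tell you whether the environment is operating correctly.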

We also recommend that you set up an alerting system, such as Cloud Monitoring alerting, that lets you be proactive rather than merely reactive. Set up alerts for critical errors and conditions, but also set up warnings that give you time to correct a potentially disruptive situation before it affects your users.
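As an illustrative sketch (the policy contents, threshold, and duration are assumptions, not prescribed values), a Cloud Monitoring alert policy that warns when an instance's CPU stays high can be created from a JSON definition:

```shell
# A hypothetical alert policy: warn when a Compute Engine instance's CPU
# utilization stays above 80% for five minutes.
cat > cpu-policy.json <<'EOF'
{
  "displayName": "High CPU utilization",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "CPU above 80% for 5 minutes",
      "conditionThreshold": {
        "filter": "metric.type=\"compute.googleapis.com/instance/cpu/utilization\" resource.type=\"gce_instance\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0.8,
        "duration": "300s"
      }
    }
  ]
}
EOF

# Create the policy in Cloud Monitoring (this command requires the
# gcloud alpha component):
#   gcloud alpha monitoring policies create --policy-from-file=cpu-policy.json
```

Attaching a notification channel to the policy turns the threshold breach into a page or email before users are affected.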

Because Cloud Logging logs have a limited retention period, you can export Cloud Monitoring metrics and logs to long-term storage. You can also run data analytics against the metrics extracted from those logs to gain insight into how your environment is performing and to start planning improvements.
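For example (sink, project, dataset, and filter names below are placeholders), a Cloud Logging sink can route matching log entries into a BigQuery dataset for long-term retention and analysis:

```shell
# Route warning-and-above logs from Compute Engine instances into a
# BigQuery dataset for long-term retention (all names are illustrative).
gcloud logging sinks create my-bq-sink \
    bigquery.googleapis.com/projects/my-project/datasets/logs_archive \
    --log-filter='resource.type="gce_instance" severity>=WARNING'

# The command prints a service account for the sink; grant it the
# BigQuery Data Editor role on the dataset so the export can write.
```

Once the sink is writing, the exported entries can be queried in BigQuery well past the Cloud Logging retention window.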

Manual operations carry a high risk of error and are time consuming. In most cases, you can automate critical activities such as deployments, secret exchanges, and configuration updates. Automation saves cost and time and reduces risk. Teams also become more efficient, because they don't have to spend effort on repetitive tasks. Automating infrastructure with Cloud Composer and Automating Canary Analysis on Google Kubernetes Engine with Spinnaker are examples of automation on Google Cloud.

When provisioning the target environment on Google Cloud, you should aim to capture as many aspects as you can in code. By implementing processes such as Infrastructure as Code and Policy as Code, you can make your environment fully auditable and repeatable. You can also apply a test-driven development approach to aspects other than code, to have immediate feedback on the modifications you intend to apply to your environment.

Google Cloud has a portfolio of services and products that you can use without having to manage any underlying servers or infrastructure. In the optimization phase, you could either expand your workloads to use such services, or replace some of your existing workloads with these services.

A few examples of managed services are as follows:

  • Using Cloud SQL for MySQL instead of managing your own MySQL cluster.
  • Using AutoML to tag and classify images instead of deploying and maintaining your own machine learning models.
  • Deploying your workloads on GKE instead of using your own self-managed Kubernetes cluster, or even migrating your VMs to containers and running them on GKE.
  • Using App Engine for serverless web hosting.

One of the advantages of migrating to the cloud is access to resources. You can bolster existing resources, add more when you need them, and also remove unneeded resources in a scalable way.

Compared to on-premises deployments, you have more options to optimize performance, such as resizing instances to match their actual load and autoscaling resources on demand.

Google Cloud offers a wide range of tools and pricing options to help you reduce your costs.

For example, if you provisioned Compute Engine instances, you can apply sizing recommendations for those instances.
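The Recommender service surfaces these rightsizing suggestions; as a sketch (project and zone are placeholders), you can list them from the command line:

```shell
# List machine-type (rightsizing) recommendations for Compute Engine
# instances in one zone (project and zone are illustrative).
gcloud recommender recommendations list \
    --project=my-project \
    --location=us-central1-a \
    --recommender=google.compute.instance.MachineTypeRecommender
```

Each recommendation describes a suggested machine type along with the estimated cost impact, so you can decide which instances to resize.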

To reduce your billing, you can analyze your billing reports to study your spending trends and determine which Google Cloud products you are using most frequently. You can even export your billing data to BigQuery or to a file to analyze.
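If you have enabled billing export to BigQuery, a short query can summarize spend by service. The dataset and export table names below are placeholders (the export table name includes your billing account ID):

```shell
# Summarize this month's spend by Google Cloud service, assuming billing
# export to BigQuery is enabled (dataset and table names are placeholders).
bq query --use_legacy_sql=false '
SELECT
  service.description AS service,
  ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
WHERE invoice.month = FORMAT_DATE("%Y%m", CURRENT_DATE())
GROUP BY service
ORDER BY total_cost DESC'
```

Sorting by total cost makes it easy to see which products dominate your bill and where optimization effort pays off first.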

To further reduce your costs, Google Cloud offers features such as sustained use discounts, which automatically discount your Compute Engine billing. You can also purchase committed use contracts in return for discounted prices on Compute Engine instances. For BigQuery, you can enroll in flat-rate pricing. Google Cloud autoscaling features also help reduce your bill by scaling resources down when demand drops. You can reduce monitoring and logging costs by optimizing your usage of Cloud Monitoring and Cloud Logging.

Google Cloud offers various options and resources for you to find the necessary help and support to best leverage Google Cloud services.

If you don't need dedicated support, you can use Google Cloud's self-service resources, such as the product documentation and community forums.

As a Google Cloud™ Managed Service Provider, Cloud Ace has been providing one-stop services such as cloud implementation support, operational design, and post-implementation system maintenance to meet the needs of our customers.




Copyright © 2021 Cloud Ace, Inc. All rights reserved.
