This page provides you with instructions on how to extract data from Google Cloud SQL and load it into Redshift. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Google Cloud SQL?
Google Cloud SQL is a managed database service that lets DBAs set up, maintain, and administer MySQL and PostgreSQL databases on Google Cloud Platform.
What is Redshift?
When it was released in 2013, Amazon Redshift was the first cloud data warehouse. It uses defined schemas, columnar data storage, and massively parallel processing (MPP) architecture to provide a base for analytics reporting.
Getting data out of Google Cloud SQL
In most cases, the easiest way to retrieve data from relational databases is by writing SQL queries.
Google also provides a REST API for administering databases, instances, and other objects in Cloud SQL. So, for example, to retrieve a resource containing information about a database inside a Cloud SQL instance for a particular project, you could call the API's databases.get method.
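Sketched as an HTTP request against the v1beta4 Admin API (project, instance, and database are placeholders for your own identifiers):

    GET https://sqladmin.googleapis.com/sql/v1beta4/projects/{project}/instances/{instance}/databases/{database}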
If your underlying database is PostgreSQL, you can use the pg_dump command to export tables and databases as a script you can run to restore them on any Postgres server (for a CSV-format flat file, psql's \copy command does the job). If your underlying database is MySQL, you can use the mysqldump command to export entire tables and databases in a format you specify (e.g., delimited text, CSV, or SQL statements that would restore the database).
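A few illustrative one-liners; the hostnames, user names, and database and table names below are placeholders:

    # PostgreSQL: dump a whole database as a restorable SQL script
    pg_dump "host=<INSTANCE_IP> user=<USER> dbname=<DB>" > backup.sql

    # PostgreSQL: export one table as a CSV flat file via psql
    psql "host=<INSTANCE_IP> user=<USER> dbname=<DB>" -c "\copy orders TO 'orders.csv' WITH CSV HEADER"

    # MySQL: dump a database as SQL statements
    mysqldump --host=<INSTANCE_IP> --user=<USER> -p <DB> > backup.sql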
Sample Google Cloud SQL data
The GET call we mentioned would return a database resource, which contains seven properties. Other API calls return different resources.
For data you export via SQL query, pg_dump, or mysqldump, you need a matching table in your data warehouse to receive the data from Cloud SQL. The information_schema views (surfaced as a database in MySQL and as a schema in PostgreSQL) contain all of the metadata you need to recreate your tables in another environment.
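For example, a query along these lines (the orders table is a hypothetical stand-in for your own) lists the columns and datatypes you'd need to mirror:

    SELECT column_name, data_type, is_nullable
    FROM information_schema.columns
    WHERE table_name = 'orders'
    ORDER BY ordinal_position;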
Preparing Google Cloud SQL data
If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each value in the response, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive them. Google's documentation should tell you what fields are provided by each endpoint, along with their corresponding datatypes.
Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. This means you'll likely have to create additional tables to capture the unpredictable cardinality in each record.
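As a minimal sketch of that pattern, assume a hypothetical record whose tags field is a list: the list gets its own table, keyed back to its parent row so each element becomes one child row.

    -- Parent table for the flat fields of each record
    CREATE TABLE instances (
        id BIGINT PRIMARY KEY,
        name VARCHAR(256)
    );

    -- Child table capturing the list-valued tags field, one row per element
    CREATE TABLE instance_tags (
        instance_id BIGINT REFERENCES instances(id),
        tag VARCHAR(256)
    );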
Loading data into Redshift
Once you know all of the columns you want to insert, use the CREATE TABLE statement in the Redshift data warehouse to set up a table to receive all the data.
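A hedged example, assuming a simple orders table; the column names, DISTKEY, and SORTKEY here are illustrative, and you should pick keys that match your own query patterns:

    CREATE TABLE orders (
        id          BIGINT NOT NULL,
        customer_id BIGINT,
        amount      DECIMAL(12,2),
        created_at  TIMESTAMP,
        updated_at  TIMESTAMP
    )
    DISTKEY (customer_id)
    SORTKEY (created_at);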
Next, migrate your data. It may seem like the easiest course would be to build INSERT statements to add data to your Redshift table row by row. That would be a mistake; Redshift isn't optimized for inserting data one row at a time. If you have a high volume of data to be inserted, a better approach is to copy the data into Amazon S3 and then use the COPY command to load it into Redshift.
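For instance, assuming the exported CSV has landed in S3 (the bucket path and IAM role ARN are placeholders):

    COPY orders
    FROM 's3://my-bucket/cloudsql-exports/orders.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS CSV
    IGNOREHEADER 1;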
Keeping Google Cloud SQL data up to date
At this point you've coded up a script or written a program to get the data you want and successfully moved it into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.
Instead, identify key fields that your script can use to bookmark its progression through the data and to pick up where it left off as it looks for updated data. Timestamp fields such as updated_at or created_at, or an auto-incrementing primary key, work best for this. When you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Google Cloud SQL.
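A sketch of the bookmark query, where :last_seen_updated_at stands for whatever high-water mark your script stored after its previous run:

    SELECT *
    FROM orders
    WHERE updated_at > :last_seen_updated_at
    ORDER BY updated_at;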
And remember, as with any code, once you write it, you have to maintain it. If Google modifies its API, or the API sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to.
Other data warehouse options
Redshift is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Google BigQuery, PostgreSQL, Snowflake, or Microsoft Azure Synapse Analytics (formerly Azure SQL Data Warehouse), which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. Others choose a data lake, like Amazon S3 or Delta Lake on Databricks. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To BigQuery, To Postgres, To Snowflake, To Panoply, To Azure Synapse Analytics, To S3, and To Delta Lake.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from Google Cloud SQL to Redshift automatically. With just a few clicks, Stitch starts extracting your Google Cloud SQL data, structuring it in a way that's optimized for analysis, and inserting that data into your Redshift data warehouse.