
How to Import a CSV into PostgreSQL


Andrew Luo

Andrew is the co-founder & CTO at OneSchema.

May 13, 2024

Performant, reliable, and customizable, Postgres is one of the world’s most relied-on and beloved databases. Today, we’ll look at 5 ways you can import a CSV into Postgres, ranging from one-off imports to programmatic ingestion.

Method #1: COPY

The COPY command is a high-performance option for importing CSVs into PostgreSQL. Unlike individual INSERT, UPDATE, or DELETE statements, each of which carries per-statement parsing, planning, and transaction overhead, COPY loads data in bulk within a single command, significantly speeding up the operation. The server streams rows directly between the file and the table, skipping much of the machinery used for processing regular SQL queries.

Another benefit is that COPY enforces data integrity constraints (such as foreign key and not-null constraints) during the import. A single violating row aborts the entire operation, so a common workaround is to load into a staging table first, as sketched at the end of this section.

Use case: One-off server-side imports for CSV files of any size.

Usage example:

COPY table_name FROM '/path/to/csv_file.csv' DELIMITER ',' CSV HEADER;

Note: the operating system user running the PostgreSQL server must have read access to the file.
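
COPY also accepts options for messier files. As a sketch (the users table, its columns, and the file paths here are hypothetical examples), you can name the target columns explicitly, treat empty strings as NULL, and sidestep the all-or-nothing constraint behavior by landing rows in a staging table first:

-- Load a specific column list, treating empty strings as NULL
COPY users (id, email, signup_date)
FROM '/path/to/users.csv'
WITH (FORMAT csv, HEADER true, DELIMITER ',', NULL '');

-- Staging pattern: LIKE copies column types (and not-null
-- constraints) but not unique or foreign keys, so the raw load
-- succeeds; then move over only the rows that pass your checks.
CREATE TEMP TABLE users_staging (LIKE users INCLUDING DEFAULTS);
COPY users_staging FROM '/path/to/users.csv' WITH (FORMAT csv, HEADER true);
INSERT INTO users
SELECT * FROM users_staging
ON CONFLICT DO NOTHING;  -- skip rows that collide on a unique key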

Method #2: \copy with psql client

\copy is a meta-command of the psql client, essentially a frontend to the COPY command. While COPY is executed on the PostgreSQL server itself, \copy is executed by the client (the user’s machine running psql). It instructs psql to read or write a file on the client side, then sends the data to or receives the data from the PostgreSQL server. Note that the file paths you provide to \copy are relative to the client's file system.

Unlike COPY, the PostgreSQL server itself does not need read access to the file being imported - only the user running \copy does. This makes \copy ideal for importing files that live on your local machine rather than on the Postgres server.

Use case: One-off client-side imports for CSV files of any size.

Example usage:

First, install the psql client and connect to your database. In the CLI, run:

\copy table_name FROM '/path/to/csv_file.csv' DELIMITER ',' CSV HEADER;
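
\copy also runs non-interactively via psql’s -c flag, which is handy in shell scripts. A minimal sketch, assuming placeholder connection details:

psql -h localhost -U your_username -d your_dbname \
  -c "\copy table_name FROM '/path/to/csv_file.csv' DELIMITER ',' CSV HEADER"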

Method #3: pgAdmin GUI

pgAdmin provides a graphical interface to Postgres, including a wizard for importing CSV files. To import your CSV, navigate to the desired table, right-click it, select the import option, and follow the wizard.

Use case: User-friendly one-off CSV importing that doesn't require SQL.

Example usage:

[Screenshot: the pgAdmin 4 import window]

Method #4: Tools like DBeaver or DataGrip

Using third-party tools for importing CSV data into PostgreSQL can greatly simplify the process, especially when dealing with complex datasets or when needing to perform data transformations. 

We’ll break down two popular options - DBeaver and DataGrip.

DBeaver

DBeaver is a free, open-source universal database tool that can connect to a variety of databases, including PostgreSQL. It provides a user-friendly GUI for database management. You can easily import CSV files into your Postgres tables using DBeaver’s import wizard.

The wizard guides you through selecting the CSV file, configuring import settings (like delimiters, encoding), mapping CSV columns to table columns, and handling data types. It allows for some level of data transformation or preprocessing before the data is actually imported. DBeaver can also execute the import process as a batch operation, which can be more efficient than row-by-row processing.

Use case: SQL-less GUI to import CSV files of all sizes, with advanced options for transformations and preprocessing.

Example usage:

  1. Create a CSV database connection
  2. Connect to your target database
  3. Select your source CSV from your CSV connection as the source container
  4. Ensure that the mapping of each of your columns is correct, then complete the wizard and watch DBeaver import your data!

DataGrip

DataGrip is a commercial database IDE from JetBrains that supports PostgreSQL and many other databases. It offers advanced features for database development and administration. CSV imports are handled through a wizard that lets you edit data on the fly, validate data types, and preview the data before finalizing the import.

Use case: DataGrip is particularly beneficial if you require advanced SQL editing and debugging capabilities during the import process.


Method #5: Custom Script

Writing custom scripts to import CSV data offers the highest level of flexibility and control. This method allows for complex data transformations, error handling, and logging. For instance, you can clean the data, fill in missing values, or convert data types before insertion. We’ll look at using Python and the psycopg2 library, as it’s a popular choice for interacting with PostgreSQL databases. 

Use case: Programmatic ingestion of CSVs with high levels of customization.

Example usage:

First, install psycopg2:

pip install psycopg2-binary

Then, run this sample script, filling in your relevant details.

import csv
import psycopg2
from psycopg2 import sql

# Database connection parameters
db_config = {
    'dbname': 'your_dbname',
    'user': 'your_username',
    'password': 'your_password',
    'host': 'localhost'
}

# CSV file path
csv_file_path = 'path_to_your_csv_file.csv'

# PostgreSQL table name
table_name = 'your_table'

conn = None
cursor = None

try:
    # Connect to the database
    conn = psycopg2.connect(**db_config)
    cursor = conn.cursor()

    # Build the INSERT query once; adjust the number of %s
    # placeholders to match your table's column count (3 here)
    query = sql.SQL("INSERT INTO {} VALUES (%s, %s, %s)").format(
        sql.Identifier(table_name))

    # Open the CSV file
    with open(csv_file_path, newline='') as csvfile:
        reader = csv.reader(csvfile)
        next(reader)  # Skip the header row

        # Insert each row; this is also the place to clean or
        # transform values before they reach the database
        for row in reader:
            cursor.execute(query, row)

    # Commit the transaction
    conn.commit()

except Exception as e:
    print(f"An error occurred: {e}")
    if conn:
        conn.rollback()

finally:
    # Close the cursor and connection
    if cursor:
        cursor.close()
    if conn:
        conn.close()

Lastly, execute this script in your Python environment. Make sure that the PostgreSQL server is running and accessible from where you run the script.
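
If you don’t need row-by-row transformations, psycopg2 can also drive the server’s COPY machinery directly, which is typically far faster than looping over INSERT statements. A minimal sketch reusing the placeholder names above (copy_expert streams the file into COPY FROM STDIN):

import psycopg2

conn = psycopg2.connect(dbname='your_dbname', user='your_username',
                        password='your_password', host='localhost')
try:
    # The connection context manager wraps the load in a transaction
    with conn, conn.cursor() as cursor:
        with open('path_to_your_csv_file.csv') as csvfile:
            # Stream the whole file into the table via COPY
            cursor.copy_expert(
                "COPY your_table FROM STDIN WITH (FORMAT csv, HEADER true)",
                csvfile)
finally:
    conn.close()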

Conclusion

Today, we’ve taken a look at the 5 most common ways to import CSVs into PostgreSQL. Examine your use case to choose the best option for you. In general, writing your own Python scripts, while more work, will give you the most customizability and control over how your data is imported. You can pair your script with orchestration tools like Airflow to regularly ingest CSV files, as in the sketch below.
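
As a sketch of that pairing (the csv_ingest DAG id and the my_etl module are hypothetical), a minimal Airflow DAG could schedule a daily load:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

from my_etl import import_csv  # hypothetical wrapper around the script above

with DAG('csv_ingest', start_date=datetime(2024, 1, 1),
         schedule_interval='@daily', catchup=False) as dag:
    PythonOperator(task_id='import_csv', python_callable=import_csv)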

If you’re looking for a comprehensive CSV import solution, consider OneSchema. OneSchema offers a powerful option for CSV parsing and importing. Validate and transform files up to 4GB in under a second with plug-and-play components for vanilla JavaScript, React, Angular and Vue projects. 

OneSchema goes beyond an exceptional developer experience by providing a rich CSV error correction toolset, giving end users an easy way to clean and validate their data in a streamlined workflow.

Check out OneSchema - your users will thank you!
