This Single-Line Command Reduced Our Database Object Management Efforts

Managing SQL tasks manually can be tedious, so we developed a custom command to simplify these tasks. By integrating this tool into our workflow, we have significantly improved our productivity and minimized the risk of human error in SQL task management.


Managing Many Triggers: The Hidden Chaos in Complex Deployments

As the volume of SQL components like triggers, functions, and materialized views grows, scattered or unstructured storage complicates change tracking, leading to deployment errors and inconsistencies.

The Pain of Handling SQL Manually

  • Inconsistency: Deploying a SQL trigger manually across multiple environments risks errors and inconsistencies. Missing a step or making a mistake can lead to conflicting behavior and unexpected application issues.

  • Error-prone: Executing complex SQL scripts manually increases the risk of typos or missed parameters. This can result in deployment failures, broken functionality, or data loss.

Without a centralized solution, manual updates become inefficient and risky, making it difficult to keep all environments synchronized and reliable.

Losing the Trigger SQL: A Reconstruction Challenge

Losing the original SQL scripts can be a major setback: you then face the task of reconstructing the exact logic and syntax.

While it is possible to recover them in several ways, for example by querying PostgreSQL's system catalogs:

SELECT tgname AS trigger_name, 
       tgenabled AS enabled, 
       pg_get_triggerdef(t.oid) AS definition
FROM pg_trigger t
JOIN pg_class c ON c.oid = t.tgrelid
WHERE c.relname = 'attendance_data2';

But the process is time-consuming and distracting; it not only slows down progress but also redirects valuable effort away from core development tasks.

This is especially costly in a fast-paced environment where speed often takes precedence.

Version Control Woes: The Struggle to Revert and Recover

Without a centralized version control system for SQL scripts, tracking modifications becomes a challenge. It is difficult to identify:

  • Who made changes
  • When changes were made
  • Why changes were made

Introducing Sync Triggers: A Streamlined Approach

To make SQL management easier, we use a custom management command built on Django's manage.py command-line interface. It offers a powerful way to simplify SQL component deployment and maintenance.

By consolidating all SQL scripts into a single command, we gain centralized control over SQL logic.
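A minimal sketch of such a command, assuming a Django app named ic with the command file at ic/management/commands/synctriggers.py (the exact app name is our layout; the management/commands path is what Django requires). The pieces are filled in throughout the rest of this post:

    from django.core.management.base import BaseCommand


    class Command(BaseCommand):
        help = "Syncs all SQL trigger and function scripts to a database"

        def add_arguments(self, parser):
            ...  # command-line flags are defined here (shown below)

        def handle(self, *args, **options):
            ...  # SQL files are discovered and executed here (shown below)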

We also use PL/Python (enabled in the database with CREATE EXTENSION plpython3u), a procedural language extension for PostgreSQL that allows us to write database functions in Python.

What Sync Triggers Does: Making Your Tasks Easier

  • Automates SQL Deployment: Sync Triggers automates the deployment process, ensuring all scripts are applied correctly to the target database.

    Dependencies can be managed in a more organized way, ensuring they are applied correctly to all objects.

  • Centralized Repository: Encourages storing all SQL scripts in a single, designated directory. This central repository makes it easier to locate, manage, and version control your SQL logic.

    This organization also helps prevent accidental deletions or modifications and improves readability and understanding.

    Developers can quickly find the specific SQL component they need, saving time and effort.

  • Version Control Integration:

    Since the files are centralized in one directory, they can be tracked by Git or another version control system, meaning any changes made to the SQL scripts are automatically recorded.

    This also provides a rollback mechanism: resetting the repository to an earlier state recovers from unintended changes.

  • Language Flexibility:

    The PL/Python extension lets us write PostgreSQL functions in plain Python. Because the language is familiar, writing logic becomes easier, and Python's rich libraries can be integrated directly.

    This is well suited for tasks that go beyond basic database operations, such as interacting with external systems or using advanced data structures.

Benefits: Minimized Obstacles and Enhanced Efficiency

Improved Consistency
Ensures uniform application of SQL scripts and their dependencies across different environments.

Streamlined Workflows
Simplifies the deployment process, reducing the chances of delays or bottlenecks.

Conflict Resolution
Each script starts by removing or replacing any existing object (DROP ... IF EXISTS for triggers, CREATE OR REPLACE for functions), preventing conflicts and ensuring the new definition can be created without interference.

Simplified Iteration and Updates
Making updates or iterating on SQL logic is straightforward.

How You Can Implement It: Straightforward SQL Execution

    from django.db import connections

    def sync_trigger(trigger_file, database_name):
        with open(trigger_file) as f:
            trigger_sql = f.read()
        # Run the script on the selected database connection
        with connections[database_name].cursor() as cursor:
            cursor.execute(trigger_sql)

This sync_trigger helper reads the SQL file and executes its content on the specified database.


    def add_arguments(self, parser):
        parser.add_argument(
            "--tdb",
            action="store_true",
            help="Applies synctriggers to t-DB",
        )
        parser.add_argument(
            "--pdb",
            action="store_true",
            help="Applies synctriggers to p-DB",
        )

The add_arguments method defines the command-line flags that the custom command accepts. Each flag selects which database the command should act on, as shown in the invocation below.
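For example, to target the production database:

python3 manage.py synctriggers --pdb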

    def handle(self, *args, **options):
        database_name = "default"
        if options["tdb"]:
            database_name = "default"
        if options["pdb"]:
            database_name = "prod"

The handle method performs the actual work of applying the triggers. It first selects the database associated with the particular flag that was passed.

        triggers = sorted(
            filter(os.path.isfile, Path("ic/triggers").glob("*.sql"))
        )
        for trigger in triggers:
            print("Executing ", trigger)
            sync_trigger(trigger, database_name)

This specifies the directory in which the SQL files are located, then iterates over each file and applies it to the selected database.

Because the files are sorted by name, execution order is deterministic; a naming convention (for example, numeric prefixes) can ensure functions are created before the triggers that depend on them.

End Goal: Key Functions And Outcomes

python3 manage.py synctriggers --tdb

When the custom management command is run:

  1. manage.py sets up the Django environment and dispatches to the synctriggers command.
  2. synctriggers.py checks which flag was provided and selects the corresponding database.
  3. The script searches the specified folder for SQL trigger files.
  4. For each trigger file, it reads the SQL code and executes it.

Understanding manage.py: Creating Custom Commands

import os
import sys


def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'karma_models.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()

  • This sets up the Django environment by defining the DJANGO_SETTINGS_MODULE variable to the project's settings module, karma_models.settings

  • Imports execute_from_command_line from django.core.management module, which processes the arguments passed in sys.argv

  • Calls execute_from_command_line(sys.argv) to parse and run the appropriate management command, in our case synctriggers.

Folder Structure: Centralized Management of SQL Functions

[Image: folder structure with SQL trigger and function files grouped in a central directory]

This structure demonstrates centralized management of database triggers and functions.
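As a sketch of that layout (the individual file names here are illustrative; the ic/triggers path is the one the command scans):

    ic/
    ├── management/
    │   └── commands/
    │       └── synctriggers.py
    └── triggers/
        ├── update_target_table_on_check_in.sql
        └── ...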

An Example File: Applying the Concept

CREATE OR REPLACE FUNCTION update_target_table_on_check_in()
RETURNS trigger
LANGUAGE plpython3u
AS
$$
    # In a PL/Python trigger, the new row arrives in the TD dictionary
    user_id = TD["new"]["userid"]
    time_of_attendance = TD["new"]["time_of_attendance"]

    insert_query = """
    INSERT INTO target_table (
        user_id,
        attendance_date,
        check_in_count,
        created_at,
        updated_at
    )
    VALUES ($1, $2, $3, $4, $5)
    """
    current_timestamp = plpy.execute("SELECT CURRENT_TIMESTAMP")[0]["current_timestamp"]

    # A prepared plan with typed parameters quotes values safely;
    # PostgreSQL casts the timestamp to DATE for attendance_date
    plan = plpy.prepare(
        insert_query,
        ["text", "date", "integer", "timestamptz", "timestamptz"],
    )
    plpy.execute(plan, [
        user_id,
        time_of_attendance,
        1,
        current_timestamp,
        current_timestamp,
    ])

    return None
$$;

DROP TRIGGER IF EXISTS trigger_update_target_table ON primary_table;

CREATE TRIGGER trigger_update_target_table
AFTER INSERT ON primary_table
FOR EACH ROW
EXECUTE FUNCTION update_target_table_on_check_in();

target_table:
[Image: target_table schema]

primary_table:
[Image: primary_table schema]

This is a PL/Python function that inserts a new record into the target_table table whenever there is a new entry in the primary_table table.

The DROP TRIGGER IF EXISTS statement removes the trigger from the table if it already exists, preventing conflicts when creating a new trigger with the same name.

The CREATE TRIGGER statement defines a trigger that runs after a specific event occurs on the specified table, and the EXECUTE FUNCTION clause names the trigger function to call. Note that a PL/Python trigger function reads the new row from the TD dictionary rather than taking explicit arguments.
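To sanity-check the trigger after syncing, you can insert a row and read back the result. A quick sketch, assuming the two tables exist with the columns used above:

    from django.db import connections

    with connections["default"].cursor() as cursor:
        # Inserting into primary_table should fire the trigger
        cursor.execute(
            "INSERT INTO primary_table (userid, time_of_attendance) "
            "VALUES (%s, CURRENT_TIMESTAMP)",
            ["user-123"],
        )
        # The trigger should have written a matching row to target_table
        cursor.execute(
            "SELECT user_id, attendance_date, check_in_count FROM target_table"
        )
        print(cursor.fetchall())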

Enhancing SQL Management Efficiency

With this approach, we have moved away from the tedious, error-prone process of managing SQL scripts by hand.

By centralizing triggers and functions, integrating version control, and automating deployments, we’ve achieved consistency across environments.

This feature can be further customized based on needs, such as adding Automated Rollbacks, Logging and Monitoring, and a Dry Run Mode (preview simulation).
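As an illustration of the dry-run idea (a hypothetical extension, not part of the current command), a third flag could list the files instead of executing them:

    parser.add_argument(
        "--dry-run",
        action="store_true",
        help="List the SQL files that would be applied, without executing them",
    )

    # ...and in handle(), skip execution when the flag is set:
    for trigger in triggers:
        if options["dry_run"]:
            print("Would execute", trigger)
        else:
            sync_trigger(trigger, database_name)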
