
Showing posts with label DBA. Show all posts

Sunday, September 29, 2024

Connecting to an Oracle MySQL Database and Invoking a Stored Procedure using Python

In this entry, we will explore how to use Python to connect to an Oracle MySQL database and invoke a stored procedure. This is particularly useful when you're looking to interact with your database programmatically—whether it's for inserting data, querying, or managing business logic through stored procedures.

We will walk through a Python script that connects to the database, generates some random test data (in this case, telemetry data for school buses), and invokes a stored procedure to insert the data into multiple tables. Below is the full Python code; we'll break down each part in detail to help you understand how it works.

This assumes that you have Oracle MySQL running, a stored procedure to call, and the proper credentials to access your database. You can modify this script to use a stored procedure of your own!

Here's the full Python script that connects to the MySQL database, generates random test data, and calls the stored procedure InsertEvents.

Script:

import mysql.connector
from mysql.connector import errorcode
import random
import datetime

# Database connection parameters
config = {
    'user': 'yourusername',
    'password': 'yourpassword',
    'host': 'ip address of vm running mysql',
    'port': 3306,  # replace with the port you are using
    'database': 'name of your mysql database',
}

# Function to generate random test data
def generate_test_data():
    asset_id = random.randint(1000, 9999)
    happened_at_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    latitude = round(random.uniform(-90, 90), 6)
    longitude = round(random.uniform(-180, 180), 6)
    heading_degrees = random.randint(0, 359)  # 0-359; 360 would duplicate 0 (north)
    accuracy_meters = round(random.uniform(0.5, 20.0), 2)
    geofence = 'TestGeofence' + str(random.randint(1, 100))
    gps_speed = round(random.uniform(0, 30), 2)
    ecu_speed = round(random.uniform(0, 30), 2)

    return (asset_id, happened_at_time, latitude, longitude, heading_degrees,
            accuracy_meters, geofence, gps_speed, ecu_speed)

# Function to insert event data by calling the stored procedure InsertEvents
def insert_event_data(cursor, data):
    try:
        cursor.callproc('InsertEvents', data)
        print(f"Successfully inserted event for asset_id {data[0]}")
    except mysql.connector.Error as err:
        print(f"Error: {err}")
        return False
    return True

# Main function to connect to the database and insert events
def main():
    try:
        # Connect to the database
        cnx = mysql.connector.connect(**config)
        cursor = cnx.cursor()

        # Ask the user how many events to generate
        num_events = int(input("Enter the number of events to generate: "))

        # Generate and insert the specified number of test events
        for _ in range(num_events):
            test_data = generate_test_data()
            if insert_event_data(cursor, test_data):
                cnx.commit()
            else:
                cnx.rollback()

        cursor.close()
        cnx.close()

    except mysql.connector.Error as err:
        if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
            print("Something is wrong with your username and/or password")
        elif err.errno == errorcode.ER_BAD_DB_ERROR:
            print("Database does not exist or is not reachable")
        else:
            print(err)

if __name__ == "__main__":
    main()

Inserting data using the stored procedure:

The insert_event_data() function calls the stored procedure InsertEvents and passes in the data.

cursor.callproc('InsertEvents', data): This method invokes the stored procedure InsertEvents with the data tuple as input parameters.

If the procedure is successful, a success message is printed; otherwise, an error message is displayed.

Connecting to the MySQL Database:

The main() function handles the database connection and inserts a user-specified number of events.

The mysql.connector.connect(**config) establishes a connection to the MySQL database using the provided configuration.
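The double asterisk in connect(**config) simply expands the dictionary's keys into keyword arguments. Here is a minimal, standalone sketch of that mechanism using a plain function (the function body and sample values are invented for illustration, so this runs without a database):

```python
# A stand-in for mysql.connector.connect(), just to illustrate ** unpacking;
# the return value is invented for this sketch
def connect(user, password, host, port, database):
    return f"{user}@{host}:{port}/{database}"

config = {
    'user': 'yourusername',
    'password': 'yourpassword',
    'host': '203.0.113.10',
    'port': 3306,
    'database': 'fleet',
}

# connect(**config) expands the dict into keyword arguments, i.e. it is
# equivalent to connect(user='yourusername', password='yourpassword', ...)
dsn = connect(**config)
print(dsn)  # yourusername@203.0.113.10:3306/fleet
```

This is why the keys in config must match the parameter names the connector expects.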

The script asks the user to input the number of events they wish to generate (this will prompt the user in the console).

It then generates the random data and inserts it by invoking the stored procedure. The transactions are committed or rolled back based on whether the insertion was successful.
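The commit-or-rollback pattern is not specific to MySQL. Here is a runnable sketch of the same flow using Python's built-in sqlite3 module, so you can see it work without a MySQL server; the events table and its columns are invented for illustration:

```python
import sqlite3

# In-memory database so the sketch runs anywhere
cnx = sqlite3.connect(':memory:')
cursor = cnx.cursor()
cursor.execute('CREATE TABLE events (asset_id INTEGER NOT NULL, geofence TEXT)')

def insert_event(cursor, data):
    # Mirrors insert_event_data(): True on success, False on error
    try:
        cursor.execute('INSERT INTO events (asset_id, geofence) VALUES (?, ?)', data)
        return True
    except sqlite3.Error as err:
        print(f"Error: {err}")
        return False

# The second row violates the NOT NULL constraint on purpose
for data in [(1001, 'TestGeofence1'), (None, 'TestGeofence2')]:
    if insert_event(cursor, data):
        cnx.commit()    # keep the successful insert
    else:
        cnx.rollback()  # discard the failed one

cursor.execute('SELECT COUNT(*) FROM events')
count = cursor.fetchone()[0]
print(count)  # 1 -- only the valid row survives
cnx.close()
```

The structure is identical to the main loop in the script above: attempt the insert, commit if it succeeded, roll back if it failed.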

To run the script:

Make sure your database connection parameters are correct.

Ensure that the stored procedure (InsertEvents in this example) is defined in your MySQL database.

Run the Python script, which will prompt you to enter the number of events to generate and insert them into the database.
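The post does not include the body of InsertEvents itself, since yours will differ. Purely as a hypothetical sketch (the table names and column types here are invented), a procedure matching the nine parameters generated above, inserting into multiple tables, might look like:

```sql
DELIMITER //
CREATE PROCEDURE InsertEvents(
    IN p_asset_id INT,
    IN p_happened_at_time DATETIME,
    IN p_latitude DECIMAL(9,6),
    IN p_longitude DECIMAL(9,6),
    IN p_heading_degrees INT,
    IN p_accuracy_meters DECIMAL(6,2),
    IN p_geofence VARCHAR(100),
    IN p_gps_speed DECIMAL(6,2),
    IN p_ecu_speed DECIMAL(6,2)
)
BEGIN
    -- Hypothetical target tables; substitute your own schema
    INSERT INTO events (asset_id, happened_at_time, latitude, longitude)
    VALUES (p_asset_id, p_happened_at_time, p_latitude, p_longitude);

    INSERT INTO event_details (asset_id, heading_degrees, accuracy_meters,
                               geofence, gps_speed, ecu_speed)
    VALUES (p_asset_id, p_heading_degrees, p_accuracy_meters,
            p_geofence, p_gps_speed, p_ecu_speed);
END //
DELIMITER ;
```

The parameter order must match the tuple returned by generate_test_data(), since callproc passes them positionally.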

Wrapping up:

This script demonstrates a simple yet powerful way to connect to an Oracle MySQL database using Python and invoke a stored procedure. With a little adjustment, you can use it for various operations, such as fetching data, updating records, or automating tasks. By leveraging Python and MySQL Connector, you can efficiently manage your data and workflows, at no cost! Remember that you can create an Always Free account in Oracle Cloud Infrastructure (OCI) and provision an Always Free Compute instance (I recommend Linux), then run MySQL and Python there, all free!

Monday, March 18, 2024

Oracle Fusion Cloud - BIP Performance Tuning Tips and Documentation

During the early days of Oracle Cloud adoption for SaaS technologies like ERP and HCM, it was quite common to develop extracts and reports using complex custom SQL data models that would either be downloaded by users or scheduled to extract data and interface it to external systems. Over time, Oracle has released guidelines and best practices to follow, and efforts like the SQL guardrails have emerged to prevent poorly performing custom SQL from impacting environment performance and stability. To that end, I have been aggregating useful links to documentation on this topic from our interactions with Oracle Support over the past few months, which are consolidated in this post.

Links to Documentation:

For scheduled reports, Oracle recommends the following guidelines:

  • Having a temporary backlog (wait queue) is expected behavior, as long as the backlog gets cleared over the next 24 hours.
  • If you expect jobs to be picked up immediately, submit them as ‘online’ and wait, as long as they do not hit the 500-second limit.
  • If any jobs need to be processed with higher priority than the rest, mark those reports as ‘critical’ so that they are picked up by the first available thread.
  • Oracle advises customers to tune their custom reports so that they complete faster and do not hold threads for a long time.
  • Oracle advises customers to schedule less impactful jobs during off-peak or weekend hours, managing scheduler resources wisely.
Additionally, note the following:
  • With Release 13, all configuration values, including BI Publisher memory guard settings, are preset based on your earlier pod sizing request and cannot be changed.
  • For memory guard, the Oracle SaaS performance team has calculated and set the largest values that still provide a robust and stable reporting environment for all users to meet business requirements.
  • The BI Service must support many concurrent users, and these settings act as guardrails so that an individual report cannot disrupt your entire service and impact the business.
Ultimately, effective instance management is critical for ensuring that your Cloud HCM system runs smoothly and effectively. Allocating resources based on usage and demand will require coordination with various teams. There is a common misunderstanding that each HCM tool, such as HDL, HCM Extracts, or manual ESS job submissions, operates on its own pool of threads. In reality, they all share the same ESS pool of threads. It is therefore advisable for customers to properly maintain and optimize their runbook to avoid overburdening the system and creating resource constraints.

Lastly, depending on the size of your pods, you have the option to allocate pods for specific tasks. For example:
  • Bulk loading / performance testing / payroll parallel runs: the pod with the highest thread count is a good candidate for bulk data loading, payroll parallel runs, and similar resource-intensive tasks such as performance testing.
The graphic below shows how ESS threads are consumed, illustrating the points made above:



Tuesday, January 8, 2019

E-Business Suite Support Analyzer Bundle Menu Tool and Automatically Updating Analyzers

If you support Oracle EBS R12 then you are familiar with the Oracle Analyzers. They are essentially reports that were designed by Oracle Support to assist with diagnosing issues by providing detailed analysis and even some potential solutions, like applying a relevant generic data fix or suggesting a patch. These analyzers can also speed up your response time when creating an SR by providing the outputs for the relevant analyzers in a product family when you initiate the SR.

The issue with the Analyzers is that there are a lot of them, and maintaining them can become difficult. Analyzers are essentially scripts that can be run manually by your DBA or set up to run as concurrent programs (not all Analyzers allow this). It is time consuming to install analyzers, configure them to run as concurrent programs, and then keep them up to date, as Oracle releases new versions of the packages behind these Analyzers rather often.

Here is where the Bundle Analyzer tool comes into play. It makes it easy to install all the available Analyzers from a menu that can be easily navigated by a System Administrator or DBA. You can also quickly load and register analyzers as concurrent programs, and uninstall them when needed. In short, the tool provides a simple interface that makes these tasks straightforward.

You also don't need to worry about data integrity, per Oracle, "Application data is not altered in any way when the Analyzer Bundle Menu is installed or when any Support Analyzer is run."

What this allows you to do, as a support team, is quickly spin up the analyzers you want to use and update them on a regular basis, perhaps when you do your releases, or anytime you wish as time allows. The tool lets you bulk load all analyzers in a given EBS product family, or even all analyzers in all families, as well as individual analyzers if you don't want unnecessary capabilities complicating matters.

The installation steps are very simple for any DBA or System Administrator, and all the steps are outlined in the below note provided by Oracle:

E-Business Suite Support Analyzer Bundle Menu Tool (Doc ID 1939637.1)

Now, while the Bundle tool greatly simplifies this effort of obtaining analyzers and updating them, there’s still manual work involved, and many of us have asked whether there's some way to automate this further.

In response, Oracle has released a new feature that allows concurrent programs to be scheduled to update analyzers using the bundle tool. This lets you automate updating your existing analyzers on a regular basis without human intervention.

All the details regarding this valuable enhancement to the Analyzer Bundle tool can be found in the note below, which covers the two concurrent programs involved in the Auto Update feature and how it works.

E-Business Suite Support Analyzer Bundle AutoUpdate Concurrent Program (Doc ID 2377353.1)

This is an extremely powerful tool that can truly enhance your ability to support your customers proactively, and I urge everyone to explore it.

Regards,

Julio