Updating a table with millions of records

Asked: November 10, 2002 - pm UTC
Answered by: Tom Kyte - Last updated: May 10, 2019 - am UTC
Category: Developer - Version: 8.1.7
Viewed 100K times

Murali asked:

I want to update and commit every so many records (say 10,000 records). I don't want to do it in one stroke, as I may end up with rollback segment issue(s).
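For context, the sort of commit-every-N loop the question has in mind might look like the sketch below; big_table, needs_update, and some_col are hypothetical names, not from the thread.

    declare
        cursor c is
            select rowid rid
              from big_table
             where needs_update = 'Y';
        l_done number := 0;
    begin
        for r in c loop
            update big_table
               set some_col = 'new value'
             where rowid = r.rid;
            l_done := l_done + 1;
            if mod( l_done, 10000 ) = 0 then
                commit;  -- committing while the driving cursor is still open risks ORA-01555
            end if;
        end loop;
        commit;
    end;

This is exactly the pattern the answer below argues against.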

and Tom said:

If you are going to update the majority of the rows, don't update them in place -- rebuild the table instead:

    create table new_table as select <do the "update" here> from old_table;
    index new_table
    grant on new_table
    add constraints on new_table
    etc on new_table
    drop table old_table
    rename new_table to old_table;

You can do that using parallel query, with nologging on most operations, generating very little redo and no undo at all -- in a fraction of the time it would take to update the data. I don't have a 100-million-row table to test with for you, but the amount of work required to update 1,000,000 indexed rows is pretty large. Fortunately, you are probably using partitioning, so you can do this easily in parallel -- bit by bit.
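Fleshed out, that outline might look like the following sketch. All object and column names here are hypothetical, and the DECODE stands in for whatever transformation the "update" actually performs.

    create table new_table
        parallel
        nologging
    as
    select pk_id,
           decode( status, 'OLD', 'ARCHIVED', status ) status,  -- the "update" happens in the select
           created_dt
      from old_table;

    create index new_table_idx on new_table( pk_id ) parallel nologging;

    -- re-create grants and constraints on new_table, for example:
    -- grant select on new_table to app_user;
    -- alter table new_table add constraint new_table_pk primary key ( pk_id );

    drop table old_table;
    rename new_table to old_table;

Because the direct-path create and index builds bypass undo (and, with NOLOGGING, most redo), the rollback segment concern from the question disappears entirely.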

A reader followed up:

Thanks, Tom. This is absolutely a viable approach, and one we have used repeatedly. One of our apps updates a table of several hundred million records; the for-loop approach to the update was calculated to take 53.7 years to complete! We instituted the insert into a dummy table with append and nologging, and were able to complete the "update" in under 30 minutes. The in-place updates would not have worked with the terabytes of data that we have in our database.
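That variation, sketched with hypothetical names (dummy_table is a pre-created copy of the structure of big_table):

    alter table dummy_table nologging;

    insert /*+ append */ into dummy_table
    select pk_id,
           upper( customer_name ) customer_name  -- the transformed ("updated") column
      from big_table;

    commit;

The APPEND hint requests a direct-path insert above the high-water mark; combined with NOLOGGING it generates no undo and very little redo for the table data, which is what makes the 30-minute figure plausible.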

Another reader described a related scenario. New and old values for the ID are stored in a lookup table, and there are about half a million of them; the original approach was to replace the PK in all tables for one ID value, commit the changes, and move on to the next ID. In response to the Jack Silvey (from Richardson, TX) review, where he wrote "It is the only way to fly. We also have an absolutely incredible stored procedure that rebuilds all of our indexes concurrently after the load, using the Oracle job scheduler as the mechanism of allowing separate threads in pl/sql": could you provide more information about that procedure, and how to rebuild multiple same-table indexes concurrently using the Oracle job scheduler?

Followup, November 19, 2002 - pm UTC:

Instead of

    begin
        execute immediate 'alter index idx1 rebuild';
        execute immediate 'alter index idx2 rebuild';
    end;

you can code:

    declare
        l_job number;
    begin
        dbms_job.submit( l_job, 'execute immediate ''alter index idx1 rebuild'';' );
        commit;
        dbms_job.submit( l_job, 'execute immediate ''alter index idx2 rebuild'';' );
        commit;
    end;

Now, just set job_queue_processes to the number of rebuilds you want running at the same time, and the jobs will run concurrently in separate sessions.
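A note on that last step, as an assumption rather than something stated in the thread: job_queue_processes can typically be set dynamically, for example

    alter system set job_queue_processes = 5;

and on 8i the job_queue_interval init.ora parameter additionally controls how often the job queue is scanned.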

The reader later reported: I recorded the time right after the commit statement at the end of the PL/SQL block -- that's the start time. Then I kept querying the user_jobs view every 2-3 seconds, until the last of the 5 jobs was gone. The last question on this topic: is user_jobs the right view to look at in order to determine that the rebuilding is done, and how long it took?
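A minimal sketch of that polling loop, assuming the jobs were submitted with a NULL interval (so each one disappears from user_jobs after it completes successfully) and that execute on DBMS_LOCK has been granted:

    declare
        l_remaining number;
    begin
        loop
            select count(*) into l_remaining from user_jobs;
            exit when l_remaining = 0;
            dbms_lock.sleep( 3 );  -- wait 3 seconds between polls
        end loop;
    end;

If other jobs belong to the same schema, the count would need a filter on the WHAT column to single out the rebuild jobs.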
