Friday, 7 October 2011

Upgrading postgres from 8.4 to 9.1 and LOB permission problems

Today I upgraded to the new Ubuntu 11.10, and a new version of PostgreSQL came with it. There were a few minor problems with migrating the database from 8.4 to 9.1.

Migrating isn't as easy as I would like it to be, but it's doable:

  • You'll need the old PostgreSQL 8.4 still up and running
  • From the command line, run "pg_dumpall -h localhost -p 5432 -U postgres -W > pg.backup"
  • Now you can uninstall the old version and configure the new version (9.1) to use port 5432 by editing /etc/postgresql/9.1/main/postgresql.conf
  • Now run "sudo -u postgres psql -f pg.backup postgres" (this will import the backup file); see the condensed command sketch right after this list
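
A minimal sketch of those steps as shell commands, assuming the old 8.4 cluster is still listening on port 5432 and the 9.1 cluster was installed alongside it; the package name and the sed edit are assumptions, so adjust them to your setup:

# 1. dump everything from the old 8.4 cluster while it is still on 5432
pg_dumpall -h localhost -p 5432 -U postgres -W > pg.backup

# 2. remove the old version and point 9.1 at port 5432
sudo apt-get remove postgresql-8.4
sudo sed -i 's/^port = .*/port = 5432/' /etc/postgresql/9.1/main/postgresql.conf
sudo service postgresql restart

# 3. import the backup into the new 9.1 cluster
sudo -u postgres psql -f pg.backup postgres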

This is the point where everything seemed fine, but it wasn't, because one SELECT gave me the error: "ERROR: permission denied for large object xxxxx". To solve this problem, this forum helped me: there I found this helpful little script that changes the owner of large objects:

do $$
declare
    r record;
begin
    -- pg_largeobject holds one row per page, so pick each loid only once
    for r in select distinct loid from pg_catalog.pg_largeobject loop
        -- replace owners_username with the role that should own the objects
        execute 'ALTER LARGE OBJECT ' || r.loid || ' OWNER TO owners_username';
    end loop;
end$$;

Connect to each database as 'postgres' and execute this query. It will change the owner of every large object in that database to the role you specified. By the way, you can list all large objects using these console commands:

sudo -u postgres psql postgres
postgres=# \connect portal4
You are now connected to database "portal4" as user "postgres".
portal4=# \lo_list
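
If you have more than a couple of databases, you can run the ownership fix against each of them in one go. A minimal sketch, assuming the DO block above is saved in a file called fix_lo_owner.sql (the file name is just a placeholder):

# run the ownership fix in every non-template database
for db in $(sudo -u postgres psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
    sudo -u postgres psql -d "$db" -f fix_lo_owner.sql
done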

In order to connect to PostgreSQL using pgAdmin, I had to change the postgres user's password (see http://doginpool.blogspot.com/2009/06/adding-local-postgres-database-to.html)
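
For completeness, setting that password boils down to something like this from a psql session (the password value is obviously just a placeholder):

sudo -u postgres psql postgres
postgres=# ALTER ROLE postgres WITH PASSWORD 'new_password';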

5 comments:

  1. This doesn't really apply to very big production databases, where pg_dump and pg_restore take a lot of time.

  2. Thank you so much for that! So many sins I've brought on myself over the years being clever with postgresql....

  3. Thanks!
    Solved my problem with blobs after moving db from 8.4 to 9.1.

  4. When executing the loop on a table of mine, I get an exception (see below) after 11080 records.

    ERROR: out of shared memory
    HINT: You might need to increase max_locks_per_transaction.
    CONTEXT: SQL statement "ALTER LARGE OBJECT 170641 OWNER TO <>"
    PL/pgSQL function inline_code_block line 7 at EXECUTE statement
    ********** Error **********

    Any ideas on how to fix this?

  5. Late reply, but that error indicates that you have tons of such objects to be modified. Each modification requires a lock, and you've reached the lock limit. Either increase the maximum number of locks (max_locks_per_transaction) or modify the script to commit after every x (e.g. 1000) iterations, thereby releasing the locks.
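
    A rough sketch of that second idea: on 9.1 you can't COMMIT inside a DO block, so one workaround is to generate one ALTER statement per object and feed the file to psql with autocommit, so every statement commits (and releases its lock) on its own. The database name, owner and file name below are placeholders:

    # generate one ALTER statement per large object
    sudo -u postgres psql -At -d portal4 -c "SELECT 'ALTER LARGE OBJECT ' || loid || ' OWNER TO owners_username;' FROM (SELECT DISTINCT loid FROM pg_catalog.pg_largeobject) t" > fix_lo_owner_batch.sql

    # run the generated file without an explicit transaction; each ALTER autocommits
    sudo -u postgres psql -d portal4 -f fix_lo_owner_batch.sql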
