Tuesday, February 10, 2015

A Perl wrapper for Oracle's UTL_FILE package

I finally found time to write a simple wrapper for Oracle's UTL_FILE package that allows a file on the database server to be read with Perl.

Here's a simple perl script that demonstrates its use:

use warnings;
use strict;

use OracleTool qw(connect_db);
use OracleTool::UtlFile;

my $dbh = connect_db('username/password@database') or die;

my $f = OracleTool::UtlFile->fopen($dbh, 'DIR', 'file.txt', 'r');

my $line;
while ($f->get_line($line)) {
  print "$line\n";
}

The code is on github: OracleTool.pm and UtlFile.pm.
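Under the hood, such a wrapper essentially just executes small anonymous PL/SQL blocks through DBI. The following is a minimal, self-contained sketch of the idea, independent of OracleTool (the connect string, directory object and file name are placeholders, and unlike the wrapper it only reads the first line):

```perl
use strict;
use warnings;
use DBI;

# Connect string, directory object and file name are assumptions.
my $dbh = DBI->connect('dbi:Oracle:database', 'username', 'password',
                       { RaiseError => 1 });

# Anonymous PL/SQL block that opens a server-side file with UTL_FILE,
# reads one line and returns it through a bind variable:
my $sth = $dbh->prepare(q{
  declare
    f utl_file.file_type;
  begin
    f := utl_file.fopen(:dir, :name, 'r');
    utl_file.get_line(f, :line);
    utl_file.fclose(f);
  end;
});

my $line;
$sth->bind_param      (':dir' , 'DIR'     );
$sth->bind_param      (':name', 'file.txt');
$sth->bind_param_inout(':line', \$line, 32767);
$sth->execute;

print "$line\n";
```

The real wrapper keeps the UTL_FILE handle open between calls so that get_line can be called repeatedly, which is what makes the while loop above work.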

Checking the value of NLS_LANG in SQL*Plus on Windows

Oracle Support Note 179113.1 offers a, ahem, clever way to display the value used for NLS_LANG in SQL*Plus on Windows.

First, check whether the environment variable NLS_LANG is set:

SQL> host echo %NLS_LANG%

SQL*Plus will answer with either something similar to

AMERICAN_AMERICA.WE8MSWIN1252
or with
%NLS_LANG%

In the first case, the environment variable is set, and its value, as displayed by the echo command, is the value for NLS_LANG.

If the variable is not set, that is, in the second case, the following trick allows its value to be determined nonetheless:

SQL> @.[%NLS_LANG%].

Again, there are two possibilities for how SQL*Plus will react. Either

SP2-0310: unable to open file ".[AMERICAN_AMERICA.WE8ISO8859P1]..sql"
or
SP2-0310: unable to open file ".[%NLS_LANG%]."

In the first case, the value for NLS_LANG is set in the Windows registry (to the value between [ and ]). In the second case, NLS_LANG is not even set in the Windows registry.
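The registry value can also be inspected directly from the command line. This is my own addition, not part of the support note; it searches the Oracle registry tree for NLS_LANG:

```text
C:\> reg query "HKLM\SOFTWARE\ORACLE" /s /f NLS_LANG
```

Note that on 64-bit Windows with a 32-bit client, the relevant key lives under HKLM\SOFTWARE\Wow6432Node\ORACLE instead.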

Incidentally, the same seems to be achieved much more easily like so:

SQL> select sys_context('userenv', 'language') from dual;
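This query returns the session's settings in the familiar language_territory.characterset format of NLS_LANG. The output looks something like the following (my expected output, not taken from the note; the actual value depends on the session):

```text
SYS_CONTEXT('USERENV','LANGUAGE')
--------------------------------------------------
AMERICAN_AMERICA.WE8MSWIN1252
```

Strictly speaking, it shows what the session ended up with after NLS_LANG (and any ALTER SESSION statements) were applied, which is usually exactly what one wants to know.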

Saturday, February 7, 2015

Little things that make life easier #8: Using ghostscript to merge PDFs

Today, a friend asked me if I could join multiple PDFs into one. Apparently, she tried it with one of the various online mergers, but without success.

A quick Google search brought me to the always helpful superuser.com site, particularly this answer.

In short, multiple PDFs can be joined like so:

c:\> gswin32c.exe -dBATCH -dNOPAUSE -sDEVICE=pdfwrite ^
       -sOutputFile="final.pdf" ^
       1.pdf 2.pdf 3.pdf ... n.pdf

Thursday, February 5, 2015

Creating an (import-) SQL file with DBMS_DATAPUMP

DBMS_DATAPUMP can create SQL files from a schema so that these files can later be run to re-create the schema.

This is described in Oracle Note 1519981.1: How to Generate A SQL File Using The DBMS_DATAPUMP_API?. Unfortunately, the note does not explicitly state that the creation of such an SQL file consists of two steps: first, the schema has to be dumped ordinarily; then, the dumped file has to be turned into the desired SQL file.

Here are the steps to create such an SQL file.

First step: creating the dump file

declare
  datapump_job number;
  job_state    varchar2(20);
begin
  datapump_job := dbms_datapump.open(
    operation   => 'EXPORT',
    job_mode    => 'SCHEMA',
    remote_link => null,
    job_name    => 'Export dump file',
    version     => 'LATEST'
  );

  dbms_output.put_line('datapump_job: ' || datapump_job);

  dbms_datapump.add_file(
    handle    => datapump_job,
    filename  => 'export.dmp',
    directory => 'DATAPUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_dump_file);

  dbms_datapump.start_job(
    handle       => datapump_job,
    skip_current => 0,
    abort_step   => 0);

  dbms_datapump.wait_for_job(datapump_job, job_state);
  dbms_output.put_line('Job state: ' || job_state);

  dbms_datapump.detach(datapump_job);
end;
/

Second step: turning the dump file into an SQL file

declare
  datapump_job number;
  job_state    varchar2(20);
begin
  datapump_job := dbms_datapump.open(
    operation   => 'SQL_FILE',
    job_mode    => 'SCHEMA',
    remote_link => null,
    job_name    => 'Export SQL file',
    version     => 'LATEST'
  );

  dbms_output.put_line('datapump_job: ' || datapump_job);

  dbms_datapump.add_file(
    handle    => datapump_job,
    filename  => 'export.dmp',
    directory => 'DATAPUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_dump_file);

  dbms_datapump.add_file(
    handle    => datapump_job,
    filename  => 'schema.sql',
    directory => 'DATAPUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_sql_file);

  dbms_datapump.start_job(
    handle       => datapump_job,
    skip_current => 0,
    abort_step   => 0);

  dbms_datapump.wait_for_job(datapump_job, job_state);
  dbms_output.put_line('Job state: ' || job_state);

  dbms_datapump.detach(datapump_job);
end;
/
Source code on github

My question on dba.stackexchange.com

Cloning an Oracle schema with DBMS_DATAPUMP

Wednesday, February 4, 2015

Cloning an Oracle schema with dbms_datapump

I have a schema (FROM_SCHEMA_A) that I need to clone on the same database. So, the cloned schema will go by another name, in my case TO_SCHEMA_A.

In order to make things a bit more complicated, FROM_SCHEMA_A references objects in another schema (named SCHEMA_B). This other schema won't be cloned.

Schema definitions

Before starting with the object definitions in the schemas, I create the needed schemas:
create user from_schema_A identified by p
  quota unlimited on users;

grant create procedure, create session,
      create table, create trigger,
      create view
   to from_schema_A;

create user schema_B identified by p
  quota unlimited on users;

grant create session, create table to schema_B;

Here's the definition for the SCHEMA_B schema. It consists of one table only:

create table table_b_1 (
  a number,
  b varchar2(10)
);

Since FROM_SCHEMA_A references this table, it needs some grants:

grant insert, select on table_b_1 to from_schema_A;

Here's the definition for the FROM_SCHEMA_A schema:

create table table_a_1 (
  c number,
  d varchar2(20)
);

insert into table_a_1 values (10, 'ten'   );
insert into table_a_1 values (11, 'eleven');

create view view_a_1 as
select * from schema_b.table_b_1;

create package package_a_1 as
  function count_a return number;
  function count_b return number;
end package_a_1;
/

create package body package_a_1 as

  function count_a return number is
    ret number;
  begin
    select count(*) into ret from table_a_1;
    return ret;
  end count_a;

  function count_b return number is
    ret number;
  begin
    select count(*) into ret from view_a_1;
    return ret;
  end count_b;

end package_a_1;
/

create trigger trigger_a
before insert on table_a_1
for each row
begin
  insert into schema_b.table_b_1 values (:new.c, :new.d);
end trigger_a;
/

-- There's a trigger on the table, so the
-- following inserts should fill table_b_1
-- (in schema_b):
insert into table_a_1 values (1, 'one');
insert into table_a_1 values (2, 'two');

Export/Import administrator

In order to perform the export and the import, I create a dedicated export/import administrator:

create user exp_imp_admin identified by p;
grant exp_full_database, imp_full_database to exp_imp_admin;
alter user exp_imp_admin quota unlimited on users;

Performing the export and import

With that user, I am able to export the schema:
connect exp_imp_admin/p

declare
  datapump_job number;
  job_state    varchar2(20);
begin
  datapump_job := dbms_datapump.open(
    operation   => 'EXPORT',
    job_mode    => 'SCHEMA',
    remote_link => null,
    job_name    => 'Clone schema A, export',
    version     => 'LATEST',
    compression => dbms_datapump.ku$_compress_metadata
  );

  dbms_output.put_line('datapump_job: ' || datapump_job);

  dbms_datapump.metadata_filter(
    handle => datapump_job,
    name   => 'SCHEMA_LIST',
    value  => '''FROM_SCHEMA_A'''
  );

  dbms_datapump.add_file(
    handle    => datapump_job,
    filename  => 'clone_schema_a_export.log',
    directory => 'DATAPUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_log_file);

  dbms_datapump.add_file(
    handle    => datapump_job,
    filename  => 'clone_schema_a.dmp', -- Note: the file will be created with an UPPERCASE name!
    directory => 'DATAPUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_dump_file);

  dbms_datapump.start_job (datapump_job);
  dbms_datapump.wait_for_job(datapump_job, job_state);
  dbms_output.put_line('Job state: ' || job_state);

  dbms_datapump.detach(datapump_job);

exception when others then
  dbms_output.put_line(sqlcode);
  dbms_output.put_line(sqlerrm);
  if datapump_job is not null then
     dbms_datapump.detach(datapump_job);
  end if;
end;
/

And the following import actually clones the schema. Of particular interest is the call to dbms_datapump.metadata_remap:

declare
  datapump_job number;
  job_state    varchar2(20);
begin
  datapump_job := dbms_datapump.open(
    operation   => 'IMPORT',
    job_mode    => 'SCHEMA',
    remote_link => null,
    job_name    => 'Clone schema A, import',
    version     => 'LATEST',
    compression => dbms_datapump.ku$_compress_metadata
  );

  dbms_output.put_line('datapump_job: ' || datapump_job);

  dbms_datapump.metadata_remap(
    datapump_job,
    'REMAP_SCHEMA',
    'FROM_SCHEMA_A',
    'TO_SCHEMA_A');

  dbms_datapump.add_file(
    handle    => datapump_job,
    filename  => 'clone_schema_a_import.log',
    directory => 'DATAPUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_log_file);

  dbms_datapump.add_file(
    handle    => datapump_job,
    filename  => 'clone_schema_a.dmp', -- Note: export has created the file with UPPERCASE letters
    directory => 'DATAPUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_dump_file);

  dbms_datapump.start_job (datapump_job);
  dbms_datapump.wait_for_job(datapump_job, job_state);
  dbms_output.put_line('Job state: ' || job_state);

  dbms_datapump.detach(datapump_job);

exception when others then
  dbms_output.put_line(sqlcode);
  dbms_output.put_line(sqlerrm);
  if datapump_job is not null then
     dbms_datapump.detach(datapump_job);
  end if;
end;
/
Creating an (import-) SQL file with DBMS_DATAPUMP
Source code on github

Monday, January 12, 2015

PL/SQL: FOR r IN (SELECT ... INTO ) ... Hmm?

Today, I came across a funny piece in some sort of legacy code that I think should not compile:
for r in (
  select i, t
    into rec.i, rec.t -- Hmm???
    from tq84_foo
) loop
  ....
end loop;

My understanding is that the INTO clause should not be valid because the select statement is embedded in a FOR r IN (SELECT ...) loop. Yet, it compiles, at least in Oracle 11g.

Here's a little test script for demonstration purposes:

create table tq84_foo (
  i number,
  t varchar2(10)
);

insert into tq84_foo values (1, 'abc');
insert into tq84_foo values (2, 'def');

declare
  rec tq84_foo%rowtype;
begin
  for r in (
    select i, t
      into rec.i, rec.t -- Hmm???
      from tq84_foo
  ) loop
    dbms_output.put_line('rec: i= ' || rec.i || ', t=' || rec.t);
  end loop;
end;
/

drop table tq84_foo purge;

The output is

rec: i= , t=
rec: i= , t=

So rec.i and rec.t are never actually assigned; the INTO clause is apparently parsed but then silently ignored.
My corresponding question on stackoverflow

Tuesday, January 6, 2015

Little things that make life easier #7: set relativenumber in vim

Today, I stumbled upon an option in vim that I think will make my life easier: :set relativenumber

With this option enabled, vim shows, for each line, how many lines it is away from the line with the cursor:

So, if I need to quickly jump to the select statement, I can type 9j (9 because I see that it is 9 lines down, and j because that moves the cursor down).
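To illustrate, here is a mock-up of such a buffer (made up by me, not the original screenshot): the left-hand column shows the relative distances, the cursor sits on the line displayed as 0, and the select statement shows up as 9 lines below it:

```text
  3  create table t (
  2    a number
  1  );
  0  insert into t values (1);   <-- cursor line
  1  insert into t values (2);
  2
  3  begin
  4    dbms_output.put_line('x');
  5  end;
  6  /
  7
  8  drop index t_ix;
  9  select * from t;            <-- 9j jumps here
```

(With :set number also enabled, newer vim versions display the absolute line number on the cursor line instead of 0.)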