Automatic Oracle performance monitoring for Pro*C programs

Have you ever wished you had more trace data in your Pro*C based project?
Probably yes, but you were scared off by the amount of work it would take to get useful data.
That is usually the point where post-processing or some smart macros come in.

This is when sqlTrace comes to the rescue :-)
There is no need to change your *.pc files.
You just need to embed the postprocessor into your makefile and link against sqlTrace.o.

Example run
[oracle@vbgeneric sqlTrace]$ ./sample scott/tiger

Connected to ORACLE
Enter enter number (0 to quit): 1

Enter enter number (0 to quit): 2

Enter enter number (0 to quit): 3

Enter enter number (0 to quit): 1x
ORA-01722: invalid number 
=> oops, program terminates
=> First it will show the list of SQL statements that took more than 1% of the time
0.058487130 1 sample.pc:32

0.002066788 4 sample.pc:47 select 1 into :b0  from dual where 1=:b1
0.000885475 1 sample.pc:17

=> Then it will show the last 20 SQL statements with their SQLCODEs
SQLCODE:00000 sample.pc:32
SQLCODE:00000 sample.pc:47 select 1 into :b0  from dual where 1=:b1
SQLCODE:01403 sample.pc:47 select 1 into :b0  from dual where 1=:b1
SQLCODE:01403 sample.pc:47 select 1 into :b0  from dual where 1=:b1
SQLCODE:-1722 sample.pc:47 select 1 into :b0  from dual where 1=:b1
SQLCODE:00000 sample.pc:17

Download the code: https://github.com/frankth/sqlTrace


Secure distributed development despite NSA spying

In 2009 I wanted to set up an environment for some distributed development with two friends of mine. As this was not a public project, our requirements were:
  • No central server with our source code in unencrypted form.
  • Local development should be possible without Internet connection.
Today I might use encfs and Dropbox, but at that time I went a different way.
I used git to store the repositories, dar to make encrypted differential backups in 1 MB chunks, and a Perl script to build them together with an info file containing a checksum. At our first meeting we created a secret symmetric key with
dd if=/dev/random bs=1 count=32 | base64
You may have to move your mouse around to get enough entropy for 32 bytes. But 32 bytes should keep eavesdroppers from getting useful information for a loooong time.
The backup and restore scripts make sure that the backups have sequential numbers so that you can't restore backup #47 from user alex if you haven't restored his #46 before. Here are the backup and restore scripts.
~/.mkBack.pm should contain something like this:
$me = "frank"; # You should know your name :-)
# replace your-secret with the output of
# dd if=/dev/random bs=1 count=32 | base64

With this setup, repositories are located under ~/git/sync-repos, in a subdirectory for every user you receive backups from, plus one for yourself. So mine are under ~/git/sync-repos/frank. I push to ~/git/sync-repos/frank/example.git and merge from ~/git/sync-repos/alex/example.git to get changes from Alex.

mkBack.pl creates a backup in a newly created directory under $backupBase and tells you what you should mail to your friends.
Your friends should save the files to any directory and run
mkRestore.pl directory
where directory defaults to the current directory.


Fun with Pro*Cob

When we moved a very large Cobol application from MVS and DB2 to Solaris and Oracle, the question was what to do with all the Cobol code that checked the SQL codes. In our case the programmers didn't use a copybook with all the constants they needed. Most of the time the codes appeared there as numeric literals. Sometimes SQLCODE got stored in other variables and checked later.

As you might already know, I don't fear patching things. So I decided to look at the code generated by procob. That looked very good. All I had to do was add a call to a C function after every call to SQLBEX. In that function I saved SQLCODE to ORA-SQLCODE and put the mapped DB2 code into SQLCODE. While I was at it, I enhanced it a bit more: I printed a stack trace if there was an "unusual" SQLCODE, and in case of a rollback I printed the last 20 statements.

I also added a call to a C function before SQLBEX to save a hires timestamp. That way I could count the calls and the time spent in every server call (sql open, fetch, ...). Using an atexit handler I could walk through a list of all used SQL statements and log every statement (with the source file name and line number as they were before procob ran) that took more than 1% of the runtime of the program, ordered by time used. So if someone came up with "this and that job went very slow last night", I could easily see which statement caused the problem by looking at the last few lines of the log. That was a real killer feature.


Weak sectors on CDs

Here is a very good explanation of the weak sector problem on CDs:
http://ixbtlabs.com/articles2/magia-chisel/. It is used in some copy protection schemes like SafeDisc. It reminded me of weak data I produced on the C64, where GCR encoding was done in software. So you could easily write something that was not well formed. Doing this led to sectors that contained different data every time you read them. Very funny. And it confused every copy program I know of.


I love Oracle

I have to admit it. But it was not love at first sight when I met version 7. I think that was mostly because it was partly misused in that project for things that should have been done with standard Unix features.
Things I like about Oracle:
  • + Multi-version read consistency
  • + No limit to the number of locks
  • + It always uses row locks where useful
  • + No lock escalation (row->page->table)
  • + Fast startup, even if you have a large uncommitted transaction. I could tell you a funny story about a DB2 instance doing recovery at start-up for more than 24 hours.
  • + Nice web interface to pinpoint most problems easily. (Although the old non-web Enterprise Manager was much faster.)
  • + Very nice tools for developers to find performance problems (statspack, automatic monitoring with AWR, trace events!)
  • + Very good support. Although the first answer is normally kind of a standard question, I always got to the point where they acknowledged the bug and fixed it.
  • + They work hard to make it a better product even at the core level.
Some of the things I don't like:
  • - AQ is slow as hell. We used AQ heavily in one part of our application with small messages. When I looked at a statspack report I couldn't believe it: 700 buffer_gets_per_exec on average to fetch a single message!
  • - Sometimes it's hard to find out what you have to license. Although I like the fact that I can try out every feature, I would like to be able to limit its functionality to just the things that I licensed or plan to license.


Blitter kills

The Amiga was an absolutely marvellous machine. It had such fantastic special hardware for its time. It was incredible. Two pieces of it were the Copper and the Blitter. A funny trick to hinder debuggers was to set up the Copper to tell the Blitter to fill the memory of your running program with 0x00 the next time the cathode ray reached a specific position. Your program regularly stopped the Copper from doing this. But if you stopped the CPU with a debugger, the Copper and Blitter continued their work and killed your program or other important things.


Borland C's affair with the Amiga

That was a very funny project of mine at the time. The point was that I was very excited about Borland's C compiler for the Atari ST. It produced 68000 code that was way better than that of any other compiler for the Amiga. It didn't produce assembler text files but binary code directly, so the compiler was fast as well. So I decided to port it to the Amiga. Well, I had no source code. I converted the compiler's object format and wrote a wrapper that handled the traps used on Atari's GEM and emulated them as needed. I thought the first trap would open the source file. Wrong! The first trap read the system time to initialise the __DATE__ and __TIME__ macros. I patched the routine that produced the object file to emit the symbol table, relocation information and so on in the Amiga format.
After everything was working I went to the next CeBIT, walked up to the Borland booth and looked for someone with no tie (you know, the techies). I told him what I did and asked him whether they would want to sell their compiler into the Amiga market. He said "No", because they would have to support it and they didn't want to go into the Amiga market. But they wouldn't mind if I sold this add-on, as long as the original compiler was still needed.
Well, I never did, and so it was only ever used by very few people.